Search Results

Search found 1274 results on 51 pages for 'pros'.

Page 16/51

  • Should a Perl constructor return an undef or an "invalid" object?

    - by DVK
    Question: What is considered "best practice" - and why - for handling errors in a constructor? "Best practice" can be a quote from Schwartz, or "50% of CPAN modules do it this way", etc.; but I'm happy with a well-reasoned opinion from anyone, even if it explains why the common best practice is not really the best approach. As far as my own view of the topic goes (informed by many years of software development in Perl), I have seen three main approaches to error handling in a Perl module (listed from best to worst in my opinion):
    1. Construct an object and set an invalid flag (usually exposed via an "is_valid" method), often coupled with setting an error message via your class's error handling.
       Pros: Allows for error handling that is standard compared to other method calls, since you can make $obj->errors() type calls after a bad constructor just as after any other method call. Allows additional info to be passed along (e.g. more than one error, warnings, etc.). Allows for lightweight "redo"/"fixme" functionality; in other words, if the constructed object is very heavy, with many complex attributes that are 100% always OK, and the only reason it is not valid is that someone entered an incorrect date, you can simply call $obj->setDate() instead of paying the overhead of re-executing the entire constructor. This pattern is not always needed, but can be enormously useful in the right design.
       Cons: None that I'm aware of.
    2. Return undef.
       Cons: Cannot achieve any of the pros of the first solution (per-object error messages outside of global variables, and the lightweight "fixme" capability for heavy objects).
    3. Die inside the constructor.
       Outside of some very narrow edge cases, I personally consider this an awful choice, for too many reasons to list in the margins of this question.
    UPDATE: Just to be clear, I consider the (otherwise very worthy and great) design of a very simple constructor that can't fail at all plus a heavy initializer method where all the error checking occurs to be merely a subset of either case #1 (if the initializer sets error flags) or case #3 (if the initializer dies) for the purposes of this question. Obviously, by choosing such a design you automatically reject option #2.
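    A minimal sketch of approach #1 may help make the trade-off concrete. It is written in Python rather than Perl purely for illustration; the Order class, the field names and the crude 10-character date check are all invented for the example:

        class Order:
            """Approach #1: always construct, flag validity, expose errors."""
            def __init__(self, date, items):
                self._errors = []
                self.items = items        # stand-in for heavy, always-valid attributes
                self.date = None
                self.set_date(date)       # the one cheap field that can be wrong

            def set_date(self, date):
                # lightweight "fixme": re-validate one field without rebuilding the object
                self._errors = [e for e in self._errors if not e.startswith("invalid date")]
                if isinstance(date, str) and len(date) == 10:
                    self.date = date
                else:
                    self._errors.append("invalid date: %r" % (date,))

            def is_valid(self):
                return not self._errors

            def errors(self):
                return list(self._errors)

        order = Order("bad-date", ["book", "pen"])
        if not order.is_valid():
            print(order.errors())         # same error API as any other method call
            order.set_date("2010-05-01")  # fix the bad field instead of re-constructing
        print(order.is_valid())           # True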

    Read the article

  • svn dev cycle: how to handle lots of minor "features" pending approval

    - by Julian Davchev
    Hi, I've read similar questions about this but still feel the need to ask. I have a scenario where we have lots of tiny "features" pending approval. I generally see two approaches:
    1. Keep trunk solid and have tons of branches, one for each tiny "feature" - basically every new thing is a branch.
       Cons: Might become a nightmare to support so many branches, no matter how small each change is (keeping all the branches in sync, etc.). The worst con I see is setting up the test system so one can easily examine changes for approval (it would basically need to support all branches, which seems insane).
       Pros: Seemingly easy, once a branch is approved, to merge it back to trunk, tag a new release and deploy.
    2. Create a branch only for big features; small changes go directly into a (relatively stable) trunk.
       Pros: Easier to set up the test system, as most of the time everything will be directly visible; for big features it should be easy to maintain a separate branch on test.
       Cons: I don't really see how a release would go - I would not be able to release only one part of trunk. That would involve cherry-picking, which is crazy to follow.
    Another approach is to simply enforce that after some time (a week or so) all small features must be approved so they can be deployed before new tasks are handed out. I would just create a release branch, and either all or none of the small features go live. That will be some fun discussion with the head people. I guess having lots of small pending changes is just very hard to manage technically.
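    For the release-branch idea at the end, the Subversion commands would look roughly like the following (a sketch only; the branch names and the revision number are invented, and ^/ is the usual shorthand for the repository root):

        # cut a release branch containing everything currently approved on trunk
        svn copy ^/trunk ^/branches/release-2010-05 -m "Release branch for this week's approved features"

        # if one small feature is rejected late, back out just its revision on the release branch
        # (run inside a working copy of the release branch)
        svn merge -c -1234 ^/branches/release-2010-05

        # tag and deploy the result
        svn copy ^/branches/release-2010-05 ^/tags/2010-05-release -m "Tag the approved release"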

    Read the article

  • Benefits of migrating my work to a new web development framework?

    - by John
    When I first started programming with PHP, I was ignorant of the other PHP frameworks (like CodeIgniter, CakePHP, etc.), so I fell into the trap of re-inventing wheels, which had the benefit of being "fun" and "educational". Over time, I discovered other open source products that I found useful, like the Smarty templating engine, the jQuery library, the TCPDF library, FDF, etc., so I started bundling these technologies along with things I've built over the years into a LAMP development framework to make life easier for myself. This past year, I've been having fun developing on the CodeIgniter framework. It does many of the things I do in my framework, and coding in CI feels natural because its MVC and ORM feel similar to the MVC and ORM of my framework. So now I'm contemplating migrating a lot of the plugins in my framework over to CI. The pros and cons I can think of for such a project are:
    Pros:
    - benefit from the vast community of CI developers
    - lots of other developers will be familiar with it
    - better documentation
    Cons:
    - I've built a lot of useful plugins against my own framework, and it will take a lot of time to move even just the essential ones
    - at the moment, I still work faster against my own framework than against CI, just because I'm more familiar with it
    - even if I did migrate to CI, there will always be newer and better frameworks in the near future, and I'll be contemplating this scenario again
    So my question is the following: perhaps I should instead leave my old framework as is, and for each new project I receive, decide whether the requirements are best served by developing with CI or with my own framework. Is this the right approach?

    Read the article

  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go for MySQL as the database for several reasons, but I'm worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it. The first thing is that accessing data should be easy. So I thought I should use LINQ against MySQL, but somewhere I read that it is not directly supported, although the MySQL .NET connector adds support for Entity Framework. I don't know what the pros and cons of that are. I would love to implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer - will that be possible if I use Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything else and use LINQ to SQL on SQL Server. I'm also concerned about performance: someone told me that if we use Entity Framework it fetches a lot of data and then filters it. Is that right? So the questions basically are:
    1. Is LINQ against MySQL possible? If yes, where can I get more details on it?
    2. What are the pros and cons of using Entity Framework with MySQL?
    3. Will it be easy to access data using Entity Framework with MySQL?
    4. Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use Entity Framework with MySQL)?
    5. Does it fetch a whole lot of data from the database and then apply filters to it?
    If this sounds like too many questions, then just let me know what you would do (with a considerable reason) in this situation as an experienced person in this area; that should answer my question.
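    On questions 4 and 5, the crux is whether a filter handed over by the logic layer is translated into the database query or applied in memory after everything has been fetched; a LINQ provider composing IQueryable expressions (Entity Framework included) is designed to do the former. The sketch below shows the same contrast in neutral terms - plain Python with an in-memory SQLite database standing in for MySQL, and all names invented for the example:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE products (id INTEGER, name TEXT, price REAL)")
        conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                         [(1, "pen", 2.5), (2, "book", 12.0), (3, "mug", 7.0)])

        class ProductRepository:
            def __init__(self, conn):
                self.conn = conn

            def cheaper_than(self, max_price):
                # filter pushed down into the query: only matching rows are fetched
                cur = self.conn.execute(
                    "SELECT id, name, price FROM products WHERE price < ?", (max_price,))
                return cur.fetchall()

            def all_then_filter(self, max_price):
                # the pattern the question worries about: fetch everything, filter in memory
                cur = self.conn.execute("SELECT id, name, price FROM products")
                return [row for row in cur.fetchall() if row[2] < max_price]

        repo = ProductRepository(conn)
        print(repo.cheaper_than(10))      # [(1, 'pen', 2.5), (3, 'mug', 7.0)]
        print(repo.all_then_filter(10))   # same result, but every row crossed the wire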

    Read the article

  • ASP.NET confusion - server controls

    - by Brandi
    I have read through the information in this question: http://stackoverflow.com/questions/22084/asp-net-aspxxx-controls-versus-standard-html but am still rather confused. The situation was that I was asked to do a web project in which I made a wizard. When I was done with the project, everyone asked why I had used an <asp:Wizard...>. I thought this was what was being asked for, but apparently not, so after this I was led to believe that server controls were just prototyping tools. However, on the next project I did my DB queries through C# code-behind and loaded the results via HTML, and I was then asked why I had not used a GridView and a DataSet. Does anyone have a list of pros and cons for why they would choose specific HTML controls over specific server controls, and why? I guess I'm looking for a list: which server controls are okay to use, and why?
    EDIT: I guess this question is open ended, so I'll clarify with a few more specific questions:
    - Is it okay to use very simple controls such as asp:Label, or do these just end up wasting space? It seems like it would be difficult to access the HTML in the code-behind otherwise.
    - Are there a few controls that should just never be used?
    - Does anyone have a good resource that will show me the pros and cons of each control?

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):
        for each message in message_sequence                      <- SEQUENTIAL
            for each handler in (handler_table for message.type)  <- SEQUENTIAL
                apply handler to message
    The first approach I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.
    Pros:
    - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message
    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache, since message objects need to be copied for each handler to use
    - large overhead for small handlers
    The pseudocode of this approach would be as follows:
        for each message in message_sequence                               <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)  <- PARALLEL
                apply handler to message
    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.
    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater
    Cons:
    - unpredictable ordering: if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)
    The pseudocode is as follows:
        parallel_for each message in message_sequence             <- PARALLEL
            for each handler in (handler_table for message.type)  <- SEQUENTIAL
                apply handler to message
    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
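    To make the two layouts concrete, here is a small sketch using Python's concurrent.futures rather than TBB (the message types, payloads and handlers are invented for the example; a TBB version would use parallel_for / parallel_for_each in the same shape):

        from concurrent.futures import ThreadPoolExecutor, wait

        handler_table = {
            "price": [lambda m: print("log", m), lambda m: print("audit", m)],
            "trade": [lambda m: print("risk", m)],
        }
        messages = [("price", 101.5), ("trade", 7), ("price", 99.0)]

        def approach_1(pool):
            # messages strictly in order; the handlers of each message run in parallel
            for mtype, payload in messages:
                futures = [pool.submit(h, payload) for h in handler_table[mtype]]
                wait(futures)                 # barrier preserves FIFO ordering between messages

        def approach_2(pool):
            # messages processed in parallel; handlers applied sequentially per message
            def process(msg):
                mtype, payload = msg
                for h in handler_table[mtype]:
                    h(payload)                # message stays local to one worker (cache-friendly)
            list(pool.map(process, messages)) # completion order is non-deterministic

        with ThreadPoolExecutor(max_workers=4) as pool:
            approach_1(pool)
            approach_2(pool)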

    Read the article

  • How can I cluster short messages [Tweets] based on topic? [Topic Based Clustering]

    - by Jagira
    Hello, I am planning an application which will cluster short messages/tweets based on topics. The number of topics will be limited, like Sports [NBA, NFL, Cricket, Soccer], Entertainment [movies, music] and so on. I can think of two approaches to this:
    1. Ask users to tag messages the way Stack Overflow does. Users can select tags from a predefined list, and on the server side I cluster the messages based on those tags.
       Pros: Simple design; less complexity in the code.
       Cons: Choices for users will be restricted, and the clusters will not be dynamic - if a new event occurs, the predefined tags will miss it.
    2. Take the message, delete the stopwords (predefined in a dictionary) and apply some clustering algorithm to form a cluster; depending on its popularity, display the cluster. A cluster is kept alive according to its sustained popularity, and new messages are skimmed and assigned to the corresponding clusters.
       Pros: Dynamic clustering based on the popularity of the event/accident.
       Cons: Increased complexity; more server resources required.
    I would like to know whether there are any other approaches to this problem, or whether there are ways of improving the above methods. Please also suggest some good clustering algorithms; I think the "K-Nearest Clustering" algorithm is apt for this situation.
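    As a rough illustration of what approach #2 could look like, here is a sketch using scikit-learn (an assumption - the question does not name a library; the tweets and the choice of two clusters are invented for the example). Note that "k-nearest" usually refers to a classifier; k-means, used below, is the clustering counterpart, and an incremental variant such as MiniBatchKMeans would suit the "assign new messages to existing clusters" step better:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        tweets = [
            "NBA finals tonight, huge game",
            "That movie soundtrack was amazing",
            "Cricket world cup schedule announced",
            "New album drops friday, can't wait",
        ]

        vectorizer = TfidfVectorizer(stop_words="english")   # stopword removal, as described
        X = vectorizer.fit_transform(tweets)

        kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
        labels = kmeans.fit_predict(X)

        for label, tweet in zip(labels, tweets):
            print(label, tweet)                               # cluster id next to each tweet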

    Read the article

  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc. I want to make a single "Web Application" project which includes all the pages for the app and then references various projects containing the database and business logic. However, some of the people on the project want a separate project for the pages of each module. To make this clearer, here is the layout I'm advocating:
        /WebInterface*
        /SecurityLib
        /ExecutionLib
        etc...
    And here is what they are advocating:
        /SecurityInterface*
        /SecurityLib
        /ExecutionInterface*
        /ExecutionLib
        etc...
    (* = project will be published to a virtual directory of IIS)
    Basically, what I'm looking for is the advantages of both approaches. Here is what I can think of so far:
    Single virtual directory - pros:
    - Modules can share a single MasterPage
    - Modules can share UserControls (this will be common)
    - Links to other modules are within the same virtual directory, and thus don't need to be fully qualified
    - Less chance of having incompatible module versions deployed together
    Multiple virtual directories - pros:
    - Can publish a new version of a single module without disrupting other modules
    - Modules are more compartmentalized; it's less likely that changes will break other modules
    I don't buy those arguments, though. First, using load-balanced servers (which we will have), we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then either there is an improper dependency, or the break will show up in the other module eventually, when the developers copy over the latest version of the UserControl, MasterPage or DLL. As a point of reference, there are about 10 developers on the project for about 50% of their time. The initial development will be about 9 months.

    Read the article

  • Keeping third-party libraries under a Mercurial project: Sub-repos or not?

    - by fraktal
    Hello, we are developing a closed-source project, versioned with Mercurial. We are using two libraries in our project:
    - One of those libraries is developed by a third party. They use git, and we usually just pull from their repo once a week to get the latest changes.
    - The other library is developed by ourselves and is under active development. It must live in its own public Mercurial repository, as it is licensed under the LGPL (it's a fork of a third-party LGPL component, ported to our platform).
    So my question is: how should I organize the source to ensure that:
    - a developer from our team can get all the source (main project + libraries) with a single "clone" command;
    - we can easily pull the latest changes from the libraries, even though one of them is managed with git?
    Should we use Mercurial's sub-repo functionality, with hg-git to access the library kept under git? Is it well supported by TortoiseHg and Bitbucket? (pros: easy to pull library changes / cons: does it work well?) Or should we keep only snapshots of the libraries under our project? (that is, when there are new upstream changes in a library, we pull them to a separate place and then copy the whole source into our project; pros: will work / cons: a pain, especially for the library developed by ourselves, which sees a lot of daily changes)
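    For reference, the sub-repo route is declared in an .hgsub file at the root of the main repository, along these lines (the paths and URLs are invented for the example; the [git] prefix marks a sub-repo that Mercurial manages with git, which needs git available on every developer's machine):

        libs/ourlib = https://bitbucket.org/ourteam/ourlib
        libs/thirdparty = [git]https://example.com/thirdparty.git

    With that committed, a single hg clone of the main repository also pulls the sub-repos it references, which covers the "one clone command" requirement; whether the git sub-repo behaves well with TortoiseHg is exactly the part worth testing first.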

    Read the article

  • How to handle product ratings in a database

    - by Mel
    Hello, I would like to know the best approach to storing product ratings in a database. I have in mind the following two (simplified) scenarios, assuming a MySQL db:
    Scenario 1: Create two columns in the product table to store the number of votes and the sum of all votes, and use those columns to compute an average on the product display page:
        products(productID, productName, voteCount, voteSum)
    Pros: I only need to access one table, and thus execute one query, to display both product data and ratings.
    Cons: Write operations will be executed against a table whose original purpose is only to furnish product data.
    Scenario 2: Create an additional table to store ratings:
        products(productID, productName)
        ratings(productID, voteCount, voteSum)
    Pros: Isolates ratings in a separate table, leaving the products table to furnish data on available products.
    Cons: I have to execute two separate queries on each product page request (one for the data and another for the ratings).
    In terms of performance, which of the two approaches is better: letting users execute an occasional write query against a table that will handle hundreds of read requests, or executing two queries on every product page but isolating the writes in a separate table? I'm a novice at database development and often find myself struggling with simple questions such as these. Many thanks,
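    Whichever table the two columns live in, the bookkeeping itself is the same: increment a count and a sum on each vote and derive the average at display time. A tiny sketch of that logic (plain Python, names invented; a dict stands in for the voteCount/voteSum columns):

        def record_vote(ratings, product_id, stars):
            count, total = ratings.get(product_id, (0, 0))
            ratings[product_id] = (count + 1, total + stars)   # one cheap write per vote

        def average_rating(ratings, product_id):
            count, total = ratings.get(product_id, (0, 0))
            return total / count if count else None            # derived, never stored

        ratings = {}
        record_vote(ratings, 42, 5)
        record_vote(ratings, 42, 3)
        print(average_rating(ratings, 42))   # 4.0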

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    "Is code review important and effective?" There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
    - Bugs are found faster
    - It forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun
    "Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." - Wikipedia
    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the approaches independently.
    Code review before check-in or code review after check-in?
    Approach 1 – Code review before check-in
    The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.
    (Image 1 – code review before check in)
    Pros:
    - Everything that gets committed to source control is reviewed.
    - Minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.
    Cons:
    - Development code freeze: since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent! Cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing, so a lot of effort is invested for very little gain.
    Approach 2 – Code review after check-in
    The developer checks in, and random code reviews are performed on the checked-in code.
    (Image 2 – code review after check in)
    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server.
    - Instruct the developer to ensure zero FxCop, StyleCop and static code analysis issues before check-in; the code is then cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.
    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.
    Approach 3 – Hybrid approach
    The community advocates a more hybrid approach: a blend of tooling and the human accountability quotient.
    (Image 3 – hybrid approach)
    1. Code review high-impact check-ins. It is not possible to review everything, and by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build.
    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day, and mandate a manual code review for individuals who keep making it onto this list of shame.
    3. During merge. I would prefer eliminating some of the other code issues during the merge from the main branch to the release branch. In a Scrum project this is still easier, because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.
    Let the tooling work for you: if someone breaks the CI build often, put them on gated check-in builds until you see improvement. If someone appears on the top 10 list of shame generated via the build, ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put until the review is complete.
    What do the experts say?
    So I asked a few experts what they thought of a "code review quality gate before checking in code":
    Terje Sandstrom | Microsoft ALM MVP
    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even having been through a CI build - and a green CI build is the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. Having a requirement that code is checked in in small pieces, 4-8 hours' work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release - but that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage and static code analysis. Unfortunately it takes some time (it would be great to have it on CIs, but...), so it's done scheduled every night. Based on this we get, among other things, top 10 lists of suspicious code, which are then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.
    David V. Corbin | Visual Studio ALM Ranger
    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics). The "pre-check-in" review is usually for elements that may impact the project as a whole; think of it as another "gate" along with passing unit tests. The "post-check-in" review may very well not be at the changeset level, but correlates to a review at the "user story" level. Again, this depends on the team dynamics in play.
    Robert MacLean | Microsoft ALM MVP
    I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. For example, those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.
    Abhimanyu Singhal | Visual Studio ALM Ranger
    It depends on the scenario. We recommend post-check-in reviews when: 1. we don't want to block other checks and processes on manual code reviews - manual reviews take time, and some pieces may not require manual reviews at all; 2. we need to trace all changes and track history; 3. we have a code promotion strategy/process in place - for risk mitigation, post-check-in code can be promoted to accepted branches, or rejected. Pre-check-in reviews are used when: 1. there is a high risk factor associated; 2. reviewers generally (most of the time) have immediate availability; 3. the team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.
    Thomas Schissler | Visual Studio ALM Ranger
    This is an interesting discussion; I'm right now discussing the details of executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well that I'd like to point out. 1. If you do reviews per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will lead to a reduced check-in frequency from the devs, which I would not accept. 2. If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed in a later check-in and the dev may have already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes from other PBIs.
    Jakob Leander | Sr. Director, Avanade
    In my experience, manual code review: 1. does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project); 2. when a project actually does it, they often do not do it right away, so errors pile up; 3. requires a lot of time discussing/defining the standard and for the team to learn it. However, code review is very important, since e.g. even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review:
    - Architects up front do "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security, etc.
    - The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to "go bad" even during later code changes.
    - Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard. This has HUGE benefits: you can easily tweak it to reach the level you desire together with the customer; it is easy to measure for both developers and management; it is 100% consistent across the code base; it gets validated all the time, so you never end up getting hammered by a customer review in the end; and it is easy to tell the developer that you do not want code back unless it has zero errors, which minimizes communication. You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the projects I run, I require code analysis to have run on code before check-in (a check-in rule). This means you have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds. You can change a few of the rules to compile as errors instead of warnings; I often do this for "missing dispose" issues, which you REALLY do not want in your app. Tip: place your custom CA rules files as part of the solution - that way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection, but since manual inspection in reality suffers from the three issues above, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.
    Tirthankar Dutta | Director, Avanade
    I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase. Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code; it scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design.
    Bruno Capuano | Microsoft ALM MVP
    For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx
    David Jobling | Global Sr. Director, Avanade
    Peer review is the only way to scale, and it's a great practice for everyone in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune the attention.
    Mikkel Toudal Kristiansen | Manager, Avanade
    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.
    Jesse Houwing | Visual Studio ALM Ranger
    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html I'd like to add a few more points:
    - Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-check-in policy. The peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
    - Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
    - Use tools to catch the most common errors. Code Analysis/FxCop was already mentioned; HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule - it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
    - If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
    So my philosophy:
    - Use tooling as much as possible.
    - Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
    - Under stress, tighten the process; it's under stress that the problems of late reviews really surface.
    - Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in versus post-check-in doesn't really matter, as long as the review is done before the code is released. It will just be much more expensive to fix any review outcomes the later you find them.
    I would love to hear what you think!

    Read the article

  • Confluence vs SharePoint

    - by FerranB
    We use Confluence mainly for documentation and want to take a step forward by moving all the files (PDFs, etc.) into Confluence, but we want to determine whether it's the best option. As far as I know, Confluence is a wiki and SharePoint is not. How do Confluence and SharePoint compare as file containers? What benefits does SharePoint have over Confluence, and vice versa? Pros and cons?

    Read the article

  • Comparison of Hyper-V, Hyper-V Server, VMware ESXi, Xen and Parallels Bare Metal (Community Wiki)

    - by Andrew J. Brehm
    Can we use this question to collect information on the pros and cons of each of the above products? Specifically, I am wondering whether there is any sane reason to use Hyper-V (the role built into Windows Server) over Hyper-V Server (the stand-alone product based on the same technology), what exactly the differences are between ESXi, Xen and Hyper-V, and why nobody seems to use Parallels Bare Metal. Make this a Community Wiki - I want comparisons, not reputation.

    Read the article

  • Linux Mint vs Kubuntu

    - by Hannes de Jager
    I'm currently running Kubuntu Karmic Koala and am eager to upgrade to 10.04 at the end of the month. But I've also spotted Linux Mint and have heard a couple of good things about it. It looks snazzy, but I was wondering how it compares to Ubuntu/Kubuntu. For those who have run both, can you provide some pros and cons?

    Read the article

  • Snow Leopard on Core Duo

    - by Brendan Foote
    The first run of MacBook Pros has Core Duo processors, whereas all the ones after that have Core 2 Duos. Apple says Snow Leopard only requires an Intel processor, but will a first-gen MacBook Pro get enough of the improvements to be worth upgrading? This is similar to the question about Snow Leopard on an old iBook, but it differs because this processor is supported by Apple, yet seems to run counter to the 64-bit theme of the upgrade.

    Read the article

  • MacBook Pro with Windows 7 - GPU always on

    - by Joonas Pulakka
    Übergizmo is reporting an issue with the new MacBook Pros' GeForce 330M GPU being always "on" under Windows 7, thus almost halving the battery life compared to OS X (which is somehow able to suspend that GPU and use the low-end integrated GPU to do the light work). Any solutions, or rumors of coming solutions?

    Read the article

  • Software load balancing fail-over vs hardware

    - by SmartLemon
    Please correct me if I'm wrong, but my understanding is that with software load balancing a service must run on each server, while there is one DS that notifies the other servers that a server has gone down and that they should take over that server's load. With hardware load balancing, what happens in a fail-over? Could someone explain? Are there advantages to using hardware load balancing when it comes to fail-over, or are there advantages to software? Or do they both have their pros and cons?

    Read the article

  • Managing SQL Server users via Active directory groups

    - by hyty
    I'm building a SQL Server instance for reporting purposes. My plan is to use AD groups for server and database logins. I have several groups with different roles (admin, developer, user, etc.), and I would like to map these roles to SQL Server database roles (db_owner, db_datawriter, etc.). What are the pros and cons of using AD groups for logins? What kinds of problems have you noticed?

    Read the article

  • Bonjour for Windows vs. SMB for printer sharing

    - by Ryan O
    My landlord would like to print to her printer, connected to her Mac, from the Windows machines in her house. (I'm unsure of the Windows versions, but I assume Vista or 7.) Looking at these docs from Apple, it sounds like I can set up the share via Bonjour for Windows or SMB. What are the pros and cons of doing it one way or the other? Has anybody who has tried both found one more reliable than the other, or is it pretty much a toss-up?

    Read the article

  • Using /etc/services for in-house well-known ports

    - by LavaScornedOven
    I couldn't find much about this, but I'm interested in the pros and cons (if any) of using /etc/services for in-house software. On my Linux distro (Ubuntu 14.04), at the end of /etc/services is a comment:
        # Local services
    hinting that it could be a good thing to do. One thing that comes to mind is that having in-house ports in /etc/services would make the services database a reference point for common knowledge and a much better source of default ports for applications throughout the system.
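    For illustration, in-house entries under that comment follow the same "name port/protocol # comment" format as the standard ones (the service names and ports below are invented):

        # Local services
        ourapp-api      8471/tcp        # in-house order API
        ourapp-events   8472/tcp        # in-house event bus

    Anything that resolves services through the C library then picks these up, e.g. getent services ourapp-api on the command line, or socket.getservbyname("ourapp-api") from Python.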

    Read the article

  • SELinux vs. AppArmor vs. grsecurity

    - by Marco
    I have to set up a server that should be as secure as possible. Which security enhancement would you use, and why: SELinux, AppArmor or grsecurity? Can you give me some tips, hints and pros/cons for those three? AFAIK:
    - SELinux: most powerful, but most complex
    - AppArmor: simpler configuration/management than SELinux
    - grsecurity: simple configuration thanks to auto-training; more features than just access control

    Read the article

  • VMware or Xen support for AIX on pSeries architecture

    - by A.Rashad
    I tried to find explicit confirmation on the VMware website of whether there is any chance we could virtualize AIX running on the pSeries architecture (P5, P6 and P7), but in vain. So far we have only one product available, which is PowerVM (an IBM product), but we are trying to find alternative solutions so we can evaluate pros and cons before taking any action. Xen mentions support for PowerPC, but for Linux, not AIX. I hope someone can give some insight on this matter.

    Read the article
