Search Results

Search found 16554 results on 663 pages for 'programmers identity'.


  • How do you differentiate between "box," "machine," "computer" and whatever else?

    - by Corey
    There seem to be a few terms for referring to a computer, especially in the tech world, and which term gets used seems to depend on technical expertise. When talking with people who have some technical knowledge, I'll refer to it as a machine. When talking to non-technical people (family, friends), I'll call it a computer. On the rare occasion I'm talking about servers, I might call it a box, but even then I'll probably still call it a machine. Is that just me, or are there established conventions for what to call a computer?

    Read the article

  • Catering for clients' web-hosting needs, minus the headaches?

    - by julien
    I'll be trying to sell my Ruby on Rails development skills to small local businesses. It seems I'd be shooting myself in the foot if I couldn't manage to put their apps into production; in fact, catering for this would be a selling point. However, I do not want to bill every client monthly for the cost of their hosting; they would have to be the contract holders with the hosting service, and I'd only consult if they needed technical help when scaling. I've looked, on the one hand, at cloud platforms like Engine Yard, which seem like they would be too costly for the smaller clients, and, on the other hand, at VPS providers, which seem like they would not be client-friendly enough. Has anyone faced the same issue and come up with a decent solution?

    Read the article

  • What about an introduction to programming with C# via LINQPad?

    - by Gulshan
    From various questions, answers, and articles on this and other sites, I got the idea that an introductory programming language should be high level and not too verbose. C# is one of the heavily used high-level languages these days. It's also multi-paradigm and a descendant of C, the lingua franca of all programming languages. So I think it has the potential to be an introductory programming language, but it felt a bit verbose to me for novice learners. Then LINQPad came to mind. With LINQPad, someone can start with C# without its verbosity, because you can run just one statement, a few statements, or a standalone function, and you can also run a full source file. Another thing it provides is SQL support, so it can be used for learning SQL too. And, not to mention, it's free. So, what do you think about the idea of introducing programming with C# via LINQPad? Anything to watch out for? Any suggestions?

    Read the article

  • Does using GCC-specific builtins qualify as incorporation within a project?

    - by DavidJFelix
    I understand that linking to a program licensed under the GPL requires that you release the source of your program under the GPL as well, while the LGPL does not require this. The terminology of the (L)GPL is very clear about this: #include "gpl_program.h" means you'd have to license your program under the GPL, because you're linking to GPL-licensed code, while #include "lgpl_program.h" leaves you free to license however you want, since the LGPL doesn't prohibit linking from differently licensed code.

    Now, my actual question, about what isn't clear: GCC itself is GPL-licensed, and merely compiling with GCC does not constitute "incorporation" into your program, as the GPL puts it. But does using builtin functions (which are specific to GCC) constitute "incorporation", even though you haven't explicitly linked to any GPL-licensed code? My intuition tells me that this isn't the intention, but legality isn't always intuitive. I'm not actually worried; I'm just curious whether this could be considered the case.

    As an aside, the reason for my equivocation is that GCC builtins like __builtin_clzl() or __builtin_expect() are technically an API and could be implemented another way. For example, many builtins were replicated by LLVM, and the argument could be made that they are not implementation-specific to GCC. However, many builtins have no parallel; when compiled, they will link GPL-licensed code in GCC and will not compile on other compilers. If you make the argument that the API could be replicated by another compiler, couldn't you make that identical claim about any program you link to, so long as you don't distribute that source? I understand that I'm being a legal snake about this, but it strikes me as odd that the GPL isn't more specific. I don't see this as a reasonable ploy for proprietary software creators to bypass the GPL, as they'd have to bundle the GPL software to make it work, removing their plausible deniability. However, isn't it possible that, if builtins don't constitute linking, then open-source proponents who oppose the GPL could simply write a BSD/MIT/Apache/Apple-licensed product that links to a GPL'd program and claim that they intend to write a non-GPL interface identical to the GPL one, preserving their BSD license until it's actually compiled?

    Sorry for the aside; I didn't think many people would follow why I care about this if I'm not facing any legal trouble or implications. Don't worry too much about the hypotheticals there; I'm just extrapolating what either answer to my actual question could imply.

    Read the article

  • What can I do with dynamic typing that I cannot do with static typing?

    - by Justin984
    I've been using Python for a few days now, and I think I understand the difference between dynamic and static typing. What I don't understand is why it's useful. I keep hearing about its "flexibility", but it seems like it just moves a bunch of compile-time checks to runtime, which means more unit tests. That seems like an awfully big tradeoff to make for small advantages like readability and "flexibility". Can someone provide me with a real-world example where dynamic typing allows me to do something I can't do with static typing?
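
    A minimal sketch of one answer, added here for illustration (the CSV data and the Record class are invented): dynamic typing makes it trivial to build a type at runtime whose attributes come from data that isn't known until runtime, something a static type declaration can't express directly.

        import csv
        import io

        # The header is only known at runtime; no static declaration names "city".
        csv_text = "name,age,city\nAda,36,London\nAlan,41,Manchester\n"
        rows = list(csv.reader(io.StringIO(csv_text)))
        header, records = rows[0], rows[1:]

        # type() builds a class at runtime; __init__ zips the runtime header
        # with each row's values to create the instance attributes.
        Record = type("Record", (), {
            "__init__": lambda self, *vals: self.__dict__.update(zip(header, vals))
        })

        people = [Record(*r) for r in records]
        print(people[0].name, people[1].city)  # -> Ada Manchester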

    Read the article

  • I need to develop a parser. Can I use Lex and Yacc for the purpose?

    - by Scrooge
    I need to extract very particular data from log files (of different types and formats). Since I am a recent college graduate, my mind ran to using Lex and Yacc for the purpose. Now I have the following questions:

    1. Will it be legal to do so? (The product I am working on belongs to one of the biggest tech companies in the world.)
    2. Am I being too afraid to write my own parser?
    3. How can I use Lex and Yacc if my product is Windows-based?

    Please tell me if you need any clarification or extra information.
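
    As a hedged aside on question 2 (the "LEVEL timestamp message" log format below is an invented assumption, not the asker's actual format): for many log-extraction jobs, a small hand-rolled matcher is enough, and sidesteps the lex/yacc toolchain question entirely.

        # Illustrative only: extracting fields from a log line with a plain
        # regular expression instead of a lex/yacc grammar.
        import re

        LINE = re.compile(r"^(?P<level>[A-Z]+)\s+(?P<ts>\S+)\s+(?P<msg>.*)$")

        def extract(lines):
            for line in lines:
                m = LINE.match(line)
                if m:  # silently skip lines that don't match this format
                    yield m.group("level"), m.group("ts"), m.group("msg")

        for level, ts, msg in extract(["ERROR 2012-05-01T10:00:00 disk full"]):
            print(level, ts, msg)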

    Read the article

  • Implementation details of a database synchronisation API

    - by Daniel
    I want to achieve database synchronisation between my server database and a client application. The server would run MySQL, and the applications may run different database technologies; their implementation isn't important. I have a MySQL database online, web-accessible via an API I wrote in PHP (just a detail). My client application ships with a copy of the online data. As time passes, my goal is to check for any changes in the online database and make these updates available to the client app via an API call: the client sends a date corresponding to the last time it was updated, and the response would be JSON filled with all new objects, updated objects, and the IDs of deleted objects, which makes it possible to update the local store appropriately. Essentially I want to do this: http://dbconvert.com/synchronization.php

    My question is about the implementation details. Would I need to add a column to my database tables with a "last modified" date? Since the client app could be very out of date if it's been offline for a long time, does that also mean I shouldn't delete data from the online database, but instead have another column called "deleted" set to 1 and the modified date updated appropriately? Would my SQL query simply check for all data with a modified date later than the date passed into the API request by the client? I feel like there's a lot more to it than having a ton of dates everywhere. I also worry that I will need to persist a lot of old data in order to ensure that old versions of the client app always have the opportunity to delete parts of their data when they are able to sync.
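
    A minimal sketch of the last-modified-plus-tombstone idea described above, with SQLite standing in for MySQL; the table and column names are invented for illustration:

        # Sketch only: a "last modified" column plus a tombstone flag, so the
        # sync endpoint can return updates *and* deletions since a given date.
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE items (
            id INTEGER PRIMARY KEY,
            payload TEXT,
            deleted INTEGER NOT NULL DEFAULT 0,  -- tombstone, not a real DELETE
            modified TEXT NOT NULL               -- last-modified timestamp
        )""")
        db.execute("INSERT INTO items VALUES (1, 'hello', 0, '2012-06-02T12:00:00')")
        db.execute("INSERT INTO items VALUES (2, 'gone',  1, '2012-06-03T08:30:00')")

        # Everything touched since the client's last sync; deleted rows come
        # back too, so the client can remove them from its local store.
        since = "2012-06-01T00:00:00"
        for row in db.execute("SELECT id, payload, deleted FROM items "
                              "WHERE modified > ?", (since,)):
            print(row)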

    Read the article

  • Is sticking to one language on a particular project a good practice?

    - by Ans
    I'm developing a pipeline for processing text that will go into production. The question I keep asking myself is: should I stick to one language for the project when I'm looking for a tool to do a particular task (e.g. NLTK, PDFMiner, CLD, CRFsuite, etc.)? Or is it OK to mix and match languages on the project, so that I pick the best tool regardless of what language it's written in (e.g. OpenNLP, ParsCit, poppler, CRF++, etc.) and wrap my code around it? Note: I am not asking whether a developer should stick to just one language for their career.

    Read the article

  • Best Practices and Etiquette for Setting up Email Notifications

    - by George Stocker
    If you were going to set up email alerts for the customers of your website to subscribe to, what rules of etiquette ought to be followed? I can think of a few off the top of my head:

    - Users can opt out
    - Text only (or tasteful remote images)
    - Not sent out more than once a week
    - Clients have fine-grained control over what they receive emails about (they only receive what they are interested in)

    What other points should I consider? From a programming standpoint, what is the best method for setting up and running email notifications? Should I use an ASP.NET service? A Windows service? What are the pitfalls of either? How should I log the emails that are sent? I don't care whether they're received, but I do need to be able to prove that I did or did not send an email.
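
    On the logging question, a hedged sketch (SQLite and every name here are invented stand-ins, not a prescription for the .NET stack being asked about): record every send attempt, with its outcome, at the moment the message is handed to the mail system, so you can later prove what was and wasn't sent.

        # Sketch only: log each send attempt and its result in one place.
        import sqlite3
        import time

        log = sqlite3.connect(":memory:")   # a file path in real use
        log.execute("""CREATE TABLE IF NOT EXISTS sent_mail (
            recipient TEXT, subject TEXT, sent_at TEXT, status TEXT)""")

        def send_and_log(recipient, subject, send):
            """send is a callable that does the actual SMTP work."""
            try:
                send()
                status = "sent"
            except Exception as exc:        # record the failure instead of hiding it
                status = "failed: %s" % exc
            log.execute("INSERT INTO sent_mail VALUES (?, ?, ?, ?)",
                        (recipient, subject,
                         time.strftime("%Y-%m-%dT%H:%M:%S"), status))
            log.commit()

        send_and_log("a@example.com", "Weekly digest", lambda: None)
        print(log.execute("SELECT * FROM sent_mail").fetchall())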

    Read the article

  • Were you a good programmer when you first left university?

    - by dustyprogrammer
    I recently graduated from university. I have since joined a development team where I am by far the least experienced developer, with maybe a couple of work terms under my belt, while the rest of the team is rocking 5-10 years of experience. I was a very good student and a pretty good programmer when it came to bottled assignments and tests, and I have worked on some projects with success. But now I'm working with a much bigger code-base, and the learning curve is much steeper. I was wondering how many other developers started their careers in teams and felt like they sucked. When does this change? How can I speed up the process? My seniors are helping me, but I want to be great and show my value now. I don't want to start a flame war; this is just a question I have been having, and I was hoping to get some advice from other experienced developers, as well as other beginners like me.

    Read the article

  • Troubleshooting VC++ DLL in VB.Net

    - by Jolyon
    I'm trying to make a solution in Visual Studio that consists of a VC++ DLL and a VB.Net application. To figure this out, I created a VC++ class library project with the following code (I removed all the junk the wizard creates):

    mathfuncs.cpp:

        #include "MathFuncs.h"

        namespace MathFuncs
        {
            double MyMathFuncs::Add(double a, double b)
            {
                return a + b;
            }
        }

    mathfuncs.h:

        using namespace System;

        namespace MathFuncs
        {
            public ref class MyMathFuncs
            {
            public:
                static double Add(double a, double b);
            };
        }

    This compiles quite happily. I can then add a VC++ console project to the solution, add a reference to the original project, and call it as follows:

    test.cpp:

        using namespace System;

        int main(array<System::String ^> ^args)
        {
            double a = 7.4;
            int b = 99;
            Console::WriteLine("a + b = {0}", MathFuncs::MyMathFuncs::Add(a, b));
            return 0;
        }

    This works just fine and builds to test.exe and mathfuncs.dll. However, I want to use a VB.Net project to call the DLL. To do this, I add a VB.Net project to the solution, make it the startup project, and add a reference to the original project. Then I attempt to use it as follows:

        MsgBox(MathFuncs.MyMathFuncs.Add(1, 2))

    However, when I run this code, it gives me an error: "Could not load file or assembly 'MathFuncsAssembly, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format." Do I need to expose the method somehow? I'm using Visual Studio 2008 Professional.

    Read the article

  • Combining template method with strategy

    - by Mekswoll
    An assignment in my software engineering class is to design an application which can play different forms of a particular game. The game in question is Mancala; some of its variants are called Wari or Kalah. These games differ in some aspects, but for my question it's only important to know that they can differ in the following:

    - the way in which the result of a move is handled
    - the way in which the end of the game is determined
    - the way in which the winner is determined

    The first thing that came to my mind was to use the strategy pattern, since I have a variation in algorithms (the actual rules of the game). The design could look like this (diagram omitted). I then thought to myself that in the games of Mancala and Wari the way the winner is determined is exactly the same, and the code would be duplicated. I don't think this is by definition a violation of the 'one rule, one place' or DRY principle, seeing as a change in the rules for Mancala wouldn't automatically mean that rule should be changed in Wari as well. Nevertheless, from the feedback I got from my professor, I got the impression that I should find a different design.

    I then came up with this (diagram omitted): each game (Mancala, Wari, Kalah, ...) would just have an attribute of the type of each rule's interface, i.e. WinnerDeterminer, and if there's a Mancala 2.0 version which is the same as Mancala 1.0 except for how the winner is determined, it can just reuse the Mancala 1.0 version. I think the implementation of these rules as a strategy pattern is certainly valid. But the real problem comes when I want to design it further.

    In reading about the template method pattern, I immediately thought it could be applied to this problem. The actions that are done when a user makes a move are always the same, and in the same order, namely:

    1. deposit stones in holes (this is the same for all games, so it would be implemented in the template method itself)
    2. determine the result of the move
    3. determine if the game has finished because of the previous move
    4. if the game has finished, determine who has won

    Those last three steps are all covered by my strategy pattern described above, and I'm having a lot of trouble combining the two. One possible solution I found would be to abandon the strategy pattern and do the following (diagram omitted), but I don't really see the design difference between the strategy pattern and this. Still, I am certain I need to use a template method (although I was just as sure about having to use a strategy pattern). I also can't determine who would be responsible for creating the TurnTemplate object, whereas with the strategy pattern I feel I have families of objects (the three rules) which I could easily create using an abstract factory pattern. I would then have a MancalaRuleFactory, a WariRuleFactory, etc., and they would create the correct instances of the rules and hand me back a RuleSet object.

    Let's say that I use the strategy + abstract factory patterns and I have a RuleSet object which holds the algorithms for the three rules. The only way I feel I can still use the template method pattern with this is to pass the RuleSet object to my TurnTemplate. The 'problem' that then surfaces is that I would never need my concrete implementations of the TurnTemplate; those classes would become obsolete. In my protected methods in the TurnTemplate I could just call ruleSet.determineWinner(). As a consequence, the TurnTemplate class would no longer be abstract but would have to become concrete; is it then still a template method pattern?

    To summarize: am I thinking in the right way, or am I missing something easy? If I'm on the right track, how do I combine the strategy pattern and the template method pattern? This is part of a homework assignment, but I'm not looking to be gifted the answer; I have deliberately been very verbose in my question to show that I have thought about it before coming here to ask.
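
    A sketch of the combination this converges on, in Python purely for illustration (all names here, like RuleSet and Turn, are invented): a concrete turn "template" whose fixed skeleton delegates its three varying steps to an injected RuleSet produced by a per-game factory. Whether a template with no abstract steps still counts as the template method pattern is exactly the open question above.

        # Sketch of template method + strategy: the play() skeleton is fixed,
        # while the three varying steps are supplied by an injected RuleSet.
        class RuleSet:
            def __init__(self, handle_result, is_finished, determine_winner):
                self.handle_result = handle_result
                self.is_finished = is_finished
                self.determine_winner = determine_winner

        class Board:
            """Toy board: just enough state to drive the sketch."""
            def __init__(self, stones):
                self.stones = stones          # stones in each player's store

            def sow(self, move):
                self.stones[move] += 1        # placeholder for real sowing

        class Turn:
            """The 'template': fixed order of steps, rules injected."""
            def __init__(self, rules):
                self.rules = rules

            def play(self, board, move):
                board.sow(move)                            # same for all games
                result = self.rules.handle_result(board, move)
                if self.rules.is_finished(board):
                    return self.rules.determine_winner(board)
                return result

        def mancala_rule_factory():            # stand-in for an abstract factory
            return RuleSet(lambda b, m: "moved",
                           lambda b: sum(b.stones) == 0,
                           lambda b: b.stones.index(max(b.stones)))

        print(Turn(mancala_rule_factory()).play(Board([3, 1]), 0))  # -> moved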

    Read the article

  • Understanding how the TLB (Translation Lookaside Buffer) works and interacts with the page table and addresses

    - by Darxval
    So I am trying to understand the TLB (Translation Lookaside Buffer), but I am having a hard time grasping it in the context of having two streams of addresses, a TLB, and a page table. I don't understand the association of the TLB with the streamed addresses/tags and the page table.

    Address streams:

    a. 4669, 2227, 13916, 34587, 48870, 12608, 49225
    b. 12948, 49419, 46814, 13975, 40004, 12707

    TLB:

    Valid  Tag  Physical Page Number
    1      11   12
    1      7    4
    1      3    6
    0      4    9

    Page table (one row per virtual page number, starting at 0):

    Valid  Physical Page or in Disk
    1      5
    0      Disk
    0      Disk
    1      6
    1      9
    1      11
    0      Disk
    1      4
    0      Disk
    0      Disk
    1      3
    1      12

    How does the TLB work with the page table and addresses? The homework question given is: given the address stream in the table and the initial TLB and page table states shown above, show the final state of the system; also list, for each reference, whether it is a hit in the TLB, a hit in the page table, or a page fault. But I think first I just need to know how the TLB works with these other elements and how to determine things. How do I even start to answer this question?
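
    To make the mechanics concrete, a hedged sketch (the 4 KiB page size is an assumption; the excerpt above doesn't state it): each address splits into a virtual page number and an offset, and the VPN is what gets looked up in the TLB before falling back to the page table.

        # Sketch: splitting an address into (virtual page, offset) and probing
        # the TLB. PAGE_SIZE is an assumed value for illustration only.
        PAGE_SIZE = 4096
        tlb = {11: 12, 7: 4, 3: 6}   # valid entries only: tag -> physical page

        def lookup(addr):
            vpn, offset = divmod(addr, PAGE_SIZE)
            if vpn in tlb:
                return "TLB hit: physical addr %d" % (tlb[vpn] * PAGE_SIZE + offset)
            return "TLB miss: walk the page table for VPN %d" % vpn

        for addr in (4669, 13916, 34587):
            print(addr, "->", lookup(addr))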

    Read the article

  • Test iPhone app on iPad mini?

    - by Devfly
    I have developed an iPhone app; right now I only need a device for testing. I have $300 and two choices: a second-hand iPhone 4 or a brand-new iPad mini. The better choice is obviously the iPad, but is it sufficient for testing iPhone apps? On the iPad, iPhone apps run just fine in 2X mode, but are there any differences in app performance between iPhone and iPad (other than the chipset)? Should I test my app on an actual iPhone, or will the iPad suffice? My app is an RSS reader, not a game, so I think everything will be fine with testing on an iPad mini. If I buy the iPad, I will find a friend's iPhone 4/3GS running iOS 5.1 (because my app's deployment target is 5.1, and the iPad comes with 6.0), but of course I can't test extensively on that iPhone. Thank you!

    Read the article

  • In Rails, what defines unit testing as opposed to other kinds of testing?

    - by junky
    Initially I thought this was simple: unit tests for models, with other kinds of testing, such as integration tests for controllers and browser tests for views. But more recently I've seen a lot of references to unit testing that don't seem to follow this format exactly. Is it possible to have a unit test of a controller? Does that mean that just one method is called? What's the distinction? What does unit testing really mean in my Rails world?

    Read the article

  • Intellectual property for in-house development

    - by Kyle Rogers
    My company is a subcontractor on a major government contract. Over the past 5 years we've been developing in-house applications to help support our company and streamline our work. Apparently, in 2008, our president at the time signed a continuation-of-services contract with the company we subcontract with on this project. In the contract amendment, various things were discussed, such as intellectual property and the creation of new and existing tools. The contract states that all of the subcontractor's tools/scripts/etc. become the intellectual property of the prime contract holder. Basically, all tools that were created in support of the project we work on are no longer exclusively ours, and they have rights to them. My company doesn't really do software development specifically, but because of this contract these tools helped tremendously with our daily tasking. Does my company have any sort of recourse or actions to help keep our tools? My team of developers was completely unaware of any of these negotiations and until recently was kept in the dark about the agreements that were made. Do we as developers have any rights to the software? Since our company is not a software development shop, we created all these tools without any sort of agreements or contracts within the company stating that we give our company full rights to our creations. I was reading an article by Joel Spolsky on this topic and was just wondering if there is any advice out there to help us. Thank you. Joel Spolsky's Article

    Read the article

  • Where to draw the line between development-led security and administration-led security?

    - by haylem
    There are cases where you have the opportunity, as a developer, to enforce stricter security features and protections in a piece of software, even though they could very well be managed at an environmental level (i.e., the operating system would take care of them). Where would you say you draw the line, and what elements do you factor into your decision?

    Concrete examples:

    User management is the OS's responsibility. Not exactly meant as a security feature, but a similar case: Google Chrome used to not allow separate profiles. The invoked reason (though it now supports multiple profiles for the same OS user) used to be that user management was the operating system's responsibility.

    Disabling web-form fields. A recurrent request I see addressed online is to have auto-completion disabled on form fields. Auto-completion didn't exist in old browsers and was a welcome feature at the time it was introduced, for people who needed to fill in forms often. But it also brought in some security concerns, and so some browsers started to implement, on top of the (obviously needed) setting in their own preference/customization panel, an autocomplete attribute for form or input fields. And this has now been introduced into the upcoming HTML5 standard. For browsers that do not respect this attribute, strange hacks* are offered, like generating unique IDs and names for fields to prevent them from being suggested in future forms (which comes with another herd of issues, like polluting your local auto-fill cache and not preventing a password from being stored in it, but instead probably duplicating its occurrences).

    In this particular case, and others, I'd argue that this is a user setting, and that it's the user's desire and the user's responsibility to enable or disable auto-fill (by disabling the feature altogether). And if it is based on an internal policy and security requirement in a corporate environment, then substitute the administrator for the user in the above. I assume it could be counter-argued that the user may want to access non-critical applications (or sites) with this handy feature enabled, and critical applications with it disabled. But then I'd think that's what security zones are for (in some browsers), or a sign that you need a more secure (and dedicated) environment or account to use those applications.

    * I obviously don't deny the ingenuity of the people who were forced to find workarounds, just the necessity of said workarounds.

    Questions: That was a tad long-winded, so I guess my questions are: would you in general consider it to be the application's (hence, the developer's) responsibility? And where do you draw the line, if not in the "general" case?

    Read the article

  • How to name a subclass that adds a minor, detailed thing?

    - by Louis Rhys
    What is the most concise (yet descriptive) way of naming a subclass that only adds a specific minor thing to the parent? I encounter this a lot in WPF, where sometimes I have to add a small piece of functionality to an out-of-the-box control for specific cases. Example: TreeView doesn't change the SelectedItem on right-click, but I have to make one that does in my application. Some possible names:

    - TreeViewThatChangesSelectedItemOnRightClick (way too wordy, and maybe difficult to read because so many words are concatenated together)
    - TreeView_SelectedItemChangesOnRightClick (slightly more readable, but still too wordy, and the underscore breaks the normal convention for class names)
    - TreeViewThatChangesSIOnRC (non-obvious acronym)
    - ExtendedTreeView (more concise, but doesn't describe what it is doing; besides, I already found a class with this name in the library, which I don't want to use or modify in my application)
    - LouisTreeView, MyTreeView, etc. (doesn't describe what it is doing)

    It seems that I can't find a name which sounds right. What do you do in situations like this?

    Read the article

  • Using Visual Studio as a Task-Focused IDE

    - by Jay Stevens
    Are there patterns, libraries, or any official Microsoft SDK for using Visual Studio as a specifically task-focused UI? For example, both Revolution R (the IDE for the R language) and SQL Server 2012 (and I think SQL Server 2008, and possibly 2005) use Visual Studio as the underlying IDE framework. Is there an officially supported SDK and/or examples/samples for doing this type of thing? I am building a language parser for an existing language, whose only available IDE is insanely expensive, using Irony (and I will eventually generate a language service as well). Any direct or indirect suggestions/answers are appreciated.

    Read the article

  • When is type testing OK?

    - by svidgen
    Assuming a language with some inherent type safety (e.g., not JavaScript): given a method that accepts a SuperType, we know that in most cases wherein we might be tempted to perform type testing to pick an action:

        public void DoSomethingTo(SuperType o) {
            if (o is SubTypeA) {
                ((SubTypeA)o).doSomethingA();
            } else {
                o.doSomethingB();
            }
        }

    we should usually, if not always, create a single overridable method on the SuperType and do this instead:

        public void DoSomethingTo(SuperType o) {
            o.doSomething();
        }

    ... wherein each subtype is given its own doSomething() implementation. The rest of our application can then be appropriately ignorant of whether any given SuperType is really a SubTypeA or a SubTypeB. Wonderful. But we're still given is-a-like operations in most, if not all, type-safe languages, and that seems to suggest a potential need for explicit type testing. So, in what situations, if any, should we or must we perform explicit type testing? Forgive my absent-mindedness or lack of creativity. I know I've done it before, but it was honestly so long ago I can't remember if what I did was good! And in recent memory, I don't think I've encountered a need to test types outside my cowboy JavaScript.
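
    One commonly cited legitimate case, sketched here in Python for illustration (the Point class is invented): equality methods, which must check what they were handed before comparing fields.

        # Illustrative sketch of one widely accepted use of explicit type
        # testing: equality, where the method checks the other operand's type.
        class Point:
            def __init__(self, x, y):
                self.x, self.y = x, y

            def __eq__(self, other):
                if not isinstance(other, Point):   # explicit type test
                    return NotImplemented          # let the other operand try
                return (self.x, self.y) == (other.x, other.y)

        print(Point(1, 2) == Point(1, 2))    # True
        print(Point(1, 2) == "not a point")  # False (falls back sensibly)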

    Read the article

  • How to follow Python polymorphism standards with math functions

    - by krishnab
    So I am reading up on Python in Mark Lutz's wonderful Learning Python book. Mark makes a big deal about how part of the Python development philosophy is polymorphism, and that functions and code should rely on polymorphism and not do much type checking. However, I do a lot of math-type programming, and so the idea of polymorphism does not really seem to apply: I don't want to try to run a regression on a string or something. So I was wondering if there is something I am missing here. What are the applications of polymorphism when I am writing functions for math, or is type checking philosophically okay in this case?
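
    A small illustration of what polymorphism buys even in numeric code (the function is invented for this sketch): a function that only relies on the operations it actually performs works unchanged across numeric types.

        # Numeric duck typing: the same function works for ints, Fractions,
        # and Decimals, because it never checks types, only uses + and /.
        from fractions import Fraction
        from decimal import Decimal

        def mean(xs):
            return sum(xs) / len(xs)

        print(mean([1, 2, 3]))                           # float result
        print(mean([Fraction(1, 3), Fraction(2, 3)]))    # exact rational mean
        print(mean([Decimal("1.10"), Decimal("2.30")]))  # decimal arithmetic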

    Read the article

  • Getting from a user story to code while using TDD (Scrum)

    - by Ittai
    I'm getting into Scrum and TDD, and I think I have some confusion which I'd like your feedback about. Let's assume I have a user story in my backlog; in order for me to start developing it as part of TDD, I need to have requirements, right so far? Is it true to say that the product manager and QA should be responsible for taking the user story and breaking it down into acceptance tests? I think the above is true, since the acceptance tests need to be formal, so they can be used as tests, but also human-readable, so that the product manager can approve that they are the requirements, right? Is it also true that I later take these acceptance tests and use them as my requirements, i.e. they are a set of use cases which I implement (through TDD)? I hope I'm not making too much of a mess, but that's the current flow I have in mind right now.

    Update: I think my initial intentions were unclear, so I'll try to rephrase. I want to know more details about the Scrum flow of turning a user story into code while using TDD. The starting point is obvious: a user (or the user's representative, such as the product manager) surfaces a need, which becomes a short 1-2 line description in the known format and is added to the product backlog. When there is a sprint planning meeting, user stories are taken from the backlog and assigned to developers. In order for a developer to write code, they need requirements (especially in TDD, since the requirements are what the tests are derived from). When, by whom, and in which format are the requirements compiled? What I had in mind was that the product manager and QA define the requirements via acceptance tests (I'm thinking of automated ones using FitNesse or the like, but that's not the core issue), which serve two purposes at the same time: they define "done" properly, and they give the developer something to derive tests from. I wasn't sure when these are written (if before the sprint in which the story is picked, that might be wasted effort, since additional information will arrive or the story might not be picked; if during the iteration, the developer might get stuck waiting for them...)
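
    A toy illustration of the idea that an acceptance test can be both formal and human-readable (the story, class, and test names are all invented): the test names read as the requirement, and the assertions make it executable.

        # Sketch: an acceptance test acting as an executable requirement for
        # an invented story, "a registered user can log in with a valid password".
        import unittest

        class FakeAuth:
            """Stand-in for the real system under test."""
            def __init__(self):
                self.users = {"ada": "s3cret"}

            def login(self, user, password):
                return self.users.get(user) == password

        class LoginAcceptanceTest(unittest.TestCase):
            def test_registered_user_with_valid_password_can_log_in(self):
                self.assertTrue(FakeAuth().login("ada", "s3cret"))

            def test_wrong_password_is_rejected(self):
                self.assertFalse(FakeAuth().login("ada", "wrong"))

        if __name__ == "__main__":
            unittest.main()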

    Read the article

  • Logical progressions through the job market

    - by Philluminati
    I'm 5 years out of an unrecognised university where I did Software Engineering. My first job was VB.NET; another was Python, Linux, and web development. I feel cast as a web developer. I'd love a role doing C, but no one is interested in juniors unless the applicant already has 3 years of C development experience. I've done some C and a drop of open-source coding, but I'll never have the confidence to convince someone I know absolutely what I'm doing. Do I just spend more and more time letting life pass me by as I sit in my room on a Friday night writing a C program "for the sake of learning more C"? Basically, I'm just not sure I want to continue my career if it's going to involve nothing but high-level, machine-abstracted business logic; as interested as I am in low-level development, and as much as I enjoy reading books by Tanenbaum, I struggle to see how I can make the jump, and I just feel life would be easier if I got a job in a cafe in Amsterdam rolling spliffs for customers. My ideal job, being a paid member of the Fedora development team, seems so far away, with no one to pay me to learn the skills to get there, and the only way would be to literally spend weeks and weeks of my life contributing code without recognition, for free, and without any guarantees at the end. Not that I've contributed anything at all so far. Are there any career paths that are logically set out, so that jumping between roles is correctly incremental, and where hard work and learning do eventually lead to the kind of places I might want to go? [And also getting paid at the same time?]

    Read the article

  • Should I choose Doctrine 2 or Propel 1.5/1.6, and why?

    - by Billy ONeal
    I'd like to hear from those who have used Doctrine 2 (or later) and Propel 1.5 (or later). Most comparisons between these two object-relational mappers are based on old versions: Doctrine 1 versus Propel 1.3/1.4, and both ORMs went through significant redesigns in their recent revisions. For example, most of the criticism of Propel seems to center around the generated "Peer" classes, which are deprecated in 1.5 in any case. Here's what I've accumulated so far (and I've tried to make this list as balanced as possible):

    Propel pros:

    - Extremely IDE-friendly, because actual code is generated instead of relying on PHP magic methods, which means IDE features like code completion are actually helpful.
    - Fast (in terms of database usage: no runtime introspection is done on the database).
    - Clean migration between schema versions (at least in the 1.6 beta).
    - Can generate PHP 5.3 models (i.e. namespaces).
    - Easy to chain a lot of things into a single database query with things like the useXxx methods (see the "code completion" video above).

    Propel cons:

    - Requires an extra build step, namely building the model classes.
    - Generated code needs to be rebuilt whenever the Propel version is changed, a setting is changed, or the schema changes. This might be unintuitive to some, and custom methods applied to the model are lost (I think?).
    - Some useful features (i.e. versionable behavior, schema migrations) are in beta status.

    Doctrine pros:

    - More popular.
    - The Doctrine Query Language can express potentially more complicated relationships between data than are easily possible with Propel's ActiveRecord strategy.
    - Easier to add reusable behaviors, compared with Propel.
    - DocBlock-based commenting for building the schema is embedded in the actual PHP instead of a separate XML file.
    - Uses PHP 5.3 namespaces everywhere.

    Doctrine cons:

    - Requires learning an entirely new query language (Doctrine Query Language).
    - Implemented in terms of "magic methods" in several places, making IDE autocomplete worthless.
    - Requires database introspection, and is thus slightly slower than Propel by default; caching can remove this, but the caching adds considerable complexity.
    - Fewer behaviors are included in the core codebase; several features Propel provides out of the box (such as nested set) are available only through extensions.
    - Freakin' HUGE :)

    I have gleaned this only through reading the documentation available for both tools; I've not actually built anything yet. I'd like to hear from those who have used both tools, to share their experience on the pros/cons of each library and what their recommendation is at this point :)

    Read the article

  • Tester that doesn't test

    - by George
    What should I do about a tester who does not test? We have a complicated dry-run scenario that takes a lot of time to execute. Mostly this tester executes his tests in a very slow way: checking emails, browsing the internet, etc. He reports just a few bugs, but whenever the official dry run begins (these are logged with TestLink), the tester starts to open new bugs that were not discovered before. Is he not doing his job correctly? Or am I just misunderstanding how testing works? I'm not his supervisor, but he is testing code that I wrote.

    Read the article
