Search Results

  • Are there any "best practices" on cross-device development?

    - by vstrien
    Developing for smartphones the way the industry currently does is relatively new. Of course, there has been enterprise-level mobile development for several decades. The platforms have changed, however. Think of:
    - the move from stylus input to touch input (different screen resolutions, different control layouts, etc.)
    - new ways of handling multi-tasking on mobile platforms (e.g. WP7's "tombstoning")
    The way these platforms work isn't totally new (the iPhone has been around for quite a while now, for example), but at the moment, developing a functionally equal application for both desktop and smartphone comes down to developing two applications from the ground up. Especially with the birth of Windows Phone, with the .NET platform on board and Silverlight as its UI language, it's becoming appealing to promote the re-use of (parts of) the UI. Still, it's fairly obvious that the needs of an application on a smartphone (or tablet) are very different from the needs of a desktop application. An (almost) one-to-one conversion will therefore be impossible. My question: are there "best practices", pitfalls, etc. documented about developing "cross-device" applications (for example, developing an app for both the desktop and the smartphone/tablet)? I've been looking at weblogs, scientific papers and more for a week or so, but what I've found so far is only about "migratory interfaces".

  • Can it be a good idea to lease a house rather than a standard office-space for a software development shop? [closed]

    - by hamlin11
    Our lease is up on our US-based office space in July, so it's back on my radar to evaluate our office-space situation. Two of our partners rather like the idea of leasing a house instead of standard office space. We have 4 partners and one employee. I'm against the idea at this moment in time.
    Pros, as I see them:
    - Easier to get a good location (minimize commutes)
    - All partners/employees have dogs; easier to work longer hours without dog duties pulling people back home
    - More comfortable bathroom situation
    - Residential Internet rate
    - Control of the thermostat
    - Clients don't come to our office, so this would not change our image
    - The additional comfort level should facilitate a significantly higher percentage of time "in the zone" for programmers and artists
    Cons, as I see them:
    - Additional bills to pay (house-cleaning, yard, utilities, gas, electric)
    - Additional time overhead in dealing with those bills
    - Additional overhead required to deal with issues that maintenance would have handled in a standard office space
    - Residential neighbors to contend with
    The equation starts to look a little nasty when factoring in the potential time overhead, especially on issues that a maintenance crew would deal with at a standard office complex. Can this be a good thing for a software development shop?

  • Is there a massive other side to software development which I've somehow missed, revolving entirely around Microsoft?

    - by Aerovistae
    I'm still a beginning programmer; I've been at it for 2 years. I've learned to work with a few languages, a few web development technologies, and a handful of libraries, frameworks, and IDEs. But over the past two years (and long before I even started, really), I keep hearing references to these...things. A million of them. Things such as C#, ADO, SOAP, ASP, ASP.NET, the .NET Framework, the CLR, F#, etc., etc. And I've read their Wikipedia articles, in depth, multiple times, and they all mention a million other things on that list, but I just can't seem to grasp what it all is. The only thing I've taken away with any certainty is that Microsoft is behind all of it. It sounds almost like a conspiracy. Are all these technologies just for developing on the Windows platform? What is .NET? Do some software developers dedicate their entire career just to that side of things? Why would I want to get into it, and what advantage does...whatever it is...have over all the other technologies there are? I hope this makes sense. It's a broad question, but inside it there's a very specific question asking about something I don't know the name of. Hopefully you can grasp my confusion.

  • How to convince an employer to move to VB.Net for new development?

    - by Dabblernl
    Some history: For the last six months I have been employed at a small firm with just three programmers, my employer among them. The firm maintains two programs written in VB6, and I am assigned as the lead programmer to one of these. In the last six months I did some maintenance and bug hunting, but created some new functionality too. I had an interview last December, which was favorable, and my contract was prolonged. I am very happy with this course of events, as I only obtained a .NET certification a year ago and have no other qualifications (in the field of coding, that is). It is my strong opinion that, while migration of the existing program to .NET is advisable, it is paramount that from now on new functionality should be written in VB.NET class libraries. After some study I found out how simple it is to integrate .NET class libraries into the VB6 development environment, and how easy it is to add their functionality to existing installations by using application manifests. So, I have decided that now is the moment to roll up my sleeves and try to convince my employer that he should let me develop new code in VB.NET, using VB6 for maintenance only. We get along quite well, but I think I am going to need all the ammunition I can get to convince him. Any arguments, preferably backed-up ones, are very welcome - even arguments to dissuade me ;-)
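
    For illustration, here is a minimal sketch of the interop piece described above: a .NET class exposed to VB6 via COM interop. It's shown in C# for brevity (the VB.NET version is a direct translation), and all names and GUIDs are made up for the example:

        using System;
        using System.Runtime.InteropServices;

        namespace NewFeatures
        {
            // COM-visible interface so VB6 gets early binding and IntelliSense.
            [ComVisible(true)]
            [Guid("B2F1DE6A-3A2D-4D3C-9C1A-0D6E2F8A1C11")]
            public interface IInvoiceCalculator
            {
                decimal CalculateTotal(decimal subtotal, decimal taxRate);
            }

            // The class VB6 instantiates, e.g. CreateObject("NewFeatures.InvoiceCalculator").
            [ComVisible(true)]
            [Guid("D4C9A7E0-5B44-4F0E-8E2B-7A1F3C5D9B22")]
            [ClassInterface(ClassInterfaceType.None)]
            public class InvoiceCalculator : IInvoiceCalculator
            {
                public decimal CalculateTotal(decimal subtotal, decimal taxRate)
                {
                    return subtotal * (1 + taxRate);
                }
            }
        }

    During development such a class is typically registered with regasm /codebase; for deployment it can be resolved registration-free through the application-manifest approach mentioned above.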

  • Is C# development effectively inseparable from the IDE you use?

    - by Ghopper21
    I'm a Python programmer learning C# who is trying to stop worrying and just love C# for what it is, rather than constantly comparing it back to Python. I'm really getting caught up on one point, though: the lack of explicitness about where things are defined, as detailed in this Stack Overflow question. In short: in C#, using foo doesn't tell you what names from foo are being made available, which is analogous to from foo import * in Python - a form that is discouraged within Python coding culture for being implicit, rather than the more explicit approach of from foo import bar. I was rather struck by the Stack Overflow answers to this point from C# programmers, which were that in practice this lack of explicitness doesn't really matter, because in your IDE (presumably Visual Studio) you can just hover over a name and be told by the system where the name is coming from. E.g.: "Now, in theory I realise this means when you're looking with a text editor, you can't tell where the types come from in C#... but in practice, I don't find that to be a problem. How often are you actually looking at code and can't use Visual Studio?" This was revelatory to me. Many Python programmers prefer a text-editor approach to coding, using something like Sublime Text 2 or vim, where it's all about the code, plus command-line tools and direct access and manipulation of folders and files. The idea of being dependent on an IDE to understand code at such a basic level seems anathema. It seems C# culture is radically different on this point. And I wonder if I just need to accept and embrace that as part of my learning of C#. Which leads me to my question here: is C# development effectively inseparable from the IDE you use?
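
    Worth noting as a side observation: C# does have a more explicit import form when you want one. Using-alias directives tie each name to its source, loosely analogous to Python's from foo import bar. A small illustrative sketch:

        // Namespace import: every public type in System.Text is now in scope,
        // with no indication at the usage site of where a name came from.
        using System.Text;

        // Using-alias directive: the closest C# analogue of Python's
        // "from foo import bar" -- the name is tied to its source explicitly.
        using SB = System.Text.StringBuilder;

        public static class Demo
        {
            public static string Greet(string name)
            {
                var sb = new SB();
                return sb.Append("Hello, ").Append(name).ToString();
            }
        }

    In practice, few C# codebases use aliases except to resolve collisions, which is exactly the cultural difference the question is poking at.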

  • What are the most in-demand web-development languages for startups today?

    - by Liston Catch
    What technologies are in demand nowadays for web development at web startups? For the frontend it's all clear: HTML5, JS, AJAX, jQuery. But what about the backend? What languages (and frameworks) should I consider using? I am not asking "which language is best"; I just need a list of modern languages and frameworks (and not Pascal, Delphi or Basic) which are in demand and well-paid. UPD: I totally reject the "it's all about logic, not about language; language is just a tool" concept. While THEORETICALLY it's true, in reality the time you need to study the required frameworks is counted in months, so language DOES matter. That's why I made this topic. UPD 2: Mason Wheeler, so you seriously advise me to go for Delphi? You think it's DEMANDED nowadays? Or are you just telling me an exception that only confirms the rule? It's like saying "one guy won $100,000,000 in the lottery" just so you know that the lottery is not a bad way to earn money.

  • Case studies for successful service (project) based software development businesses without constant overtime from their employees [closed]

    - by Ryan Taylor
    I work for an IT company that is primarily services (project) based rather than product based. All software engineers are salaried. The company has set new expectations that everyone should work 48 hours per week instead of 40. Note, this isn't occasional overtime due to crunches; this is the new 40. The reasoning is that this enables the company to provide benefits to its employees such as monetary incentives and training, because the company is more profitable: more hours worked = more billable hours = larger profit. I understand the need for profitability and the occasional crunch time, and I have put in the extra hours when it was needed and beneficial to the project. However, I am also very sensitive to work-life balance and have raised my concerns about the new expectation. My employer is open to other methods of increasing profitability, so I hold out hope that we can turn things around before it becomes a horrible place to work. How does a services-based company become more profitable without increasing the number of hours expected from its salaried employees? Are there any case studies showing the pros and cons of consistent overtime? Are there any case studies for a successful services-based business model (for software development companies) that does not require consistent overtime from its employees?

  • How can one find software development work that directly involves the final end user?

    - by RJa
    I've worked in software development for 15 years and, while there have been significant personal achievements and a lot of experience, I've always felt detached from the man/woman on the street, the everyday person, and how the work affects their lives, in a number of ways:
    - The technologies: embedded software, hidden away, stuff not seen by the everyday person; or process technology supporting manufactured products.
    - The size of the systems, meaning many jobs, divided up; the work is abstract, and no one person can see the whole picture.
    - The organisations: large, with departments dealing with different areas - the software, the hardware, the marketing, the sales, the customer support.
    - The locations and hours: out-of-town business parks away from the rest of society; fixed locations; inflexible 9-5 every day.
    This seems typical of the companies I've worked for and of what I see elsewhere. Granted, there are positives, such as the technology itself and usually being among high-calibre co-workers, but the above points frustrate me about the industry because they detach the work from its meaning. How can one:
    - change these things in an existing job, or compensate for them?
    - find other work that avoids these problems and connects with the final end user?
    Job descriptions tend to focus on the job content and technical requirements rather than on how the job aims to fulfil end-user needs - on whether it is meaningful.

  • How to handle the fear of future licensing issues of third-party products in software development?

    - by Ian Pugsley
    The company I work for recently purchased some third-party libraries from a very well-known, established vendor. There is some fear among management that our license to use the software could somehow be revoked. The example I'm hearing is of something like a patent issue; i.e., the company we purchased the libraries from could be sued and legally lose the ability to distribute and provide the libraries. The big fear is that we get some sort of notice that we have to cease usage of the libraries entirely, with some small time period in which to do so. As a result of this fear, our ability to use these libraries (which the company has spent money on...) is being limited, at the cost of many hours' worth of development time. Specifically, we're having to develop many of the features that the library already incorporates. Should we be limiting ourselves in this way? Is it possible for the perpetual license granted to us by the third party to be revoked in the case of something like a patent issue, and are there any examples of something like this happening? Most importantly, if this is something to be legitimately concerned about, how do people ever go about taking advantage of third-party software while preparing for the possibility of losing that capability entirely? P.S. - I understand that this ventures into legal territory, and that none of the answers provided can be construed as legal advice in any fashion.

  • How should a non-IT manager secure the long-term maintenance and development of essential legacy software?

    - by user105977
    I've been hunting for a place to ask this question for quite a while; maybe this is the place, although I'm afraid it's not the kind of "question with an answer" this site would prefer. We are a small, very specialized benefits-administration firm with an extremely useful, robust collection of software, some written in COBOL but most in BASIC. Two full-time consultants have ably maintained and improved this system over more than 30 years. Needless to say, they will soon retire. (One of them has been desperate to retire for several years but is loyal to a fault and so hangs on, despite her husband's insistence that golf should take priority.) We started down the path of converting to a system developed by one of only three firms in the country that offer the type of software we use. We now feel that although this firm is theoretically capable of completing the conversion process, they don't have the resources to do so in a timely manner, and we have come to believe that they will be unable to offer the kind of service we need to run our business. (There's nothing like being able to set one's own priorities and having the authority to allocate one's resources as one sees fit.) Hardware is not a problem - we are able to emulate very effectively on modern servers. If COBOL and BASIC were modern languages, we'd be willing to take the risk that we could find replacements for our current consultants going forward. It seems like there ought to be a business model for an IT support firm that concentrates on legacy platforms like this and provides the programming and software development talent to support a system like ours - removing from our backs the risk of finding the right programming talent and the job of convincing younger programmers that they can have a productive, rewarding career, in part, in an old, non-sexy language like BASIC. Where do I find such firms?

  • has_one and has_many associations: which side of the association is saved first

    - by SeeBees
    I have three simplified models:

        class Team < ActiveRecord::Base
          has_many :players
          has_one :coach
        end

        class Player < ActiveRecord::Base
          belongs_to :team
          validates_presence_of :team_id
        end

        class Coach < ActiveRecord::Base
          belongs_to :team
          validates_presence_of :team_id
        end

    I use the following code to test these models:

        team = Team.new
        team.coach = Coach.new
        team.save!

    team.save! returns true. But in another test:

        team = Team.new
        team.players << Player.new
        team.save!

    team.save! gives the following error:

        ActiveRecord::RecordInvalid: Validation failed: Players is invalid
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/validations.rb:1090:in `save_without_dirty!'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/dirty.rb:87:in `save_without_transactions!'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:182:in `transaction'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
            from /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/transactions.rb:200:in `save!'
            from (irb):14

    I figured out that when team.save! is called, it first calls player.save!. player needs to validate the presence of the id of the associated team, but at the time player.save! is called, team hasn't been saved yet, and therefore team_id doesn't yet exist for player. This fails the player's validation, so the error occurs. On the other hand, team must be saved before coach.save!, otherwise the first example would produce the same error as the second. So I've concluded that when a has_many bs, a.save! will save the bs prior to a, and when a has_one b, a.save! will save a prior to b. If I am right, why is this the case? It doesn't seem logical to me. Why do has_one and has_many associations save in different orders? Any ideas? Thanks.

  • Rails: Problem with routes and a special action

    - by Newbie
    Hello! Sorry for this question, but I can't find my error! In my project I have a model called "team". A user can create a "team" or a "contest". The difference between the two is that a contest requires more data than a normal team, so I created the extra columns in my team table. I also created a new view called create_contest.html.erb:

        <h1>New team content</h1>
        <% form_for @team, :url => { :action => 'create_contest' } do |f| %>
          <%= f.error_messages %>
          <p>
            <%= f.label :name %><br />
            <%= f.text_field :name %>
          </p>
          <p>
            <%= f.label :description %><br />
            <%= f.text_area :description %>
          </p>
          <p>
            <%= f.label :url %><br />
            <%= f.text_field :url %>
          </p>
          <p>
            <%= f.label :contact_name %><br />
            <%= f.text_field :contact_name %>
          </p>
          <p>
            <%= f.submit 'Create' %>
          </p>
        <% end %>

    In my teams_controller I created the following actions:

        def new_contest
        end

        def create_contest
          if @can_create
            @team = Team.new(params[:team])
            @team.user_id = current_user.id
            respond_to do |format|
              if @team.save
                format.html { redirect_to(@team, :notice => 'Contest was successfully created.') }
                format.xml  { render :xml => @team, :status => :created, :location => @team }
              else
                format.html { render :action => "new" }
                format.xml  { render :xml => @team.errors, :status => :unprocessable_entity }
              end
            end
          else
            redirect_back_or_default('/')
          end
        end

    Now, on my teams/new.html.erb page I want a link to new_contest.html.erb, so I did:

        <%= link_to 'click here for new contest!', new_contest_team_path %>

    When I go to the /teams/new page, I get the following error:

        undefined local variable or method `new_contest_team_path' for #<ActionView::Base:0x16fc4f7>

    So in my routes.rb I changed map.resources :teams to:

        map.resources :teams, :member => { :new_contest => :get }

    Now I get the following error:

        new_contest_team_url failed to generate from {:controller=>"teams", :action=>"new_contest"} - you may have ambiguous routes, or you may need to supply additional parameters for this route. content_url has the following required parameters: ["teams", :id, "new_contest"] - are they all satisfied?

    I don't think adding :member => {...} is the right way of doing this. So, can you tell me what to do? I want to have a URL like /teams/new-contest or something. My next question (after fixing the first problem): how do I validate the presence of all fields for new_contest.html.erb? In my normal new.html.erb, a user does not need all the data, but in new_contest.html.erb he does. Is there a way to apply validates_presence_of only for one action (in this case new_contest)? UPDATE: I removed the :member part from my routes.rb and wrote:

        map.new_contest '/teams/contest/new', :controller => 'teams', :action => 'new_contest'

    Now, clicking on my link redirects me to /teams/contest/new - like I wanted - but I get another error:

        Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id

    I think this error is caused by @team in <% form_for @team, :url => { :action => 'create_contest' } do |f| %>. What can I do to solve this error?

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding. I have been coding (seriously) for over 15 years now, and I have always had some testing for my code. However, over the last few months I have been learning test-driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, at this point the level of frustration mixed with the disappointing lack of payoff is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files - which is not much more useful, and maybe less useful, than the actual project code files. In reading the available literature, I notice that TDD seems to be a massive time sink up front, but pays off in the end. I'm just wondering: are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples. I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit-testing suite (I think that comment was made in 2008). So, I'm just wondering: after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests, and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical, and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending.

    Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it done in an automated way.

    As one example, let's say I have an interface that represents a user who has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface.

    The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me. However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they could do X, or alternately they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I may not be made aware of these changes so that I can beat the developer over the head for it. An attacker could then find these little nuggets in an unobfuscated library of the compiled product and exploit them to provide fake users and/or falsely elevated administrative permissions, bypassing the entire security system.

    This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits. Now I have an automated test under my complete control that will tell me (and everyone else) if the number of implementations increases without my involvement, or if an implementation that I did know about has anything new added or has its modifiers, or those of its members, changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary.

    Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
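
    A rough sketch of the reflective check described above, in C#; the interface name, approved type name, and overall shape are hypothetical stand-ins, not the poster's actual code:

        using System;
        using System.Linq;
        using System.Reflection;

        // Hypothetical stand-in for the locked-down security interface.
        public interface IAuthenticatedUser { }

        public static class SecuritySurfaceCheck
        {
            // Returns the full names of any types implementing the sensitive
            // interface that are not on the approved list; the CI test fails
            // when this comes back non-empty.
            public static string[] FindUnexpectedImplementations(params Assembly[] assemblies)
            {
                var approved = new[] { "SecurityLib.Internal.AuthenticatedUser" };

                return assemblies
                    .SelectMany(a => a.GetTypes())
                    .Where(t => !t.IsInterface && typeof(IAuthenticatedUser).IsAssignableFrom(t))
                    .Select(t => t.FullName)
                    .Where(name => !approved.Contains(name))
                    .ToArray();
            }
        }

    The same loop can be extended to assert on visibility modifiers and member signatures, which covers the "anything new added or modifiers changed" part of the check.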

  • Development Approach: User Interface In or Domain Model Out?

    - by Berin Loritsch
    While I've never delivered anything using Smalltalk, my brief time playing with it has definitely left its mark. The only way to describe the experience is MVC the way it was meant to be. Essentially, all the heavy lifting for your application is done in the business objects (or domain model, if you are so inclined). The standard controls are bound to the business objects in some way. For example, a text box is mapped to an object's field (the field itself is an object, so it's easy to do), and a button is mapped to a method. This is all done with a very simple and natural API; we don't have to think about binding objects, etc. It just works.

    Yet in many newer languages and APIs you are forced to think from the outside in. First with C++ and MFC, and now with C# and WPF, Microsoft has gotten its developer world hooked on GUI builders where you build your application by implementing event handlers. Java Swing development isn't so different, only you are writing the code to instantiate the controls on the form yourself. For some projects, there may never even be a domain model - just event handlers.

    I've been in and around this model for most of my career. Each way forces you to think differently. With the Smalltalk approach, your domain is smart while your GUI is dumb. With the default Visual Studio approach, your GUI is smart while your domain model (if it exists) is rather anemic. Many developers I work with see value in the Smalltalk approach and try to shoehorn it into the Visual Studio environment. WPF has some dynamic binding features that make this possible, but there are limitations; inevitably some code that belongs in the domain model ends up in the GUI classes.

    So, which way do you design/develop your code, and why?
    - GUI first: user interaction is paramount.
    - Domain first: I need to make sure the system is correct before we put a UI on it.
    There are pros and cons to either approach. Domain-first fits in with crystal cathedrals and pie in the sky; GUI-first fits in with quick and dirty (sometimes really dirty). And for an added bonus: how do you make sure the code is maintainable?
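
    As a small illustration of the "smart domain, dumb GUI" style that the WPF binding features allow: a hedged sketch of a domain object that raises its own change notifications, so a view can bind to it declaratively (e.g. Text="{Binding FirstName}") with no logic in the code-behind. The class is invented for the example:

        using System.ComponentModel;

        // The domain object stays "smart": it announces its own state changes,
        // and the view remains a dumb observer.
        public class Customer : INotifyPropertyChanged
        {
            private string _firstName = "";

            public event PropertyChangedEventHandler PropertyChanged;

            public string FirstName
            {
                get { return _firstName; }
                set
                {
                    if (_firstName == value) return;
                    _firstName = value;
                    var handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("FirstName"));
                }
            }
        }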

  • How do I SSH tunnel using PuTTY or SecureCRT through gateway/proxy to development server?

    - by DAE51D
    We have some Unix boxes set up so that to get to the development box via ssh, you have to ssh into a 'user@jumpoff' box first. No direct ssh connection to 'dev' is allowed from anywhere but 'jumpoff', only key exchange is allowed on both servers, and you always log in to the development box as 'build@dev'. It's painful to always do that hopping. I know this can be done with SOCKS or a tunnel or something... I have set up a FreeBSD VM and I can get things to work beautifully using the Unix ssh tools. Basically all I do is make sure my VM's ~/.ssh/id_rsa.pub key is on both jumpoff and dev, and use this ~/.ssh/config file:

        # Development Server
        Host ext-dev
            # this must be a resolvable name for "dev" from Jumpoff
            Hostname 1.2.3.4
            User build
            IdentityFile ~/.ssh/id_rsa

        # The Jumpoff Server
        Host ext
            Hostname 1.1.1.1
            User daevid
            Port 22
            IdentityFile ~/.ssh/id_rsa

        # This must come below all of the above
        Host ext-*
            ProxyCommand ssh ext nc $(echo '%h' | cut -d- -f2-) 22

    Then I simply type "ssh ext-dev" and I'm in like Flynn. The problem is I can't get this same thing to work using either PuTTY or SecureCRT - and to be honest I've not found any tutorials that really walk me through it. I see many on setting up some kind of proxy tunnel for Firefox, but that doesn't seem to be the same concept. I've been trying various things most of the day and nothing has worked (obviously), and I'm at the end of my ssh knowledge and Google searching. I found this link, which seemed to be perfect, but it doesn't work for me: the "Master" connects fine, but the "client" portion doesn't connect, and it tells me the remote system refused the connection. http://www.vandyke.com/support/tips/socksproxy.html I've got the VM, PuTTY and SecureCRT all using the same public/private key pairs to make things consistent and easier to debug. Does anyone have a straight-up example of how to do this in Windows?
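
    One possible PuTTY equivalent of that ProxyCommand trick, offered as an untested sketch since the exact setup may differ: PuTTY ships with plink.exe, and a session can use Connection > Proxy with proxy type "Local" and a local proxy command such as:

        plink.exe -agent daevid@1.1.1.1 -nc %host:%port

    Here %host and %port are PuTTY's placeholders for the session's target, so the session itself is pointed at 1.2.3.4 port 22 with username build, Pageant holds the keys, and plink's -nc (netcat) mode relays the connection through the jumpoff box much like the "ssh ext nc dev 22" ProxyCommand does above.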

  • How to choose the right PHP Framework for web development?

    - by liliwang
    Hi! I'm a PHP pro, but I haven't used any PHP framework till now, so I have no clue how to choose one. Do you have some tips to help me choose the right PHP framework? I want a stable and secure PHP framework for projects with about 400 hours of development time. It should be possible to use the framework on shared-hosting webservers. I don't need AJAX support (I'm using Ext JS). It would be nice if the framework supported rapid application development and object-relational mapping. Some of the standard functions (authentication, form validation) would also be nice, and caching would be useful but isn't required. My needs for a PHP framework:
    - Support for shared-hosting webservers
    - Suited to projects between 200 and 400 hours of work
    - Supports the rapid application development model
    - Supports object-relational mapping
    - If possible: caching
    - Ready-made modules (e.g. authentication, form validation)
    - Easy to learn
    Which PHP framework is the right one for me?

  • Do You Develop Your PL/SQL Directly in the Database?

    - by thatjeffsmith
    I know this sounds like a REALLY weird question for many of you. Let me make one thing clear right away, though: I am NOT talking about creating and replacing PL/SQL objects directly in a production environment. Do we really need to talk about developers in production again? No, what I am talking about is a developer doing their work from start to finish in a development database. These are generally available to a development team for building the next and greatest version of your databases and database applications. And of course you are using a third-party source control system, right?

    Last week I was in Tampa, FL, presenting at the monthly Suncoast Oracle User's Group meeting. Had a wonderful time - great questions and back-and-forth. My favorite heckler was there, @oraclenered, AKA Chet Justice. I was in the middle of talking about how it's better to do your PL/SQL work in the Procedure Editor when Chet pipes up:

        Don't do it that way, that's wrong. Just press play to edit the PL/SQL directly in the database.

    Or something along those lines. I didn't get what the heck he was talking about. I had been showing how the Procedure Editor gives you much better feedback and support when working with PL/SQL. After a few back-and-forths I got to what Chet's main objection was, and again I'm going to paraphrase:

        You should develop offline in your SQL Worksheet. Don't do anything in the database until it's done.

    I didn't understand. Were developers expected to be able to internalize and mentally model the PL/SQL engine, see where their errors were, etc., in these offline scripts? No, please give Chet more credit than that.

    What is the ideal Oracle Development Environment?

    If I were back in the 'real world' of database development, I would do all of my development outside of the 'dev' instance. My development process looks a little something like this:

    - Do I have a program that already does something like this? Copy and paste.
    - Has some smart person already written something like this? Copy and paste.
    - Start typing in the white-screen-of-panic and bungle along until I get something that half-works.
    - Tweak, debug, and test until I have fooled my subconscious into thinking that it's 'good'.

    As you might understand, I don't want my co-workers to see the evolution of my code. It would seriously freak them out and I probably wouldn't have a job anymore (don't remind me that I already worked myself out of development).

    Run a Local Instance of Oracle on My Machine and Develop My Code Privately

    So here's what I like to do: I take a copy of development - that's what source control is for, after all - and run it where no one else can see it. I now get to be my own DBA. If I need a trace - no problem. If I want to run an ASH report, no worries. If I need to create a directory or run some DataPump jobs, that's all on me. When I get my code 'up to snuff', then I check it into source control and compile it into the official development instance, so my teammates suddenly go from seeing no program to a mostly complete program.

    Is this right? If not, it doesn't seem wrong to me. And after talking to Chet in the car on the way to the local cigar bar, it seems that he's of the same opinion. So what's so wrong with coding directly into a development instance? I think 'wrong' is a bit strong here. But there are a few pitfalls that you might want to look out for. A few come to mind - and I'm sure Chet could add many more, as my memory fails me at the moment. But here goes:

    - The development instance isn't properly backed up - would hate to lose that work.
    - Development is wiped once a week and copied over from Prod - don't laugh.
    - Someone clobbers your code.
    - You accidentally-on-purpose clobber someone else's code.
    - The more developers you have in a single fish pond, the greater the chance something 'bad' will happen.

    This Isn't One of Those Posts Where I Tell You What You Should Be Doing

    I realize many shops won't be open to allowing developers to stage their own local copies of Oracle. But I would at least be aware that many of your developers are probably doing this anyway - with or without your tacit approval. SQL Developer can do local file tracking, but you should be using source control too! I will say that I think it's imperative that you control your source code outside the database, even if your development team is comprised of a single developer. Store your source code in a file, and control that file in something like Subversion. You would be shocked at the number of teams that do not use a source control system. I know I continue to be shocked no matter how many times I meet another team running by the seat of their pants.

    I'd love to hear how your development process works. And of course I want to know how SQL Developer and the rest of our tools can better support your processes. And one last thing: if you want a fun and interactive presentation experience, be sure to have Chet in the room.

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

    Microsoft recently released the first CTP of a new development environment called WebMatrix, which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers.

    Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow easy publishing of content and databases to the web server.

    WebMatrix is made up of several components and technologies:

    IIS Developer Express

    IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (i.e., things like PHP or classic ASP, for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, giving much better compatibility between development and live deployment scenarios.

    SQL Server Compact 4.0

    Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft's attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your bin folder (or reference it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests.

    Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

    ASP.NET Razor View Engine

    What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework - just a lightweight scripting engine that works with plain markup plus embedded expressions and code.

    The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

        <!DOCTYPE html>
        <html>
            <head>
                <title></title>
            </head>
            <body>
            <h1>Razor Test</h1>

            <!-- simple expressions -->
            @DateTime.Now
            <hr />

            <!-- method expressions -->
            @DateTime.Now.ToString("T")

            <!-- code blocks -->
            @{
                List<string> names = new List<string>();
                names.Add("Rick");
                names.Add("Markus");
                names.Add("Claudio");
                names.Add("Kevin");
            }

            <!-- structured block statements -->
            <ul>
            @foreach(string name in names){
                <li>@name</li>
            }
            </ul>

            <!-- conditional code -->
            @if(true) {
                <!-- literal text embedding in code -->
                <text>
                true
                </text>;
            }
            else
            {
                <!-- literal text embedding in code -->
                <text>
                false
                </text>;
            }
            </body>
        </html>

    Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags, so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model.

    However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation, as well as many of the helper classes found in MVC.

    It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need some understanding of the .NET Framework in order to create dynamic content. The syntax is easier to read and grok, and much shorter to type, than ASP.NET alligator tags (<% %>), and it's easier to understand aesthetically what's happening in the markup code.

    Razor is also a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well - it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data-merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine.

    Razor also better fits the "view-based" approach where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future - and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0.

    Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

    WebMatrix Development Environment

    To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0: all code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder.

    WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed. WebMatrix leverages the Web Platform Installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice!

    You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

    Simplified Database Access

    The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

        @{
            var db = Database.OpenFile("FirstApp.sdf");
            string sql = "select * from customers where Id > @0";
        }
        <ul>
        @foreach(var row in db.Query(sql,1)){
            <li>@row.FirstName @row.LastName</li>
        }
        </ul>

    The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters as simple values. The result is returned as a collection of rows, which in turn are row objects with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter-passing schemes.

    Note that these queries use string-based SQL rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this was not included in .NET from the beginning, leaving developers to build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism but, to be fair, this is a common approach in scripting languages: simple, static data-object methods that perform simple data tasks with one line of code are a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field.

    FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models created outside of WebMatrix in separate assemblies, as required.

    The Good, the Bad, the Obnoxious - It's Still .NET

    The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
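
    Since the article notes you can drop back to plain ADO.NET when the Database helper falls short, here is a rough sketch of the same customers query written against the raw SQL Compact provider - a hedged illustration following the article's example table, not code from the article itself:

        using System;
        using System.Data.SqlServerCe; // System.Data.SqlServerCe.dll (SQL CE 4.0)

        public static class PlainAdoNetExample
        {
            // Roughly what db.Query("select * from customers where Id > @0", 1)
            // expands to when written by hand against the ADO.NET provider.
            public static void PrintCustomers(string sdfPath)
            {
                using (var conn = new SqlCeConnection("Data Source=" + sdfPath))
                using (var cmd = new SqlCeCommand(
                    "select FirstName, LastName from customers where Id > @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", 1);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0} {1}", reader["FirstName"], reader["LastName"]);
                        }
                    }
                }
            }
        }

    The verbosity gap between this and the two-line Database helper version is exactly the gap the WebMatrix helpers are trying to close.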
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy-there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference Who needs WebMatrix? Uhm… good Question Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other simpler platforms like PHP or Python which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks-and other hardcore PHP developers-back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC style helpers Microsoft provides are a good step in the right direction, but I suspect it’s not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers is trying to make .NET more accessible but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer the learning curve isn’t made any easier by WebMatrix it’s just been shifted a tad bit further along in your development endeavor when you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life changing issues of development, rather than the bread and butter scenarios that many developers are interested in to get their work accomplished. 
And that, in the end, may be WebMatrix's main raison d'être: to bring some focus back at Microsoft on the fact that simpler, higher-level solutions are needed to appeal to non-high-end developers, alongside the tools for high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it can really be a tool that either a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components, which would be a great addition to Visual Studio as well), the rest of the development environment just feels like crippleware, with required functionality missing: debugging and IntelliSense especially, but also general editor support. It's not clear whether this is because the product is still in an early alpha release, or whether it's simply designed to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, it's WebMatrix's technology pieces (which are really independent of the WebMatrix product) that are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of the concerns people have had, and have complained about for some time, with previous SQL Compact implementations. By far the most interesting and useful technology, though, is the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner here is going to be ASP.NET MVC, which can now have a cleaner view model that isn't made inconsistent by the baggage of non-implemented WebForms features that don't work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything you'd want: the tools provided are minimal and offer nothing you can't get in Visual Studio today, except basic Razor syntax highlighting, so there's little need to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step – or several steps – back, take another look, and realize just how far we've come.
WebMatrix is one of those reminders and one that likely will result in some positive changes on the platform as a whole.
© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
“Is Code Review Important and Effective?” There is a consensus across the industry that code review is an effective and practical way to collar code inconsistency and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
- Bugs are found faster
- Developers are forced to write readable code (code that can be read without explanation or introduction!)
- Optimization methods/tricks/productive programs spread faster
- Programmers, as specialists, "evolve" faster
- It's fun

“Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” – Wikipedia

Nowhere does the definition mention whether it's better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the two approaches independently.

Code Review Before Check In or Code Review After Check In?

Approach 1 – Code review before check-in
The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, whether it will result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.
Image 1 – code review before check-in
Pros:
- Everything that gets committed to source control is reviewed.
- Minimizes the chances of smelly code making its way into the code base.
- Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.
Cons:
- Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
- The changes have not been through a CI build, so it's hard to say whether the code abides by all build quality standards. Inconsistent!
- Cumbersome to track the actual code review process.
- Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

Approach 2 – Code review after check-in
The developer checks in; random code reviews are performed on the checked-in code.
Image 2 – code review after check-in
Pros:
- The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct developers to ensure zero FxCop, StyleCop and static code analysis issues before check-in, and the code is cleaner and smell-free even before the code review.
- No offline development; developers can continue to develop against source control.
Cons:
- Bad code can easily make its way into the code base.
- Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

Approach 3 – Hybrid approach
The community advocates a more hybrid approach: a blend of tooling and the human accountability quotient.
Image 3 – hybrid approach
1. Code review high-impact check-ins.
It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build yet.
2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate a manual code review for the individuals who make it onto this list of shame most often.
3. During merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a Scrum project this is easier because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.
Let the tooling work for you: if someone breaks the CI build often, put them on gated check-in builds until you see improvement. If someone appears on the top-10 list of shame generated via the build, ensure that all their code is reviewed till you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put till the review is complete.

What do the experts say?
So I asked a few experts what they thought of a "code review quality gate before checking in code".

Terje Sandstrom | Microsoft ALM MVP
You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build – and a green CI build is the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. With a requirement that code is checked in in small pieces – 4-8 hours of work max, and AT LEAST daily check-ins – a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage, and static code analysis. Unfortunately it takes some time – it would be great to have it on CI builds, but… – so it's run scheduled every night. Based on this we get, among other stuff, top-10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top-10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

David V. Corbin | Visual Studio ALM Ranger
I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics):
- The "pre-check-in" review is usually for elements that may impact the project as a whole. Think of it as another "gate", along with passing unit tests.
- The "post-check-in" review may very well not be at the changeset level, but correlate to a review at the "user story" level. Again, this depends on the team dynamics in play….

Robert MacLean | Microsoft ALM MVP
I do not think there is a single right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. An example: those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

Abhimanyu Singhal | Visual Studio ALM Ranger
It depends on the scenario. We recommend post-check-in reviews when:
1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
2. We need to trace all changes and track history.
3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or it can be rejected.
Pre-check-in reviews are used when:
1. There is a high risk factor associated.
2. Reviewers generally (most of the time) have immediate availability.
3. The team does not have strict tracking needs.
Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

Thomas Schissler | Visual Studio ALM Ranger
This is an interesting discussion; I'm right now discussing the details of executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side I'd like to point out.
1. If you do reviews per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept.
2. If you do later reviews, for example reviewing PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed in a later check-in where the dev has already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes from other PBIs.

Jakob Leander | Sr. Director, Avanade
In my experience, manual code review:
1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project).
2. When a project actually does it, it is often not done right away = errors pile up.
3. Requires a lot of time discussing/defining the standard and for the team to learn it.
However, code review is very important, since even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review:
- Architects up front do "at least one best-practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
- The dev lead on the project continuously browses the code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to "go bad" even during later code changes.
- Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
This has HUUGE benefits:
- You can easily tweak it to reach the level you desire together with the customer.
- It is easy to measure for both developers and management.
- It is 100% consistent across the code base.
- It gets validated all the time, so you never end up getting hammered by a customer review at the end.
- It is easy to tell a developer that you do not want code back unless it has zero errors = minimized communication.
You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the issue count to grow uncontrolled. On the projects I run I require code analysis to have been run on the code before check-in (a check-in rule). This means:
- You have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds.
- You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app.
Tip: place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS; a minimal sketch of such a project-file reference appears at the end of this article).
Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the three issues above, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

Tirthankar Dutta | Director, Avanade
I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire code base… Also, especially on multi-site projects, one should strive to architect in a way that lets the men manage the framework while the boys write the repetitive code… it scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design.

Bruno Capuano | Microsoft ALM MVP
For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

David Jobling | Global Sr. Director, Avanade
Peer review is the only way to scale, and it's a great practice for everyone on the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune your attention accordingly.

Mikkel Toudal Kristiansen | Manager, Avanade
If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

Jesse Houwing | Visual Studio ALM Ranger
I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
I'd like to add a few more points:
- Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks/sits together or peer reviews, there's no need to enforce a before-check-in policy. Peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
- Under stress, enforce pre-check-in reviews. It might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues start to be created or pile up.
- Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned. HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com) and various teams at Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
- If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
So my philosophy:
- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click "suppress all" to ignore the warnings.
- Under stress, tighten the process; it's under stress that the problems of late reviews really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in versus post-check-in doesn't really matter, as long as the review is done before the code is released. It'll just be much more expensive to fix any review outcomes the later you find them.
---
I would love to hear what you think!
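As a footnote to Jakob Leander's tip above about keeping custom Code Analysis rule files inside the solution: below is a minimal sketch of how such a rule set is typically referenced from a project file with a solution-relative path, using the standard Code Analysis MSBuild properties. The file name and folder here are assumptions for illustration:

<!-- Inside the .csproj, e.g. in a configuration's PropertyGroup -->
<PropertyGroup>
  <!-- Run Code Analysis as part of every build -->
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <!-- Relative path, resolved against the project file, so the reference
       keeps working after branching or moving the whole solution -->
  <CodeAnalysisRuleSet>..\Build\CustomRules.ruleset</CodeAnalysisRuleSet>
  <!-- Optionally turn CA warnings into build errors, per the
       "zero errors before check-in" policy described above -->
  <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>
</PropertyGroup>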

    Read the article

  • Is SugarCRM really adequate for custom development (or adequate at all)? [closed]

    - by dukeofgaming
Have you used SugarCRM for custom development successfully? If so, did you do it programmatically or through the Module Builder? Were you successful? If not, why not? I used SugarCRM for a project about two years ago. I ran into errors from the very installation, having to hack the actual installation file to deploy the software on the server, plus other errors that I can't recall now. Two years later, I'm picking it up for a project once again, and I'm feeling like I should have developed the whole thing from scratch myself. Some examples:
- I couldn't install it on the server (again). I had to install it locally, then copy the files and database over to the server and manually edit the config file.
- I constantly get deployment errors from the Module Builder. One reason is that SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist; I keep deleting that record and it keeps coming back corrupt. I get other deployment errors that I have not figured out, and then I have to roll back all files and the database to try again.
- I deleted a custom module with relationships; the relationships stayed in the other modules and cannot be deleted anymore. PHP warnings all over the place.
- Quick create for custom modules does not appear; a hack is needed.
- Its whole cache directory is a joke; permanent data/files are stored there.
- The Module Builder interface makes required fields disappear.
- Edit the wrong thing and the Module Builder won't deploy again; then pray that Quick Repair and/or Rebuild Relationships do the trick.
My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low-quality piece of software. This scared me even more: http://amplicate.com/hate/sugarcrm; a quote: "I wish this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess!" I figured that perhaps some of you have had similar experiences, and have either stuck with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were – or what your current situations are – to make up my own mind, since the project I'm working on is long-term and I'm feeling SugarCRM will be more an obstacle than an aid. After further failed attempts to continue using this software, I kept stumbling into dead ends with the module editor; I could only recover from these errors by using version control. We are now moving on to a custom implementation using Symfony; perhaps if we had been using SugarCRM with only its out-of-the-box modules we would have stuck with it.

    Read the article

  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
I’ve been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, check out my “Busy Developer’s Guide to the Kinect SDK Beta”. Let’s get started:

KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed you will have the option to select the following templates: KinectDepth, KinectSkeleton and KinectVideo. Please note that KinectContrib requires the Kinect for Windows SDK Beta to be installed. [Image: Kinect templates after installing the template pack.] The reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for the MainWindow.xaml in the “Video” template:

<Window x:Class="KinectVideoApplication1.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="480" Width="640">
    <Grid>
        <Image Name="videoImage"/>
    </Grid>
</Window>

and MainWindow.xaml.cs:

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using Microsoft.Research.Kinect.Nui;

namespace KinectVideoApplication1
{
    public partial class MainWindow : Window
    {
        // Instantiate the Kinect runtime. Required to initialize the device.
        // IMPORTANT NOTE: You can pass the device ID here, in case more than
        // one Kinect device is connected.
        Runtime runtime = new Runtime();

        public MainWindow()
        {
            InitializeComponent();

            // Runtime initialization is handled when the window is opened. When
            // the window is closed, the runtime MUST be uninitialized.
            this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
            this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);

            // Handle the content obtained from the video camera, once received.
            runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady);
        }

        void MainWindow_Unloaded(object sender, RoutedEventArgs e)
        {
            runtime.Uninitialize();
        }

        void MainWindow_Loaded(object sender, RoutedEventArgs e)
        {
            // Since only a color video stream is needed, RuntimeOptions.UseColor is used.
            runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor);

            // You can adjust the resolution here.
            runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
        }

        void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
        {
            PlanarImage image = e.ImageFrame.Image;
            BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96,
                PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
            videoImage.Source = source;
        }
    }
}

You will find this template pack very handy, especially if you are new to Kinect development.

Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll using either the WPF or WinForms version. You will then have access to several extension methods that can, for example, help you save an image (a hedged sketch follows below). For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/.

Kinductor – This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those figuring out how to structure their first Kinect application.
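To give a flavour of the Coding4Fun Kinect Toolkit one-liners mentioned above, here is a minimal, hedged sketch of saving a camera frame from inside a frame-ready handler. The ToBitmapSource() and Save() extension methods and the ImageFormat enum are assumed from the toolkit's commonly published examples, so double-check the exact names and signatures against http://c4fkinect.codeplex.com/ before relying on them:

// Assumes a reference to the Coding4Fun Kinect WPF assembly.
// Method names/signatures below are assumptions based on the toolkit's examples.
using Coding4Fun.Kinect.Wpf;
using Microsoft.Research.Kinect.Nui;

void runtime_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    // Convert the raw Kinect frame to a WPF BitmapSource and save it to
    // disk in one chained call, instead of the manual BitmapSource.Create(...)
    // plumbing shown in the template code above.
    e.ImageFrame.ToBitmapSource().Save("snapshot.jpg", ImageFormat.Jpeg);
}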
Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.  Subscribe to my feed

    Read the article

  • Basic Team Foundation Server 2010 Question - System Resource Usage?

    - by user127954
Guys/gals, I have a really basic Team Foundation Server 2010 question. For those of you who have played around with TFS 2010: is it a lot more lightweight than TFS 2008? I remember installing all the pieces needed for TFS 2008 on one machine at work, and I remember it being a pain to install (I know 2010 is supposed to be much better). We wanted to play around with it a little bit to see if it met our needs. Well, it brought that machine to a screeching halt. I need a source control repository for home, and I thought: why not just install TFS 2010 so I can get familiar with it, and maybe in the future make a better sell to my organization and FINALLY get them to move off of SourceSafe? But my concern is that I only have one server at home (granted, I already have SQL Server installed) and I don't want to buy a machine just for this purpose. I'd also like to get more familiar with CI too. Anyway, if Team Foundation Server is going to be too heavy I'll just use Subversion, but I'd like to use TFS if possible. Any help would be appreciated. Thanks, Ncage

    Read the article

  • Reg Gets a Job at Red Gate (and what happens behind the scenes)

    - by red(at)work
Mr Reg Gater works at one of Cambridge’s many high-tech companies. He doesn’t love his job, but he puts up with it because... well, it could be worse. Every day he drives to work around the Red Gate roundabout, wondering what his boss is going to blame him for today, and wondering if there could be a better job out there for him. By late morning he already feels like handing his notice in. He got the hacky look from his boss for being 5 minutes late, and then they ran out of tea. Again. He goes to the local sandwich shop for lunch, and picks up a Red Gate job menu and a Book of Red Gate while he’s waiting for his order. That night, he goes along to Cambridge Geek Nights and sees some very enthusiastic Red Gaters talking about the work they do; it sounds interesting and, of all things, fun. He takes a quick look at the job vacancies on the Red Gate website, and an hour later realises he’s still there – looking at videos, photos and people profiles. He especially likes the Red Gate’s Got Talent page, and is very impressed with Simon Johnson’s marathon time. He thinks that he’d quite like to work with such awesome people. It just so happens that Red Gate recently decided that they wanted to hire another hot-shot team member. Behind the scenes, the wheels were set in motion: the recruitment team met with the hiring manager to understand exactly what they’re looking for, and to decide what interview tests to do, who will do the interviews, and to kick-start any interview training those people might need. Next up, a job description and job advert were written, and the job was put on the market. Reg applies, and his CV lands in the Recruitment team’s inbox, and they open it up with eager anticipation that Reg could be the next awesome new starter. He looks good, and in a jiffy they’ve arranged an interview. Reg arrives for his interview, and is greeted by a smiley receptionist. She offers him a selection of drinks and he feels instantly relaxed. A couple of interviews and an assessment later, he gets a job offer. We make his day and he makes ours by accepting, and becoming one of the 60 new starters so far this year. Behind the scenes, things start moving all over again. The HR team arranges for a “Welcome” goodie box to be whisked out to him, prepares his contract, sends an email to Information Services (or IS for short – we’ll come back to them), keeps in touch with Reg to make sure he knows what to expect on his first day, and of course asks him to fill in the all-important wiki questionnaire so his new colleagues can start to get to know him before he even joins. Meanwhile, the IS team see an email in SupportWorks from HR. They see that Reg will be starting in the sales team in a few days’ time, and they know exactly what to do. They pull out a new machine, and within minutes have used their automated deployment software to install every piece of software that a new recruit could ever need. They also check with Reg’s new manager to see if he has any special requirements that they could help with. Reg starts and is amazed to find a fully configured machine sitting on his desk, complete with stationery and all the other tools he’ll need to do his job. He feels even more cared for after he gets a workstation assessment, and realises he’d be comfier with an ergonomic keyboard and a footstool. They arrive minutes later, just like that. His manager starts him off on his induction and sales training.
Along with job-specific training, he’ll also have a buddy to help him find his feet, and loads of pre-arranged demos and introductions. Reg settles in nicely, and is great at his job. He enjoys the canteen, and regularly eats one of the 40,000 meals provided each year. He gets used to the selection of teas that are available, develops a taste for champagne launch parties, and has his fair share of the 25,000 cups of coffee downed at Red Gate towers each year. He goes along to some Feel Good Fund events, and donates a little something to charity in exchange for a turn on the chocolate fountain. He’s looking a little scruffy, so he decides to get his hair cut in between meetings, just in time for the Red Gate birthday company photo. Reg starts a new project: identifying existing customers to up-sell to new bundles. He talks with the web team to generate lists of qualifying customers who haven’t recently been sent marketing emails, and sends emails out, using a new in-house-developed tool to schedule follow-up calls in CRM for the same group. The customer responds, saying they’d like to upgrade but are having a licensing problem – Reg sends the issue to Support, and it gets routed to the web team. The team identifies a workaround, and the bug gets scheduled into the next maintenance release in a fortnight’s time (hey, they got lucky). With all the new stuff Reg is working on, he realises that he’d be way more efficient if he had a third monitor. He speaks to IS and they get him one – no argument. He also needs a test machine and then some extra memory. Done. He then thinks he needs an iPad, and goes to ask for one. He gets told to stop pushing his luck. Some time later, Reg’s wife has a baby, so Reg gets 2 weeks of paid paternity leave and a bunch of flowers sent to his house. He signs up to the childcare scheme so that he doesn’t have to pay National Insurance on the first £243 of his childcare. The accounts team makes it all happen seamlessly, as they did with his Give As You Earn payments, which come out of his wages and go straight to his favourite charity. Reg’s sales career is going well. He’s grateful for the help that he gets from the product support team. How do they answer all those 900-ish support calls so effortlessly each month? He’s impressed with the patches that are sent out to customers who find “interesting behaviour” in their tools, and to the customers who just must have that new feature. A little later in his career at Red Gate, Reg decides that he’d like to learn about management. He goes on some management training specially customised for Red Gate, joins the Management Book Club, and gets together with other new managers to brainstorm how to get the most out of one-to-one meetings with his team. Reg decides to go for a game of Foosball to celebrate his good fortune with his team, and has to wait for Finance to finish. While he’s waiting, he reflects on the wonderful time he’s had at Red Gate. He can’t put his finger on what it is exactly, but he knows he’s on to a good thing. All of the stuff that happened to Reg didn’t just happen magically. We’ve got teams of people working relentlessly behind the scenes to make sure that everyone here is comfortable, safe, well fed and caffeinated to the max.

    Read the article

< Previous Page | 298 299 300 301 302 303 304 305 306 307 308 309  | Next Page >