Search Results

Search found 68249 results on 2730 pages for 'sudo work'.


  • Easily view a list of changes of upgraded packages.

    - by D Connors
So, let's say I run sudo apt-get upgrade on my Lucid Lynx and it upgrades a couple of packages I'm interested in. Is there a command to run that will open some kind of info or manual that tells me what changes were made in this new version of the package? For instance, say I run apt-get upgrade and it installs a new version of empathy. Do I have to go over to their site to review the changes made in this version, or is there a quicker command-line way?
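    Update: a few things I've since come across that look promising, though I'm not sure how much of this is available on Lucid (so treat these as leads, not a confirmed answer):

        # changelogs shipped with the installed package are always on disk:
        zless /usr/share/doc/empathy/changelog.Debian.gz

        # apt-listchanges, if installed, shows changelogs during the upgrade itself:
        sudo apt-get install apt-listchanges

        # newer versions of apt can fetch a package's changelog directly:
        apt-get changelog empathy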


  • Problems with Google Maps API v3 + jQuery UI Tabs

    - by Bears will eat you
There are a number of problems, which seem to be fairly well-known, when using the Google Maps API to render a map within a jQuery UI tab. I've seen SO questions posted about similar issues (here and here, for example) but the solutions there only seem to work for v2 of the Maps API. Other references I checked out are here and here, along with pretty much everything I could dig up through Googling. I've been trying to stuff a map (using v3 of the API) into a jQuery tab with mixed results. I'm using the latest versions of everything (currently jQuery 1.3.2, jQuery UI 1.7.2, don't know about Maps). This is the markup & javascript:

        <body>
          <div id="dashtabs">
            <span class="logout"><a href="go away">Log out</a></span>
            <!-- tabs -->
            <ul class="dashtabNavigation">
              <li><a href="#first_tab">First</a></li>
              <li><a href="#second_tab">Second</a></li>
              <li><a href="#map_tab">Map</a></li>
            </ul>
            <!-- tab containers -->
            <div id="first_tab">This is my first tab</div>
            <div id="second_tab">This is my second tab</div>
            <div id="map_tab">
              <div id="map_canvas"></div>
            </div>
          </div>
        </body>

    and

        $(document).ready(function() {
          var map = null;
          $('#dashtabs').tabs();
          $('#dashtabs').bind('tabsshow', function(event, ui) {
            if (ui.panel.id == 'map_tab' && !map) {
              map = initializeMap();
              google.maps.event.trigger(map, 'resize');
            }
          });
        });

        function initializeMap() {
          // Just some canned map for now
          var latlng = new google.maps.LatLng(-34.397, 150.644);
          var myOptions = {
            zoom: 8,
            center: latlng,
            mapTypeId: google.maps.MapTypeId.ROADMAP
          };
          return new google.maps.Map($('#map_canvas')[0], myOptions);
        }

    And here's what I've found that does/doesn't work (for Maps API v3):

    - Using the off-left technique as described in the jQuery UI Tabs documentation (and in the answers to the two questions I linked) doesn't work at all. In fact, the best-functioning code uses the CSS .ui-tabs .ui-tabs-hide { display: none; } instead.
    - The only way to get a map to display in a tab at all is to set the CSS width and height of #map_canvas to absolute values. Changing the width and height to auto or 100% causes the map not to display at all, even if it's already been successfully rendered (using absolute width and height).
    - I couldn't find it documented anywhere outside of the Maps API, but map.checkResize() won't work anymore. Instead, you have to fire a resize event by calling google.maps.event.trigger(map, 'resize').
    - If the map is not initialized inside a function bound to a tabsshow event, the map itself is rendered correctly but the controls are not - most are just plain missing.

    So, here are my questions:

    1. Does anyone else have experience accomplishing this same feat? If so, how did you figure out what would actually work, since the documented tricks don't work for Maps API v3?
    2. What about loading tab content using Ajax as per the jQuery UI docs? I haven't had a chance to play around with it, but my guess is that it's going to break Maps even more. What are the chances of getting it to work (or is it not worth trying)?
    3. How do I make the map fill the largest possible area? I'd like it to fill the tab and adapt to page resizes, much in the way that it's done over at maps.google.com. But, as I said, I appear to be stuck with applying only absolute width and height CSS to the map div.

    Sorry if this was long-winded, but this might be the only documentation for Maps API v3 + jQuery tabs. Cheers!
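    Update: for question 3, one thing I still plan to try is based on how percentage heights resolve in CSS - height: 100% only works if every ancestor has an explicit height - so something like this (an untested guess on my part) might let the map fill the tab:

        /* untested guess: give the whole ancestor chain a height
           so that 100% on #map_canvas has something to resolve against */
        html, body { height: 100%; margin: 0; }
        #dashtabs, #map_tab { height: 100%; }
        #map_canvas { width: 100%; height: 100%; }

    combined with re-firing google.maps.event.trigger(map, 'resize') from a $(window).resize() handler so the map redraws when the page size changes.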


  • Ruby on Rails configuration

    - by Themasterhimself
    I'm using the following guide to get started with Rails on Ubuntu 9.10: http://guides.rails.info/getting_started.html. I have installed both Ruby and gem:

        gokul@gokul-laptop:~$ ruby -v
        ruby 1.8.7 (2009-06-12 patchlevel 174) [i486-linux]
        gokul@gokul-laptop:~$ gem -v
        1.3.6
        gokul@gokul-laptop:~$

    For Rails, sudo gem install rails doesn't seem to give any response, so I used the Synaptic package manager to install it instead. And it seems to have installed correctly:

        gokul@gokul-laptop:~$ rails
        Usage: /usr/bin/rails /path/to/your/app [options]
        Options:
            -r, --ruby=path            Path to the Ruby binary of your choice (otherwise scripts use env, dispatchers current path). Default: /usr/bin/ruby1.8
            -d, --database=name        Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite2/sqlite3/frontbase/ibm_db). Default: sqlite3
            -D, --with-dispatchers     Add CGI/FastCGI/mod_ruby dispatches code to generated application skeleton. Default: false
                --freeze               Freeze Rails in vendor/rails from the gems generating the skeleton. Default: false
            -m, --template=path        Use an application template that lives at path (can be a filesystem path or URL). Default: (none)

        Rails Info:
            -v, --version              Show the Rails version number and quit.
            -h, --help                 Show this help message and quit.

        General Options:
            -p, --pretend              Run but do not make any changes.
            -f, --force                Overwrite files that already exist.
            -s, --skip                 Skip files that already exist.
            -q, --quiet                Suppress normal output.
            -t, --backtrace            Debugging: show backtrace on errors.
            -c, --svn                  Modify files with subversion. (Note: svn must be in path)
            -g, --git                  Modify files with git. (Note: git must be in path)

        Description:
            The 'rails' command creates a new Rails application with a default directory structure and configuration at the path you specify.

        Example:
            rails ~/Code/Ruby/weblog

            This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
            See the README in the newly created application to get going.
        gokul@gokul-laptop:~$

    The app folder is created with all the proper folders. The problem starts with the following commands:

        gokul@gokul-laptop:~$ sudo gem install bundler
        [sudo] password for gokul:
        Successfully installed bundler-0.9.24
        1 gem installed
        Installing ri documentation for bundler-0.9.24...
        Installing RDoc documentation for bundler-0.9.24...
        gokul@gokul-laptop:~$ bundle install
        Could not locate Gemfile
        gokul@gokul-laptop:~$

    Coming to the database, the default sqlite3 seems to have installed correctly:

        gokul@gokul-laptop:~$ sqlite3
        SQLite version 3.6.16
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite>

    The "Welcome aboard" page cannot be found at http://localhost:3000 after executing the following commands...
        gokul@gokul-laptop:~/Desktop$ rails blog
        create
        create app/controllers
        create app/helpers
        create app/models
        create app/views/layouts
        create config/environments
        create config/initializers
        create config/locales
        create db
        create doc
        create lib
        create lib/tasks
        create log
        create public/images
        create public/javascripts
        create public/stylesheets
        create script/performance
        create test/fixtures
        create test/functional
        create test/integration
        create test/performance
        create test/unit
        create vendor
        create vendor/plugins
        create tmp/sessions
        create tmp/sockets
        create tmp/cache
        create tmp/pids
        create Rakefile
        create README
        create app/controllers/application_controller.rb
        create app/helpers/application_helper.rb
        create config/database.yml
        create config/routes.rb
        create config/locales/en.yml
        create db/seeds.rb
        create config/initializers/backtrace_silencers.rb
        create config/initializers/inflections.rb
        create config/initializers/mime_types.rb
        create config/initializers/new_rails_defaults.rb
        create config/initializers/session_store.rb
        create config/environment.rb
        create config/boot.rb
        create config/environments/production.rb
        create config/environments/development.rb
        create config/environments/test.rb
        create script/about
        create script/console
        create script/dbconsole
        create script/destroy
        create script/generate
        create script/runner
        create script/server
        create script/plugin
        create script/performance/benchmarker
        create script/performance/profiler
        create test/test_helper.rb
        create test/performance/browsing_test.rb
        create public/404.html
        create public/422.html
        create public/500.html
        create public/index.html
        create public/favicon.ico
        create public/robots.txt
        create public/images/rails.png
        create public/javascripts/prototype.js
        create public/javascripts/effects.js
        create public/javascripts/dragdrop.js
        create public/javascripts/controls.js
        create public/javascripts/application.js
        create doc/README_FOR_APP
        create log/server.log
        create log/production.log
        create log/development.log
        create log/test.log
        gokul@gokul-laptop:~/Desktop$ cd blog
        gokul@gokul-laptop:~/Desktop/blog$ rake db:create
        (in /home/gokul/Desktop/blog)
        gokul@gokul-laptop:~/Desktop/blog$ rails server
        create
        create app/controllers
        create app/helpers
        create app/models
        create app/views/layouts
        create config/environments
        create config/initializers
        create config/locales
        create db
        create doc
        create lib
        create lib/tasks
        create log
        create public/images
        create public/javascripts
        create public/stylesheets
        create script/performance
        create test/fixtures
        create test/functional
        create test/integration
        create test/performance
        create test/unit
        create vendor
        create vendor/plugins
        create tmp/sessions
        create tmp/sockets
        create tmp/cache
        create tmp/pids
        create Rakefile
        create README
        create app/controllers/application_controller.rb
        create app/helpers/application_helper.rb
        create config/database.yml
        create config/routes.rb
        create config/locales/en.yml
        create db/seeds.rb
        create config/initializers/backtrace_silencers.rb
        create config/initializers/inflections.rb
        create config/initializers/mime_types.rb
        create config/initializers/new_rails_defaults.rb
        create config/initializers/session_store.rb
        create config/environment.rb
        create config/boot.rb
        create config/environments/production.rb
        create config/environments/development.rb
        create config/environments/test.rb
        create script/about
        create script/console
        create script/dbconsole
        create script/destroy
        create script/generate
        create script/runner
        create script/server
        create script/plugin
        create script/performance/benchmarker
        create script/performance/profiler
        create test/test_helper.rb
        create test/performance/browsing_test.rb
        create public/404.html
        create public/422.html
        create public/500.html
        create public/index.html
        create public/favicon.ico
        create public/robots.txt
        create public/images/rails.png
        create public/javascripts/prototype.js
        create public/javascripts/effects.js
        create public/javascripts/dragdrop.js
        create public/javascripts/controls.js
        create public/javascripts/application.js
        create doc/README_FOR_APP
        create log/server.log
        create log/production.log
        create log/development.log
        create log/test.log
        gokul@gokul-laptop:~/Desktop/blog$

    Hope someone can help me with this...
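    Update: comparing the help output above with the guide, I notice the guide is written for Rails 3, where rails server starts the application, while the usage text from the Synaptic install looks like Rails 2.3. That would also explain why rails server above generated a brand-new app named "server" instead of starting one. If that's right (I'm guessing here), the Rails 2.3 equivalent would be:

        # Rails 2.3 starts the server from a script inside the app directory
        cd ~/Desktop/blog
        ./script/server
        # then browse to http://localhost:3000

    Can anyone confirm?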


  • Does there exist an "idea checkout system" on the Internet?

    - by TimeSpace Traveller
    Greetings. I would like to ask the following question: is there anything on the Internet like an "idea checkout" system?

    Situation: I'm a software developer. When my current job started 2 years ago, my mentor at the time pointed me to the open source world. I have put only a little time into looking at some of the open source projects, let alone any contribution. However, it is my wish to start developing something outside of work. Well, except for one little problem: I don't know what to develop! It is not about the technical knowledge; the problem is that I am not a creative person. I am very good at analytical thinking, as well as debugging skills. When told by my work partners to develop a solution, I can get it done without a problem. However, outside of work, I have no idea what to develop. When I look at the Internet, it seems that so many people have already been developing so much interesting stuff, making me wonder what I could develop so that I would not reinvent something that already exists.

    That starts to make me wonder. On the Internet, is there anything like an "idea checkout" system or society? For example, some people would throw in a software idea, and the system would keep it as an "inventory"; later, a potential software developer would "check out" the idea, just as people check out a book from the library. Then, the developer would check the "idea" back in, with a certain kind of work-in-progress or developed software, thus becoming an open-source project. I have just noticed that here at stackoverflow there is a "Project-Ideas" tag, so perhaps that can provide me some ideas on what to develop; still, what I wonder about is a system where people would provide ideas, and people would check out ideas to develop/implement into actual solutions. Is there such a system or society existing anywhere on the Internet? Any input is welcome! Thank you very much.

    Update: Thank you to everyone who has answered my question. Certainly, "getting ideas" is part of my problem; as a software developer, however, I'm concerned with more than just "getting ideas". What concerns me more, as I have commented, is the existence of such an idea-exchanging ecosystem, capable of initiating open-source projects. I'll put an example here.

    Say person A has an idea for a music search program - but not search by the attributes of the music (composer, singer, publisher, lyrics, etc.); instead, he wants a program (and a database) to search for a piece of music by melody. Very often, people only remember a piece of music by its melody, not even the name of the music (e.g. the music he wants was heard only once in a bookstore, but the melody just gets stuck in his head!). In order to find that piece, normally he would just have to search for it blindly, and spend a long time doing so. A search by melody would enable person A to find the piece much quicker. However, he would not want to work on it personally, not just because of the complexity (he is not a musician and/or programmer, knowing almost nothing about music systems in computers, search algorithms, etc.), but also because of legal issues (RIAA??), so he would just like to keep the idea in some place and let other people work on it. Now, a developer (person B) may be at the same stage as I am right now, wishing to find something to develop, but not having an idea. With the idea-exchanging ecosystem, person B will search, somehow discover person A's music search idea, and feel interested enough to work on it. So he "checks out" the idea, starts working on it (at least a skeleton), and checks back in with his progress. An open-source project starts from here, fulfilling person A's wish and person B's programming desire.

    The above is just an example (such systems already exist on the Internet), but it illustrates what I have in mind for an idea exchange system. My main concern is an idea-exchanging ecosystem, not at a personal and unorganized level, but as a semi-organized protocol specifically for software developers, with actual projects coming out as the fruits. Not about "projects", but about "ideas and the products of ideas". Hopefully that clears up the original idea of this question. Any input is welcome; in fact, I would like to hear from as many people as possible how everyone thinks about this. Thank you very much!


  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come.

    So, if we start at the start, how did you get started in programming?

    When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren't very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that.

    How did you learn to program? Was there someone helping you out?

    Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like "Oh my God! I don't have to write out the same line over and over and over again any more. It's amazing!"

    When did you think this might be something that you actually wanted to do as a career?

    For a long time, I thought it wasn't something that you would do as a career, because it was too much fun to be a career. I thought I'd do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn't interest me whatsoever. I thought "I could be a programmer, and there's loads of money there, and I'm good at it, and it's fun", but also that I shouldn't spoil my hobby. Now I don't really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it.

    Do you think you learnt much about programming at university?

    Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn't have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually being used all the time. I draw inspiration from it to write imperative programs, which I think is actually becoming really fashionable now, but I've been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would never have done on my own, so it was important. And then there are knowledge-based courses which are not about programming itself but about things that have been programmed, like TCP. Those are really important for examples of how to approach things.

    Did you do any internships while you were at university?

    Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I'm so much better at writing nice code now than I used to be back then.

    Was there just not a culture of looking after your code?

    There was, they just didn't hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn't have any computer scientists, which is a bit odd in a programming company. But even beyond that, they didn't have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn't the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn't be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don't know about, but they shouldn't consistently be the ones who know the most. It's not a good environment to learn.

    I was going to ask how you found working with people who were more experienced than you…

    When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I've been coding since I was tiny. At university there were people who were cleverer than me, but there weren't very many who were more experienced programmers than me. During my internship, I didn't find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate's not so big on the actual code review, at least it wasn't when I started. We did an entire product release and then somebody looked over all of the UI of that product which I'd written and said what they didn't like. By that point, it was way too late and I'd disagree with them.

    Do you think the lack of code reviews was a bad thing?

    I think if there's going to be any oversight of new people, then it should be continuous rather than chunky. For me I don't mind too much; I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I'd be keener on oversight, and then the right way to do it is continuously and in very, very small chunks.

    Have you had any significant projects you've worked on outside of a job?

    When I was a teenager I wrote all sorts of stuff. I used to write games; I derived how to do isometric projections myself once. I didn't know what the word was so I couldn't Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week.

    Could you explain a little more about NAct?

    It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don't need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other's toes. You could lock, but locks are really dangerous if you're using more than one of them. You get interactions like deadlocks, and that's just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks, as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors; NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like.

    Do you think that this is suitable for the majority of parallel programming, or do you think it's only suitable for specific cases?

    It's suitable for most difficult parallel programming. If you've just got a hundred web requests which are all independent of each other, then I wouldn't bother, because it's easier to just spin them up in separate threads and they can proceed independently of each other. But where you've got difficult parallel programming, where you've got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about.

    When you're using actors, you presumably still have to write your code in a different way from how you would otherwise write single-threaded code?

    You can't use actors with any methods that have return types, because you're not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you've got to use tasks so that you can use "async" and "await" to wait asynchronously for it. But other than that, you can still stick things in classes, so it's not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn't much of a jump.

    If I've already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop?

    Usually pretty easy. If you can identify even one boundary where things look like messages, and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor, and it will parallelise as it goes across that boundary. Not too difficult.

    If we do get 1000-core desktop PCs, do you think actors will scale up?

    It's hard. There are always in the order of twenty to fifty actors in my whole program, because I tend to write each component as actors, and I tend to have one instance of each component. So this won't scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it's like a distributed hash table, but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn't be too big a leap to start doing proper parallelism.
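    To make that concrete, here is a minimal, hand-rolled sketch of the actor idea Alex describes: one thread of execution owns the data, everyone else posts messages to its mailbox, and reads come back asynchronously as tasks. (The CounterActor class and its methods are made up for illustration; NAct's actual API differs and instead lets you write the same thing as ordinary method calls on proxied objects.)

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        // One actor: the 'count' field belongs to this actor's thread of
        // execution alone. Other threads communicate only by queuing messages.
        class CounterActor
        {
            private readonly BlockingCollection<Action> mailbox =
                new BlockingCollection<Action>();
            private int count; // never touched by any other thread

            public CounterActor()
            {
                // A single dedicated thread drains the mailbox, so messages
                // are processed one at a time and no locks are needed.
                Task.Factory.StartNew(() =>
                {
                    foreach (Action message in mailbox.GetConsumingEnumerable())
                    {
                        message();
                    }
                }, TaskCreationOptions.LongRunning);
            }

            // Fire-and-forget write: returns void, so the caller never waits.
            public void Increment()
            {
                mailbox.Add(() => count++);
            }

            // Reads return a Task instead of a value, so callers can 'await'
            // the answer rather than blocking inside another actor.
            public Task<int> GetCount()
            {
                var tcs = new TaskCompletionSource<int>();
                mailbox.Add(() => tcs.SetResult(count));
                return tcs.Task;
            }
        }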
    Do you think it helps if actors get baked into the language, similarly to Erlang?

    Erlang is excellent in that it has thread-local garbage collection. C# doesn't, so there's a limit to how well C# actors can possibly scale, because there's a single garbage-collected heap shared between all of them. When you do a global garbage collection, you've got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they're insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it's quite well-suited to support actors, even if it's not baked into the language.

    Speaking of language design, do you have a favourite programming language?

    I'll choose a language which I've never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don't have to write tests so much.

    When you say it doesn't have some of the niggles?

    C# doesn't allow the use of a property as a method group. It doesn't have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you've checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That's actually the major niggle. C# is pretty good, and I'm quite happy with C#.

    And what about going even further with the type system to remove the need for tests, to something like Haskell? Or is that a step too far?

    I'm quite a pragmatist. I don't think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don't know anyone who can teach me, and the Internet won't teach me. That's the main reason I wouldn't use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn't instigate it.

    What about things in C#? For instance, there's contracts in C#, so you can try to statically verify a bit more about your code. Do you think that's useful, or just not worthwhile?

    I've not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn't seem to have ended up true for C# contracts. I don't think anyone who's tried them thinks they're any good. I might be wrong.

    On a slightly different note, how do you like to debug code?

    I think I'm quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I've been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I'm scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there's a bug and I can't work out why it's happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I'm very stupid about how I debug. I won't make any guesses, I won't use any intuition, I will only identify the experiment that's going to binary chop most effectively and repeat, rather than trying to guess anything. I suppose it's quite top-down.

    Is most of the time then spent in the debugger?

    Absolutely. If at all possible I will never debug using print statements or logs. I don't really hold much stock in outputting logs. If there's any bug which can be reproduced locally, I'd rather do it in the debugger than by outputting logs. And with SmartAssembly error reporting, there's not a lot that can't be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I'll use logs. But I hate using logs. You stare at the log, trying to guess what's going on, and that's exactly what I don't like doing. You have to just look at it and see whether it looks right or wrong.

    We've covered how you get to grips with bugs. How do you get to grips with an entire codebase?

    I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn't too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code's really hard to understand when that happens. So I like to choose a little bug and try to fix it, and then choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I've learnt the codebase I might be able to fix all the bugs in an hour, but I'd rather be using them as an aim while I'm learning the codebase.

    If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand?

    Keep distinct concepts in different places. And name your stuff so that it's obvious which concepts live there. You shouldn't have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I'm talking from very much a SmartAssembly point of view, because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to be more or less understandable in the one place where it is.

    And what about tests? Do you think they help at all?

    I've never had the opportunity to learn a codebase which has had tests. I don't know what it's like!

    What about when you're actually developing? How useful do you find tests in finding bugs or regressions?

    Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn't happen very often that a test finds a bug in the first place. I don't really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like "This code works at the moment, great, ship it! Ah, there's a way that this code doesn't work. Okay, write a test, demonstrate that it doesn't work, fix it, use the test to demonstrate that it's now fixed, and keep the test for future regressions." The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you've fixed them, are way more likely to appear again than new bugs are.

    Does that mean that when you write your code the first time, there are no tests?

    Often. The chance of there being a bug in a new feature is relatively unaffected by whether I've written a test for that new feature, because I'm not good enough at writing tests to think of bugs that I would have written into the code.

    So not writing regression tests for all of your code hasn't affected you too badly?

    There are different kinds of features. Some of them just always work, and are just not flaky; they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it's a waste of time, I'll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it's going to be flaky code as you're writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I'll think "this is going to be flaky" as I'm writing, and then maybe do a test, but not most of the time.

    How do you think your programming style has changed over time?

    I've got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I've got more disciplined about encapsulation, I think. There are operational things, like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor; I just didn't know the name of it at the time. I don't really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That's definitely a pattern that I've seen recently. I think maybe I'm doing functional programming. Possibly. It's plausible.

    If you had to summarise the essence of programming in a pithy sentence, how would you do it?

    Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from.

    So you think it's an art rather than a science?

    It's a little bit of engineering, a smidgeon of maths, but it's not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it's mostly art. You can get away with doing architecture and programming entirely by having a good engineering mind, but you're not going to produce anything nice. You're not going to have joy doing it if you have an engineering mind. Architects who are just engineering minds are not going to enjoy their job.

    I suppose engineering is the foundation on which you build the art.

    Exactly.

    How do you think programming is going to change over the next ten years?

    There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript's unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it's dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them.

    What about the idea of compiling other languages, possibly statically-typed, to JavaScript?

    It's a reasonable idea. I would like to do it, but I don't think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always going to be the quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don't write big code. And having your errors come up in the same place, whether they're statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I'd teach them JavaScript.

    If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in?

    The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you're experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it's not really about size of codebase. If I go and write a tiny program, I'm still going to get value out of writing it in C# using ReSharper, because I'm experienced enough with C# and ReSharper to be able to write code five times faster if I have that help.

    Any other visions of the future?

    Nobody's going to use actors. Because everyone's going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors.

    More Red Gater Coder interviews


  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green; it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days now. In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in.

    To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious…

    So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. - I know that there are many people out there who will strongly disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

    Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

    The three parts of a software component

    Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below.

    First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not. - At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments.

    The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice.

    The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline.

    So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article…

    An example

    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

    The requirement

    As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

        namespace Calculator
        {
            /// <summary>
            /// Defines a very simple calculator component for demo purposes.
            /// </summary>
            public interface ICalculator
            {
                /// <summary>
                /// Gets the result of the last successful operation.
                /// </summary>
                /// <value>The last result.</value>
                /// <remarks>
                /// Will be <see langword="null" /> before the first successful operation.
                /// </remarks>
                double? LastResult { get; }

            } // interface ICalculator

        } // namespace Calculator

    So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

    - Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.
    - In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.
    - I normally use an IoC container or some sort of self-written provider-like model in my architecture. In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

    The 'Red' (pt. 1)

    First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

        namespace Calculator.Test
        {
            [TestFixture]
            public class CalculatorTest
            {
                private readonly ServiceContainer container = new ServiceContainer();

                [Test]
                public void CalculatorLastResultIsInitiallyNull()
                {
                    ICalculator calculator = container.GetService<ICalculator>();

                    Assert.IsNull(calculator.LastResult);
                }

            } // class CalculatorTest

        } // namespace Calculator.Test

    This is basically the executable formulation of (part of) what the interface definition states.

    Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: your code is incorrect, but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

    Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output:

    So this tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type; next, implement the interface members; and finally, move the new class to its own file.

    So far my 'work' was six mouse clicks long. The only thing that's left to do manually here is to add the IoC-specific wiring declaration, and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

        using System;
        using LinFu.IoC.Configuration;

        namespace Calculator
        {
            [Implements(typeof(ICalculator))]
            internal class Calculator : ICalculator
            {
                public double? LastResult
                {
                    get
                    {
                        throw new NotImplementedException();
                    }
                }
            }
        }

    Back to the test fixture, we have to put our IoC container to work:

        [TestFixture]
        public class CalculatorTest
        {
            #region Fields

            private readonly ServiceContainer container = new ServiceContainer();

            #endregion // Fields

            #region Setup/TearDown

            [FixtureSetUp]
            public void FixtureSetUp()
            {
                container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
            }

            ...

    Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

    The 'Red' (pt. 2)

    Now, the execution of the above test gives the following result:

    This time, the test outcome tells me that the method under test is called. And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test still is worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part…

    The 'Green'

    Making the test green is quite trivial. Just make LastResult an automatic property:

        [Implements(typeof(ICalculator))]
        internal class Calculator : ICalculator
        {
            public double? LastResult { get; private set; }
        }

    One more round…

    Now on to something slightly more demanding (cough…). Let's state that our Calculator exposes an Add() method:

            ...

            /// <summary>
            /// Adds the specified operands.
            /// </summary>
            /// <param name="operand1">The operand1.</param>
            /// <param name="operand2">The operand2.</param>
            /// <returns>The result of the addition.</returns>
            /// <exception cref="ArgumentException">
            /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
            /// -- or --<br/>
            /// Argument <paramref name="operand2"/> is &lt; 0.
            /// </exception>
            double Add(double operand1, double operand2);

        } // interface ICalculator

    A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

    Back to our Calculator again: two more R# clicks implement the Add() skeleton:

            ...

            public double Add(double operand1, double operand2)
            {
                throw new NotImplementedException();
            }

        } // class Calculator

    As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

        [Test]
        [Row(-0.5, 2)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

    As you can see, I'm using a data-driven unit test method here, mainly for these two reasons:

    - Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I will only have to add another Row attribute to the existing one.
    - From the test report below, you can see that the argument values are explicitly printed out. This can be a valuable documentation feature even when everything is green: one can quickly review what values were tested exactly - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

    Back to our Calculator development again, this is what the test result tells us at the moment: we're red again, because there is not yet an implementation…

    Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

        // in CalculatorTest:

        [Test]
        [Row(-0.5, 2)]
        [Row(295, -123)]
        public void AddThrowsOnNegativeOperands(double operand1, double operand2)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
        }

        // in Calculator:

        public double Add(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }

            throw new NotImplementedException();
        }

    So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

        [Test]
        [Row(1, 1, 2)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    Again, I'm regularly using row-based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing:

    - In the course of refining a method, it's very likely that you come up with additional test cases. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test.
    - Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)

    So your test method might look something like that in the end:

        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 2)]
        [Row(0, 999999999, 999999999)]
        [Row(0, 0, 0)]
        [Row(0, double.MaxValue, double.MaxValue)]
        [Row(4, double.MaxValue - 2.5, double.MaxValue)]
        public void TestAdd(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            double result = calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, result);
        }

    And this will produce the following HTML report (with Gallio). Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

    The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here, it's trivial enough and brings nothing new…
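    (If you want to see it anyway, a sketch - it simply combines the Defined-Input/Expected-Output idiom above with an assertion on the property instead of the return value:)

        // sketch: asserts that Add() stores its result in LastResult
        [Test, Description("Arguments: operand1, operand2, expectedResult")]
        [Row(1, 1, 2)]
        [Row(0, 0, 0)]
        public void TestAddGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
        {
            ICalculator calculator = container.GetService<ICalculator>();

            calculator.Add(operand1, operand2);

            Assert.AreEqual(expectedResult, calculator.LastResult);
        }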
Once more, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add?

Derick immediately answers his own question:

So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it.

I couldn't have stated it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large, and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to achieve good overall system quality (e.g. in terms of the Code Duplication Percentage metric), you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires reading and understanding 'foreign' code.
So code quality/readability really makes a HUGE difference for me - sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

Test-driven development is the normal, natural way of writing software; code-first is the exception. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that has stood the test of time and causes low maintenance costs - that was produced code-first…).

It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…) The test code serves 'only' to make the production code work. But it's the number of delivered features that counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve; historically speaking, the entire software development profession is very young - still at its very beginning - and there are no viable standards yet. If you think of software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like a lot of unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of it is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines, or the lack of a needed tool - for 'saving' a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times…). Producing high-quality products requires high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess that's about 30% more time compared to developing the 'traditional' (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of developing software… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the production process described here without slowing things down. The method described above has proved "good enough" in my practical experience. But of course, I'm open to suggestions here… My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
I don't know about you, but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<TKey,TValue> works well for this, although I always seem to be hunting for a more specific generic type that provides a string-key-based dictionary. There's StringDictionary, but it only works with strings. There's HashSet<T>, but it uses the actual values as keys. In most key/value situations, a string key is what I want to work with. Dictionary<TKey,TValue> works well enough, but there are some issues with the serialization of dictionaries in .NET: the framework doesn't handle IDictionary objects well out of the box. The XmlSerializer doesn't support serialization of IDictionary via its default serialization, and while the DataContractSerializer does support IDictionary serialization, it produces some pretty atrocious XML.

What doesn't work?

First off, Dictionary serialization with the XmlSerializer doesn't work, so the following fails:

[TestMethod]
public void DictionaryXmlSerializerTest()
{
    var bag = new Dictionary<string, object>();
    bag.Add("key", "Value");
    bag.Add("Key2", 100.10M);
    bag.Add("Key3", Guid.NewGuid());
    bag.Add("Key4", DateTime.Now);
    bag.Add("Key5", true);
    bag.Add("Key7", new byte[3] { 42, 45, 66 });

    TestContext.WriteLine(this.ToXml(bag));
}

public string ToXml(object obj)
{
    if (obj == null)
        return null;

    StringWriter sw = new StringWriter();
    XmlSerializer ser = new XmlSerializer(obj.GetType());
    ser.Serialize(sw, obj);
    return sw.ToString();
}

The error you get with this is:

System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary.

Got it! By the way, the same is true with binary serialization.
Running the same code above against the DataContractSerializer does work:

[TestMethod]
public void DictionaryDataContextSerializerTest()
{
    var bag = new Dictionary<string, object>();
    bag.Add("key", "Value");
    bag.Add("Key2", 100.10M);
    bag.Add("Key3", Guid.NewGuid());
    bag.Add("Key4", DateTime.Now);
    bag.Add("Key5", true);
    bag.Add("Key7", new byte[3] { 42, 45, 66 });

    TestContext.WriteLine(this.ToXmlDcs(bag));
}

public string ToXmlDcs(object value, bool throwExceptions = false)
{
    var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null);
    MemoryStream ms = new MemoryStream();
    ser.WriteObject(ms, value);
    return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length);
}

This DOES work, but produces some pretty heinous XML (formatted with line breaks and indentation here):

<ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <KeyValueOfstringanyType>
    <Key>key</Key>
    <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key2</Key>
    <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key3</Key>
    <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key4</Key>
    <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key5</Key>
    <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value>
  </KeyValueOfstringanyType>
  <KeyValueOfstringanyType>
    <Key>Key7</Key>
    <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value>
  </KeyValueOfstringanyType>
</ArrayOfKeyValueOfstringanyType>

Ouch! That seriously hurts the eye! :-) Worse, though, it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human-readable/editable solution, or for something lightweight to store in a database, it's not quite ideal.

Why should I care?

As a little background: in one of my applications I need a flexible property bag that is used on a free-form database field on an otherwise static entity. Basically, what I have is a standard database record to which arbitrary properties can be added in an XML-based string field, and I intend to expose those arbitrary properties as a collection built from the field data stored in XML. The concept is pretty simple: when loading, write the data to the collection; when the data is saved, serialize the collection into an XML string and store it in the database; when reading the data, pick up the XML, and if the collection on the entity is accessed, automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post.) While the DataContractSerializer would work, its verbosity is problematic both for the size of the generated XML strings and because users can manually edit this XML-based property data in an advanced mode. A clean(er) layout is certainly preferable and more user-friendly.

Custom XML Serialization with a PropertyBag Class

So… after a bunch of experimentation with different serialization formats, I decided to create a custom PropertyBag class that provides a serializable Dictionary.
It's basically a custom Dictionary<TKey,TValue> implementation with the keys always set as string keys. The results are PropertyBag<TValue> and PropertyBag (which defaults to object for the value type). The PropertyBag<TValue> and PropertyBag classes provide these features:

- Subclassed from Dictionary<T,V>
- Implements IXmlSerializable with a cleanish XML format
- ToXml() and FromXml() methods to export and import to and from XML strings
- Static CreateFromXml() method to create an instance

It's simple enough, since it's merely a Dictionary<string,object> subclass, but one that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use:

[TestMethod]
public void PropertyBagTwoWayObjectSerializationTest()
{
    var bag = new PropertyBag();
    bag.Add("key", "Value");
    bag.Add("Key2", 100.10M);
    bag.Add("Key3", Guid.NewGuid());
    bag.Add("Key4", DateTime.Now);
    bag.Add("Key5", true);
    bag.Add("Key7", new byte[3] { 42, 45, 66 });
    bag.Add("Key8", null);
    bag.Add("Key9", new ComplexObject()
    {
        Name = "Rick",
        Entered = DateTime.Now,
        Count = 10
    });

    string xml = bag.ToXml();
    TestContext.WriteLine(bag.ToXml());

    bag.Clear();
    bag.FromXml(xml);

    Assert.IsTrue(bag["key"] as string == "Value");
    Assert.IsInstanceOfType(bag["Key3"], typeof(Guid));
    Assert.IsNull(bag["Key8"]);
    //Assert.IsNull(bag["Key10"]);
    Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject));
}

This uses the PropertyBag class, which is based on a Dictionary<string, object> - which means it returns untyped values of type object. I suspect that for me this will be the most common scenario, as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this:

[TestMethod]
public void PropertyBagTwoWayValueTypeSerializationTest()
{
    var bag = new PropertyBag<decimal>();
    bag.Add("key", 10M);
    bag.Add("Key1", 100.10M);
    bag.Add("Key2", 200.10M);
    bag.Add("Key3", 300.10M);

    string xml = bag.ToXml();
    TestContext.WriteLine(bag.ToXml());

    bag.Clear();
    bag.FromXml(xml);

    Assert.IsTrue(bag.Get("Key1") == 100.10M);
    Assert.IsTrue(bag.Get("Key3") == 300.10M);
}

and produces typed results of type decimal. The values can be either value or reference types, the combination of which actually proved a little trickier than anticipated due to the null and string-specific value checks required - getting the generic typing right required using default(T) and Convert.ChangeType() to trick the compiler into playing nice. Of course, the whole raison d'être for this class is the XML serialization. You can see in the code above that we're using .ToXml() and .FromXml() to serialize to and from a string.
The XML produced for the first example looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
  <item>
    <key>key</key>
    <value>Value</value>
  </item>
  <item>
    <key>Key2</key>
    <value type="decimal">100.10</value>
  </item>
  <item>
    <key>Key3</key>
    <value type="___System.Guid">
      <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid>
    </value>
  </item>
  <item>
    <key>Key4</key>
    <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value>
  </item>
  <item>
    <key>Key5</key>
    <value type="boolean">true</value>
  </item>
  <item>
    <key>Key7</key>
    <value type="base64Binary">Ki1C</value>
  </item>
  <item>
    <key>Key8</key>
    <value type="nil" />
  </item>
  <item>
    <key>Key9</key>
    <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject">
      <ComplexObject>
        <Name>Rick</Name>
        <Entered>2011-09-26T17:45:58.5789578-10:00</Entered>
        <Count>10</Count>
      </ComplexObject>
    </value>
  </item>
</properties>

The format is a bit cleaner than the DataContractSerializer's. Each item is serialized into <key>/<value> pairs. If the value is a string, no type information is written; since string tends to be the most common type, this saves space and serialization processing. All other types are attributed. Simple types are mapped to XML types, so things like decimal, datetime, boolean and base64Binary are encoded using their XML type names. All other types are embedded with a hokey format that describes the .NET type, prefixed by three underscores, and are then encoded using the XmlSerializer - you can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS, and it works, at least as long as you're serializing back and forth between .NET clients. The XML generated from the second example, which uses PropertyBag<decimal>, looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
  <item>
    <key>key</key>
    <value type="decimal">10</value>
  </item>
  <item>
    <key>Key1</key>
    <value type="decimal">100.10</value>
  </item>
  <item>
    <key>Key2</key>
    <value type="decimal">200.10</value>
  </item>
  <item>
    <key>Key3</key>
    <value type="decimal">300.10</value>
  </item>
</properties>

How does it work?

As I mentioned, there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom XML serialization, plus a couple of helper methods that make getting the XML in and out of the class easier. But it's proven very handy for a number of projects where dynamic data storage is required. Here's the code:

/// <summary>
/// Creates a serializable string/object dictionary that is XML serializable.
/// Encodes keys as element names and values as simple values with a type
/// attribute that contains an XML type name. Complex names encode the type
/// name with type='___namespace.classname' format followed by a standard xml
/// serialized format. The latter serialization can be slow so it's not recommended
/// to pass complex types if performance is critical.
/// </summary>
[XmlRoot("properties")]
public class PropertyBag : PropertyBag<object>
{
    /// <summary>
    /// Creates an instance of a PropertyBag from an XML string
    /// </summary>
    /// <param name="xml">Serialized XML</param>
    /// <returns></returns>
    public static PropertyBag CreateFromXml(string xml)
    {
        var bag = new PropertyBag();
        bag.FromXml(xml);
        return bag;
    }
}

/// <summary>
/// Creates a serializable string dictionary for generic types that is XML serializable.
///
/// Encodes keys as element names and values as simple values with a type
/// attribute that contains an XML type name. Complex names encode the type
/// name with type='___namespace.classname' format followed by a standard xml
/// serialized format. The latter serialization can be slow so it's not recommended
/// to pass complex types if performance is critical.
/// </summary>
/// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam>
[XmlRoot("properties")]
public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable
{
    /// <summary>
    /// Not implemented - this means no schema information is passed,
    /// so this won't work with ASMX/WCF services.
    /// </summary>
    /// <returns></returns>
    public System.Xml.Schema.XmlSchema GetSchema()
    {
        return null;
    }

    /// <summary>
    /// Serializes the dictionary to XML. Keys are
    /// serialized to element names and values as
    /// element values. An xml type attribute is embedded
    /// for each serialized element - a .NET type
    /// element is embedded for each complex type and
    /// prefixed with three underscores.
    /// </summary>
    /// <param name="writer"></param>
    public void WriteXml(System.Xml.XmlWriter writer)
    {
        foreach (string key in this.Keys)
        {
            TValue value = this[key];

            Type type = null;
            if (value != null)
                type = value.GetType();

            writer.WriteStartElement("item");

            writer.WriteStartElement("key");
            writer.WriteString(key as string);
            writer.WriteEndElement();

            writer.WriteStartElement("value");
            string xmlType = XmlUtils.MapTypeToXmlType(type);
            bool isCustom = false;

            // Type information attribute if not string
            if (value == null)
            {
                writer.WriteAttributeString("type", "nil");
            }
            else if (!string.IsNullOrEmpty(xmlType))
            {
                if (xmlType != "string")
                {
                    writer.WriteStartAttribute("type");
                    writer.WriteString(xmlType);
                    writer.WriteEndAttribute();
                }
            }
            else
            {
                isCustom = true;
                xmlType = "___" + value.GetType().FullName;
                writer.WriteStartAttribute("type");
                writer.WriteString(xmlType);
                writer.WriteEndAttribute();
            }

            // Actual serialization
            if (!isCustom)
            {
                if (value != null)
                    writer.WriteValue(value);
            }
            else
            {
                XmlSerializer ser = new XmlSerializer(value.GetType());
                ser.Serialize(writer, value);
            }

            writer.WriteEndElement(); // value
            writer.WriteEndElement(); // item
        }
    }

    /// <summary>
    /// Reads the custom serialized format
    /// </summary>
    /// <param name="reader"></param>
    public void ReadXml(System.Xml.XmlReader reader)
    {
        this.Clear();
        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Element && reader.Name == "key")
            {
                string xmlType = null;
                string name = reader.ReadElementContentAsString();

                // item element
                reader.ReadToNextSibling("value");

                if (reader.MoveToNextAttribute())
                    xmlType = reader.Value;

                reader.MoveToContent();

                TValue value;
                if (xmlType == "nil")
                    value = default(TValue); // null
                else if (string.IsNullOrEmpty(xmlType))
                {
                    // value is a string or object and we can assign TValue to value
                    string strval = reader.ReadElementContentAsString();
                    value = (TValue)Convert.ChangeType(strval, typeof(TValue));
                }
                else if (xmlType.StartsWith("___"))
                {
                    while (reader.Read() && reader.NodeType != XmlNodeType.Element)
                    { }

                    Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3));
                    //value = reader.ReadElementContentAs(type, null);
                    XmlSerializer ser = new XmlSerializer(type);
                    value = (TValue)ser.Deserialize(reader);
                }
                else
                    value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null);

                this.Add(name, value);
            }
        }
    }

    /// <summary>
    /// Serializes this dictionary to an XML string
    /// </summary>
    /// <returns>XML string, or null if serialization fails</returns>
    public string ToXml()
    {
        string xml = null;
        SerializationUtils.SerializeObject(this, out xml);
        return xml;
    }

    /// <summary>
    /// Deserializes from an XML string
    /// </summary>
    /// <param name="xml"></param>
    /// <returns>true or false</returns>
    public bool FromXml(string xml)
    {
        this.Clear();

        // if the xml string is empty we return an empty dictionary
        if (string.IsNullOrEmpty(xml))
            return true;

        var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>;
        if (result != null)
        {
            foreach (var item in result)
            {
                this.Add(item.Key, item.Value);
            }
        }
        else
            // null is a failure
            return false;

        return true;
    }

    /// <summary>
    /// Creates an instance of a PropertyBag from an XML string
    /// </summary>
    /// <param name="xml"></param>
    /// <returns></returns>
    public static PropertyBag<TValue> CreateFromXml(string xml)
    {
        var bag = new PropertyBag<TValue>();
        bag.FromXml(xml);
        return bag;
    }
}

The code uses a couple of small helper classes, SerializationUtils and XmlUtils, for mapping XML types to and from .NET; both are from the Westwind.Utilities project (the same project where PropertyBag lives) in the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old-school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods, .ToXml() and .FromXml(), that allow your code to easily convert between XML and a PropertyBag object. In my code, that's what I use to persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to have a string-key dictionary and then be able to turn around and, with one line of code, persist the whole thing to XML and back. Hopefully some of you will find this class as useful as I have. It's a simple solution to a common requirement in my applications, and I've used the hell out of it in the short time since I created it.

Resources

You can find the complete code for the two classes, plus the helpers, in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you.

PropertyBag Source Code
SerializationUtils and XmlUtils
Westwind.Utilities Assembly on NuGet (add from Visual Studio)

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET, CSharp

    Read the article

  • Tor : Stuck at Connecting to Relay Directory

    - by Ghassan
I have never worked with Tor before. The company where I work used to allow us access to any site we wished; nonetheless, as of the beginning of this month they installed a proxy server to filter which sites can be accessed and which ones can't. The filter isn't only on URLs but on IPs as well - even hex-encoded IPs won't work. So after some research I decided to use Tor. The first day I installed it, everything went smoothly and I was accessing any website I wished. Just today, everything stopped: when I try to start Vidalia, it gets stuck at "Connecting to Relay Directory". I work on the Windows 7 platform. Please help me out! Thanks in advance.

    Read the article

  • Zantaz's EAS exchange archive product doesn't integrate with Outlook in Windows 7

    - by Chris Farmer
    I work at a place that uses Exchange with Outlook for mail, and they also use a product from Zantaz called "EAS" which does server-side archiving of old messages. One of the artifacts of this archival is that email attachments are missing from the archived messages when viewed in Outlook. EAS has a client tool that plugs into Outlook that enables easy retrieval of those archived messages and their attachments, but it doesn't seem to work when installed on Windows 7. I have no direct evidence of this, other than that it simply doesn't work on my Windows 7 machine, and some of our network support staff seem to corroborate this. The symptom is that Outlook seems to know nothing of the existence of this EAS app. The EAS app also has a system tray icon which is there and offers some minimal functionality, but the real goodness is the Outlook integration, which I sorely miss. So, my question is this: does anyone here know whether it's possible to coerce this EAS product into working correctly in Windows 7?

    Read the article

  • SSL Certificate error: verify error:num=20:unable to get local issuer certificate

    - by Brian
I've been trying to get an SSL connection to an LDAPS server (Active Directory) to work, but keep having problems. I tried using this:

openssl s_client -connect the.server.edu:3269

with the following result:

verify error:num=20:unable to get local issuer certificate

I thought, OK, well, the server's an old production server a few years old; maybe the CA certificate isn't present. I then pulled the certificate from the output into a PEM file and tried:

openssl s_client -CAfile mycert.pem -connect the.server.edu:3269

And that didn't work either. What am I missing? Shouldn't that ALWAYS work?
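For what it's worth, verify error 20 usually means the verification chain is incomplete: the file passed to -CAfile has to contain the issuing CA certificates (intermediate and root), not the server's own certificate. A hedged sketch of the usual diagnosis - the bundle file names are placeholders:

# Dump every certificate the server presents, not just the leaf
openssl s_client -connect the.server.edu:3269 -showcerts < /dev/null

# Build a CA bundle from the issuer chain (exported from the CA / AD),
# then verify against that bundle rather than against the server cert itself
cat intermediate-ca.pem root-ca.pem > chain.pem
openssl s_client -CAfile chain.pem -connect the.server.edu:3269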

    Read the article

  • IIS 7.5 Custom HTTP Response Headers Not Working

    - by Craig
Trying to set up custom HTTP response headers on a new install of IIS 7.5 on Windows Server 2008 R2 Standard, and they are not working. Default headers work fine (X-Powered-By, etc.). Modifying default header values works (e.g. changing the value of X-Powered-By to ASP.NET2). Renaming a default header causes it to stop being output (e.g. changing X-Powered-By to X-Powered-By2). The site in question is a test site with a single HTML page. Custom headers also don't work on an ASP.NET 2.0 site. I've tried setting the headers at the global level and at the site level, to no effect.
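For reference, a minimal web.config sketch of the section IIS Manager writes for custom headers (the header name and value are placeholders, not from the question); if headers added this way never show up, it may be worth checking whether the httpProtocol section is locked or overridden at the server level in applicationHost.config:

<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <remove name="X-Powered-By" />
        <add name="X-Example" value="example-value" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>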

    Read the article

  • Tunnel into Sonicwall VPN while on Sonicwall wifi?

    - by Patrick Harrington
Hey all, I am able to hit my company's VPN from home, using a dedicated IP, with no issue. When I am at work on the wifi of the same device that hosts the VPN (a SonicWall router/VPN/wifi access point), I can reach the outside internet fine but am unable to connect to the VPN. I know the wifi puts me on a different subnet, and when I try to connect to the normal VPN IP it won't work; a traceroute just times out. Any suggestions? Might there be an internal IP I need to hit while here at work?

    Read the article

  • MinGW tar compression problem

    - by Shiftbit
I am unable to get the MinGW tar to work with compressed files. It does not filter through the proper compression utility. However, tar will work if I manually uncompress the file first. I have tried it in both the MSYS shell and the Windows cmd. Has anyone had this problem, or is it a MinGW bug? For example, this does not work:

C:\Users\home\Desktop>tar -tzf wdiff-0.5.tar.gz
tar: Cannot use compressed or remote archives
tar: Error is not recoverable: exiting now

C:\Users\home\Desktop>tar -t -Zgzip -f wdiff-0.5.tar.gz
tar: Cannot use compressed or remote archives
tar: Error is not recoverable: exiting now

C:\Users\home\Desktop>tar -tf wdiff-0.5.tar.gz
tar: Hmm, this doesn't look like a tar archive
tar: Skipping to next file header
tar: Only read 6732 bytes from archive wdiff-0.5.tar.gz
tar: Error is not recoverable: exiting now

However, this works:

gzip -d wdiff-0.5.tar.gz
tar -tf wdiff-0.5.tar
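Assuming this tar really was built without compression support (which is what "Cannot use compressed or remote archives" suggests), piping through gzip sidesteps the -z flag entirely and leaves the original archive intact:

# List the archive without asking tar to decompress it itself
gzip -dc wdiff-0.5.tar.gz | tar -tf -

# Extract the same way
gzip -dc wdiff-0.5.tar.gz | tar -xf -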

    Read the article

  • Text Expander broken upon Snow Leopard Upgrade

    - by bLee
I've been using TextExpander for years. However, I found out today that it no longer works on my newly upgraded (to Snow Leopard) MacBook Pro. I know there are a lot of applications that are not yet compatible with Snow Leopard, and that developers around the world are working to make them run again on our beautiful Macs. My questions are: 1. Is TextExpander known to not work on Snow Leopard? 2. What is a good substitute for TextExpander - or are there any updates from the TextExpander developers?

    Read the article

  • Bugzilla with SSL

    - by Antitribu
I have an install of Bugzilla that I'm trying to implement SSL on. When I go into the parameters screen and edit SSLBASE, adding in the full URL https://foo.com/bugzilla/, editparams.cgi times out on loading and I have the following error in the Apache log:

[Tue Mar 30 19:29:39 2010] [error] [client xxx.xx.xx.xx] (70007)The timeout specified has expired: ap_content_length_filter: apr_bucket_read() failed, referer: http://foo.com/bugzilla/editparams.cgi

On install I also received this error:

WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file.

How can I force this to work? Editing other parameters (e.g. urlbase) works fine. The SSL site is set up, and direct requests to it, e.g. https://foo.com/bugzilla, work correctly. Any ideas? Ty
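The max_allowed_packet warning, at least, has a mechanical fix; a sketch of the change (the my.cnf path varies by platform, and MySQL needs a restart afterwards - 4M comfortably exceeds the 3276750 bytes the installer asks for):

[mysqld]
max_allowed_packet = 4M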

    Read the article

  • SysAdmin Career Question: Internal or Client Based

    - by Malnizzle
ServerFault community, it seems there are two positions sysadmins find themselves in: either you work for a single, non-IT-services client (your employer) and provide in-house IT support, or you work for a company that provides outsourced IT services to multiple clients. Right now I work for a company that does the latter, and I often consider how nice it would be to do the in-house side of things: to have just one network to focus on, and, instead of feeling like I have a dozen bosses between clients and internal management, to have just one set of management and people to appease. There is also the technical aspect: every client wants something different, and you end up managing numerous different technology platforms, or trying to force clients into using the technologies you prefer - neither situation is enjoyable. Is this just "the grass is greener on the other side" syndrome, or is there some legitimacy to the stress of client-based IT work compared to being an in-house IT guy? Thanks!

    Read the article

  • Need to open links in new tab in ie8 !

    - by BHOdevelopper
I have a sidebar (a right-sided iframe), and when I click on a link in it, it opens a new window in IE8 (in Firefox it opens a new tab). What do I need to do to make links in IE8 open in a new tab? I have already set Tools -> Internet Options -> Settings -> 'Always open pop-ups in a new tab' and 'A new tab in the current window', but it still doesn't work. My links are pretty simple - what am I missing? Example: text. Also, some sites say to register Regsvr32 actxprxy.dll to fix this problem; that still doesn't work. And I want this to work with a simple click - no 'right-click, open in new tab' option. I also hope I won't get the "can't change how IE8 works" answer. ;)
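For reference, a sketch of the kind of markup involved (the href is a placeholder; the original anchor was stripped by the page) - a plain link targeting a new browsing context, where whether IE8 opens it as a tab or a window is governed by the pop-up settings mentioned above:

<a href="http://example.com/page" target="_blank">text</a>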

    Read the article

  • openvz and iptables

    - by rizen
http://wiki.openvz.org/Setting_up_an_iptables_firewall mentions loading xt_state before starting a container in order to run iptables inside containers. So I ran modprobe xt_state on the host and restarted the container, and it worked great. To make this persist, I added xt_state to /etc/modules. The problem is that when I restart the physical node, the containers' iptables won't work unless I manually restart each container, at which point it'll work again. lsmod shows that xt_state is loaded. Does anyone know why my containers' iptables won't work until I manually restart the container?
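One hedged guess is boot ordering: the vz service may be starting the containers before the module list in /etc/modules is processed, so the containers come up without conntrack support even though lsmod shows the module afterwards. OpenVZ hosts also have a module list in /etc/vz/vz.conf that the vz initscript loads before any container starts; a sketch (the exact module names vary by kernel version - older kernels use ipt_state where newer ones use xt_state):

# /etc/vz/vz.conf -- modules made available to containers at vz start
IPTABLES="iptable_filter ipt_state xt_state ipt_REJECT"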

    Read the article

  • How do I run a non-service program as a service on Windows 2008 Server?

    - by Lasse V. Karlsen
I found this page that tells me how to set up Windows Live Sync as a background service on Windows 2003 Server; unfortunately, the Resource Kit tools for 2003 that it mentions do not work on 2008 Server. http://mswhs.freeforums.org/windows-live-sync-as-a-service-on-whs-t623.html Also, there apparently is no Resource Kit tools download for Windows 2008 Server that I can find. Perhaps someone has a link to the relevant tools? (INSTSRV.EXE and SRVANY.EXE.) Using just plain SC.EXE doesn't work, as I assume the program is then required to be a real service, not just any executable. What other options do I have? Can I use the Task Scheduler on 2008 Server to start the WindowsLiveSync executable - will that work? I need the executable to stay running even after I've logged off from the server.
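Should you get hold of SRVANY.EXE from somewhere, the classic recipe is plain SC.EXE plus one registry value: SC registers srvany as the service, and srvany in turn launches the real program. The service name and paths below are placeholders, and note the mandatory space after each sc option's '=':

sc create WindowsLiveSync binPath= "C:\Tools\srvany.exe" start= auto
reg add "HKLM\SYSTEM\CurrentControlSet\Services\WindowsLiveSync\Parameters" /v Application /t REG_SZ /d "C:\Path\To\WindowsLiveSync.exe"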

    Read the article

  • Opensuse 11.1 KDE4 Dropbox autostart issue

    - by OpensuseNewbie
I am using KDE4 with the latest version of openSUSE, and I want Dropbox to autostart every time I turn on my computer. I followed this website: http://antrix.net/journal/techtalk/dropbox%5Fkde.html I got Dropbox to work and everything is fine. I created a symlink to the dropboxd file in my ~/.kde4/Autostart, but it does not work. The command line I used to create the symlink is: ln -s ~/.dropbox-dist/dropdoxd ~/.kde4/Autostart/ I've checked the symlink itself to see where it "pointed to" in its properties, and it was the right file. The .dropbox-dist folder is in my /home/"username", and dropboxd itself does work. I've tried using "sudo ln -s ~/.dropbox-dist/dropdoxd ~/.kde4/Autostart/" but no symlink was created. I've checked other blogs, and they all say the same things to make Dropbox autostart, yet it is not working. Does anyone know what's wrong?
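As an alternative to the symlink, KDE4's Autostart directory also accepts .desktop entries; a minimal sketch (the home path is a placeholder - and note that the target's spelling must match the actual binary name, dropboxd):

# ~/.kde4/Autostart/dropbox.desktop
[Desktop Entry]
Type=Application
Name=Dropbox
Exec=/home/username/.dropbox-dist/dropboxd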

    Read the article

  • Receiving SSL certificate errors only from some clients

    - by Nico M
I am receiving SSL certificate errors in Chrome (latest version, 23.0.1271.52 beta-m) and Internet Explorer 6 (not used) on my home desktop machine (Windows XP SP2). In Firefox, it works fine on this PC. My laptop and work desktop (both Windows 7) work fine. Most SSL website-checking sites report that the certificate and the chain up to the root CA are set up correctly, but I have come across two that say I have an invalid certificate, without giving much information on which part is failing. I know it used to work properly on this desktop (in Chrome and IE) in the past, but I'm not sure what has changed that is causing the site to fail in these browsers. Can anyone provide any assistance? This is driving me nuts!
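A hedged first diagnostic is to look at exactly which certificates the server sends. If an intermediate is missing from the served chain, browsers that happen to have it cached (or that carry a fresher root store) will succeed while others fail - which would fit an unpatched XP SP2 box disagreeing with Windows 7 machines. The hostname below is a placeholder:

openssl s_client -connect www.example.com:443 -showcerts < /dev/null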

    Read the article

  • Networked Parallel Port in Linux / KVM / QEMU

    - by korkman
    What I want to use is the "-parallel" tcp or udp option from KVM / QEMU, but I don't seem to find any server for this client. I don't serve a printer but a hardware dongle. I checked ser2net, which does provide "/dev/lp0" sharing, but it doesn't seem to work for KVM / QEMU. I suspect KVM / QEMU requires "/dev/parport0". I did rmmod lp, modprobe ppdev, linked ser2net to parport0, but it didn't work out. Perhaps ser2net is not suited for this. I tried socat as well, and I tried netcat. No success. Does anyone know any KVM / QEMU compatible parallel port server? Or did any of netcat, socat or ser2net work for you?

    Read the article

  • how to properly enable drag-and-drop in Windows 7 (with iTunes in particular)

    - by user38795
On two Windows 7 computers - one at work and one at home - with the same version of iTunes (10.1.0.56), I can drag-and-drop files onto the iPod at work, but I cannot at home. At home, the only thing that works is first adding the files to the Library through a dialog box and only then drag-and-dropping them onto the iPod (or iPad). Initially I thought the problem was in iTunes, but it's the same version in both places, so most likely it's the Windows 7 configuration. It's 64-bit Windows 7, and I have admin privileges in both cases. The only difference I can think of is that it's an Enterprise edition at work and Ultimate at home. What should I change at home to enable drag-and-dropping files into the application?

    Read the article

  • How to Deploy a Directory or WAR in TOMCAT6 using ANT?

    - by Hitesh
I want to deploy a directory (the extracted contents of a .war file) to Tomcat 6 using Ant. I have a build.xml like this:

<property name="WAR_PATH" value="E:/18-06-2013/TEST"/>
<property name="mgr.context.path" value="/FOUR"/>
<property name="url" value="http://localhost:8080/manager"/>
<property name="username" value="tomcat"/>
<property name="password" value="password"/>

<target name="deploy" description="Install web application">
    <deploy url="${url}" username="${username}" password="${password}"
            path="${mgr.context.path}" war="file:${WAR_PATH}"/>
</target>

But when I run the Ant (build.xml) script, I get an error like:

java.io.IOException: too many bytes written
    at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:2632)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)

The same script works properly when I deploy a .war file, but it does not work for a directory. Deploying the directory via an HTTP command works properly, though.
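Since the .war path is known to work, one hedged workaround is to package the exploded directory back into a war during the build and deploy that artifact instead; <jar> is a standard Ant task, and the destfile name below is just an assumption:

<target name="deploy" description="Install web application">
    <!-- Package the exploded directory into a war first -->
    <jar destfile="${WAR_PATH}.war" basedir="${WAR_PATH}"/>
    <deploy url="${url}" username="${username}" password="${password}"
            path="${mgr.context.path}" war="file:${WAR_PATH}.war"/>
</target>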

    Read the article

< Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >