Search Results

  • Tired of Downloading Your Entire Schema? How About a Partial Download?

    - by user702295
    See document 1448266.1, "Demantra Non-Invasive Partial Schema Export Utility: Extracting Partial Schema Dumps for Analysis." In this document you will find instructions and two SQL scripts. Give it a try. If you have a problem, feel free to post to this blog or to the community located at: https://communities.oracle.com/portal/server.pt/community/demantra_solutions/231 Regards! Jeff

    Read the article

  • Dump debugging with Parallel Stacks

    One of the areas where we fixed many bugs for Beta2 in our parallel debugging windows is managed dump debugging. So it was really cool to see Tess use the Parallel Stacks window in that scenario in her video demo with Scott. Other than the neat ability to open managed dumps in VS2010, Parallel Stacks was the only debugging feature she needed to diagnose the issue! Check out the video; it's definitely worth 10 minutes of your time. Comments about this post are welcome at the original blog.

    Read the article

  • Why is multithreading often preferred for improving performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here:

    an async approach, basically based on signals – or just "async" as it's called in many papers and in languages like the new C# 5.0 – with a "companion thread" that manages the policy of your pipeline

    a concurrent, or multi-threading, approach

    I will just say that I'm thinking about the hardware and the worst-case scenario here, and I have tested these two paradigms myself. The async paradigm is enough of a winner that I don't get why people talk about multi-threading 90% of the time when they want to speed things up or make good use of their resources.

    I have tested multi-threaded and async programs on an old machine with an Intel quad-core that doesn't have a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-5, can be a problem; the application is unresponsive, slow, and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result without hanging, it stays responsive, and it scales much better.

    I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed. On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with the external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than 2 cores, is no bargain for me.

    From what I have experienced, the concurrent approach is good in theory but not that good in practice. With the memory model imposed by the hardware, it's hard to make good use of this paradigm, and it introduces a lot of issues, ranging from the design of my data structures to the joining of multiple threads. Also, neither paradigm offers any guarantee about when the task or job will be done at a certain point in time, which makes them really similar from a functional point of view.

    Given the x86 memory model, why do the majority of people suggest using concurrency with C++, and not just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
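
    To make the contrast concrete, here is a minimal JavaScript sketch (not from the original question; the delay() helper and the 100 ms figure are stand-ins for real I/O) of the single-threaded async style the poster prefers – many waits in flight at once, no thread context switches, and the caller never blocks:

        const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

        async function fetchOne(id) {
          await delay(100);                    // stand-in for a network/disk wait
          return "result " + id;
        }

        async function main() {
          console.time("sequential");
          for (let i = 0; i < 10; i++) {
            await fetchOne(i);                 // waits add up: ~1000 ms total
          }
          console.timeEnd("sequential");

          console.time("overlapped");
          // Ten waits in flight on one thread: ~100 ms total, no locks, no
          // context switches, and the program stays responsive throughout.
          await Promise.all([...Array(10).keys()].map(fetchOne));
          console.timeEnd("overlapped");
        }

        main();

    Whether this beats a thread pool for CPU-bound work is exactly the question being asked; the sketch only illustrates the I/O-bound case, where async clearly shines.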

    Read the article

  • Trying to find resources to learn how to test software [closed]

    - by Davek804
    First off, yes, this is a general question, and I'd be perfectly happy to move this to another part of SE, but I didn't see a more fitting sub-site. Basically, I am hoping a more experienced QA tester can come along and really fill in some basics for me. So far, websites seem to be sparse in terms of explaining the languages involved, basic practices, etc. So I'm sorry in advance if this is too general, but towards the end of this post I ask some specific questions, in case it's just absolutely unacceptable to speak in general terms.

    I just landed a position as Junior Systems and QA Engineer with a social media startup. Their QA and testing is almost nonexistent, so if I do a good job, I imagine I'll find a lot of bugs and have a secure role in the business. I'm pretty good with the systems aspect of my role, but I need to learn more about the QA and testing aspects. We run hardware that's touchscreen-based – the user can use and interact with the devices. So, in terms of my QA role, in the short term, I need to build scripts to test the hardware/software as a "user" to try to uncover bugs. First off, what language should these scripts be written in? Does anyone have some examples? What about the longer-term "automated testing"? I'm familiar with regression testing as the developer adds in new features, sure, but the 50,000 other types of testing, not so much.

    Most of our hardware runs dotnet/C# code, with some of the servers running Java – but I don't expect to need to run tests on the Java side at this point. I hope to meet with one developer today and try to get a good idea of the output from the hardware, so that I can "mock" this data that gets sent to the servers, to try to bug-test. Eventually, we will be moving the hardware closer to where I live and work, so that I can test virtually and on real hardware. A lot of the bugs we're dealing with now are like this: the local server, which kiosks report their data to, gets updated from the kiosks, but the remote server does not. Or vice versa: when the user registers on a kiosk, the remote server updates but the local server does not. But yeah, without much more detail, I imagine a lot of this info isn't helpful.

    I've bought the book "How Google Tests Software", but it's really more a book about how their software testing differs from Microsoft's. It doesn't teach how to test so much as why their methods are better. Does anyone have a good book that I can buy? An ebook maybe? My local Barnes and Noble kind of had a terrible selection. I also figure a book from 2005 is not necessarily that good either.
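
    As one concrete (and entirely hypothetical – the post doesn't name the real endpoints or payloads) sketch of the kind of scripted check the poster describes, here is a Node.js test (Node 18+ for the built-in fetch) that mocks the data a kiosk would send and then asserts that both the local and the remote server saw the registration:

        const assert = require("assert");

        const KIOSK  = "http://kiosk.example/api/register";       // hypothetical
        const LOCAL  = "http://local-server.example/api/users";   // hypothetical
        const REMOTE = "http://remote-server.example/api/users";  // hypothetical

        async function registerOnKiosk(user) {
          // Mock the payload a kiosk would send, instead of driving the touchscreen.
          const res = await fetch(KIOSK, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(user),
          });
          assert.strictEqual(res.status, 200, "kiosk rejected the registration");
        }

        async function userExists(baseUrl, id) {
          return (await fetch(baseUrl + "/" + id)).status === 200;
        }

        (async () => {
          const user = { id: "qa-" + Date.now(), name: "QA Smoke Test" };
          await registerOnKiosk(user);
          // Arbitrary grace period for replication between servers.
          await new Promise((r) => setTimeout(r, 5000));
          assert.ok(await userExists(LOCAL, user.id), "local server missed it");
          assert.ok(await userExists(REMOTE, user.id), "remote server missed it");
          console.log("local and remote servers stayed in sync");
        })();

    The same shape works in C#/NUnit on the dotnet side; the point is that once the kiosk's output format is known, sync bugs like the local/remote mismatch become a repeatable scripted assertion rather than a manual hunt.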

    Read the article

  • You should NOT be writing jQuery in SharePoint if…

    - by Mark Rackley
    Yes… another one of these posts. What can I say? I'm a pot stirrer… a rabble rouser *rabble rabble*. jQuery in SharePoint seems to be a fairly polarizing issue, with one side thinking it is the most awesome thing since Princess Leia as the slave girl in Return of the Jedi, and the other half thinking it is the worst idea since Mannequin 2: On the Move. The correct answer is OF COURSE "it depends". But what are those deciding factors that make jQuery an awesome fit or leave a bad taste in your mouth? Let's see if I can drive the discussion here with some polarizing comments of my own… I know some of you are getting ready to leave your comments even now, before reading the rest of the blog, which is great! Iron sharpens iron… these discussions hopefully open us up to understanding the entire process better and thinking about things in a different way.

    You should not be writing jQuery in SharePoint if you are not a developer…

    Let's start off with my most polarizing and rant-filled portion of the blog post. If you don't know what you are doing, or you don't have a background that helps you understand the implications of what you are writing, then you should not be writing jQuery in SharePoint! I truly believe that one of the biggest reasons for the jQuery haters is all the bad jQuery out there. If you don't know what you are doing, you can do some NASTY things! One of the best stories I've heard about this is from my good friend John Ferringer (@ferringer). John tells this story during the Mythbusters session we do together. One of his clients was undergoing a denial-of-service attack and they couldn't figure out what was going on! After much searching they found that some genius jQuery developer had written code for an image rotator but did not take into account what happens when there are no images to load! The code just kept hitting the servers over and over and over again, which prevented anything else from getting done! Now, I'm NOT saying that I have not done the same sort of thing in the past or am immune from such mistakes. My point is that if you don't know what you are doing, there are very REAL consequences that can have a major impact on your organization, AND they will be hard to track down. Think how happy your boss will be after you copy and paste some jQuery from a blog without understanding what it does, it brings down the farm, AND it takes them 3 days to track it back to you. :/ Good times will not be had.

    Like it or not, JavaScript/jQuery is a programming language. While you .NET people sit on your high horses because your code is compiled and "runs faster" (also debatable), the rest of us will be actually getting work done and delivering solutions while you are trying to figure out why your widget won't deploy. I can pick at that scab because I write .NET code too and speak from experience. I can do both, and do both well. So, I am not speaking from ignorance here. In JavaScript/jQuery you have variables, loops, conditionals, functions, arrays, events, and built-in methods. If you are not a developer, you just aren't going to take advantage of all of that and use it correctly. Ahhh… but there is hope! There are a lot of jQuery resources out there to help you learn, and learn well! There are many experts on the subject who will gladly tell you when you are smoking crack. I just this minute saw a tweet from @cquick with a link to "jQuery Fundamentals". I just glanced through it, and this may be a great primer for you aspiring jQuery devs.
    Take advantage of all the resources and become a developer! Hey, it will look awesome on your resume, right?

    You should not be writing jQuery in SharePoint if it depends too much on client resources for a good user experience

    I've said it once and I'll say it over and over until you understand. jQuery is executed on the client's computer. Got it? If you are looping through hundreds of rows of data, searching through an enormous DOM, or performing many calculations, it is going to take some time! AND if your user happens to be sitting on some old PC somewhere that they picked up at a garage sale, their experience will be that much worse! If you can't give the user a good experience, they will not use the site. So, if jQuery is causing the user to have a bad experience, don't use it. I sometimes go as far as to say that you should NOT go to jQuery as a first option for external-facing web sites, because you have ZERO control over what the end user's computer will be. You just can't guarantee an awesome user experience all of the time. Ahhh… but you have no choice? (Where have I heard that before?) Well… if you really have no choice, here are some tips to help improve the experience:

    Avoid screen scraping. This is not 1999 and SharePoint is not an old green screen from a mainframe… so why are you treating it like it is? Screen scraping is time consuming and client intensive. Take advantage of tools like SPServices to do your data retrieval when possible.

    Fine-tune your DOM searches. A lot of time can be eaten up just searching the DOM and ignoring table rows that you don't need. Write better jQuery to only loop through the table rows that you need, or to only access the specific elements you need. Take advantage of element IDs to return the one element you are looking for, instead of looping through all of the DOM over and over again.

    Write better jQuery. Remember, this is development. Think about how you can write cleaner, faster jQuery. This directly relates to the previous point of improving your DOM searches, but also to how you use arrays, variables and loops. Do you REALLY need to loop through that array 3 times? How can you knock it down to 2 times, or even 1? When you have lots of calculations and data that you are manipulating, every operation adds up. Think about how you can streamline it. Back in the old days, before RAM was abundant, cores were plentiful and dinosaurs roamed the earth, us developers had to take performance into account in everything we did. It's a lost art that really needs to be used here.
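
    As a quick illustrative sketch of those last two tips (the table ID, class, and "Overdue" value below are invented for the example, not from the original post), compare a DOM search that crawls every cell on the page with one that is scoped and cached:

        // Naive: walks every <td> in the entire page, on every call.
        $("td").each(function () {
          if ($(this).text() === "Overdue") {
            $(this).css("background-color", "red");
          }
        });

        // Better: scope the search to one table by ID, touch only the column
        // you care about, and cache the result instead of re-querying the DOM.
        var $statusCells = $("#tasksTable td.status");  // one DOM search, reused
        $statusCells.filter(function () {
          return $(this).text() === "Overdue";
        }).css("background-color", "red");

    Same outcome, but the second version gives the browser far fewer nodes to visit, which is exactly the kind of difference a user on a garage-sale PC will feel.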
    You should not be writing jQuery in SharePoint if you are sending a lot of data over the wire…

    Developer: "Awesome… you can easily call SharePoint's web services to retrieve and write data using SPServices!" Administrator: "Crap! You can easily call SharePoint's web services to retrieve and write data using SPServices!" SPServices may indeed be the best thing that happened to SharePoint since the invention of SharePoint Saturdays by Godfather Lotter… BUT you HAVE to use it wisely! (I REFUSE to make the Spider-Man reference.) If you do not know what you are doing, your code will bring back EVERY field and EVERY row from a list and push that over the internet with all that lovely XML wrapped around it. That can be a HUGE amount of data and will GREATLY impact performance! Calling several web service methods at the same time can cause the same problem and can negatively impact your SharePoint servers. These problems, thankfully, are not difficult to rectify if you are careful:

    Limit list data retrieved. Use CAML to reduce the number of rows returned, and limit the fields returned using ViewFields. You should definitely be doing this regardless. If you aren't, I hope your admin thumps you upside the head.

    Batch large list updates. You may or may not have noticed that if you try to do large updates (hundreds of rows), the performance is either completely abysmal or it fails over half the time. You can greatly improve performance and avoid timeouts by breaking up your updates into several smaller updates. I don't know if there is a magic number for best performance; it really depends on how much data you are sending back, more than on the number of rows. However, I have found that 200 rows generally works well. Play around and find the right number for your situation.

    Delay web service calls when possible. One of the cool things about jQuery and SPServices is that you can delay queries to the server until they are actually needed, instead of doing them all at once. This can lead to performance improvements over DataViewWebParts and even .NET code in the right situations. So, don't load the data until it's needed. In some instances you may not need to retrieve the data at all, so why retrieve it ALL the time?
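
    As a hedged illustration of that first tip (the list name, fields, and query below are invented; check the SPServices documentation for the exact options in your version), a scoped SPServices call looks something like this:

        // Ask for two fields of at most 200 "Active" rows, instead of
        // every field of every row in the list.
        $().SPServices({
          operation: "GetListItems",
          async: true,
          listName: "Tasks",                     // hypothetical list name
          CAMLViewFields: "<ViewFields>" +
                          "<FieldRef Name='Title' />" +
                          "<FieldRef Name='Status' />" +
                          "</ViewFields>",
          CAMLQuery: "<Query><Where>" +
                     "<Eq><FieldRef Name='Status' />" +
                     "<Value Type='Text'>Active</Value></Eq>" +
                     "</Where></Query>",
          CAMLRowLimit: 200,
          completefunc: function (xData) {
            $(xData.responseXML).SPFilterNode("z:row").each(function () {
              console.log($(this).attr("ows_Title"));
            });
          }
        });

    The same 200-row figure works as a starting point for batching updates, too.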
    You should not be writing jQuery in SharePoint if there is a better solution…

    jQuery is NOT the silver bullet in SharePoint. It is not the answer to every question; it is just another tool in the developer's toolkit. I urge all developers to know what options exist out there and choose the right one! Sometimes it will be jQuery, sometimes it will be .NET, sometimes it will be XSL, and sometimes it will be some other choice… So, when is there a better solution than jQuery?

    When you can't get away from performance problems. Sometimes jQuery will just give you horrible performance regardless of what you do, because of unavoidable obstacles. In these situations you are going to have to figure out an alternative. Can I do it with a DVWP, or do I have to crack open Visual Studio?

    When you need to do something that jQuery can't do. There are lots of things you can't do in jQuery, like elevate privileges, write event handlers or workflows, or interact with back-end systems that have no web service interface. It just can't do everything.

    When it can be done faster and more efficiently another way. Why are you spending time writing jQuery to do something a DataViewWebPart would handle in 5 minutes? Or why are you trying to implement complicated logic that would be simple to do in .NET? If your answer is that you don't have the option, okay. BUT if you do have the option, don't reinvent the wheel! Take advantage of the other tools. The answer is not always jQuery… sorry… the kool-aid tastes good, but sweet tea is pretty awesome too.

    You should not be using jQuery in SharePoint if you are a moron…

    Let's finish up the blog on a high note… Yes, it's true, I sometimes type things just to get a reaction… I guess this section title might be a good example, but it feels good sometimes just to type the words that a lot of us think… So… don't be that guy! Another good buddy of mine who works for Microsoft told me, "I loved jQuery in SharePoint… until I had to support it." He went on to explain that some user was making several web service calls on a page using jQuery and then was calling Microsoft and COMPLAINING because the page took so long to load… DUH! What do you expect to happen when you are pushing that much data over the wire and are making that many web service calls at once!? It's one thing to write that kind of code and accept that it's just going to take a while; it's COMPLETELY another issue to do that and then complain when it's not lightning fast! Someone's gene pool needs some chlorine. So, I think this is a nice summary of the blog… DON'T be that guy… don't be a moron. How can you stop yourself from being a moron? Ah… glad you asked. Here are some tips:

    Think. Is jQuery the right solution to my problem? Is there a better approach? What are the implications and pitfalls of using jQuery in this situation?

    Search. What are others doing? Does someone have a better solution? Is there a third-party library that does the same thing I need?

    Plan. Write good jQuery. Limit calculations and data sent over the wire, and don't reinvent the wheel when possible.

    Test. Okay, it works well on your machine. Try it on others, ESPECIALLY if this is for an external site. Test with empty data. Test with hundreds of rows of data. Test as many scenarios as possible. Monitor the server resources to see the impact there as well.

    Ask the experts. As smart as you are, there are people smarter than you. Even the experts talk to each other to make sure they aren't doing something stupid. And for the MOST part they are pretty nice guys. Marc Anderson and Christophe Humbert are two guys who regularly keep me in line. Make sure you aren't doing something stupid.

    Repeat. So, when you think you have the best solution possible, repeat the steps above just to be safe.

    Conclusion

    jQuery is an awesome tool and has come in handy on many occasions. I'm even teaching a half-day SharePoint & jQuery workshop at the upcoming SPTechCon in Boston, if you want to berate me in person. However, it's only as awesome as the developer behind the keyboard. It IS development, and it has its pitfalls. Knowledge and experience are invaluable to giving the user the best experience possible. Let's face it: in the end, no matter our opinions, prejudices, or egos, providing our clients, customers, and users with the best solution possible is what counts. Period… end of sentence…

    Read the article

  • From Bluehost to WP Engine, My WordPress Story

    - by thatjeffsmith
    This is probably the longest blog post I've written in a LONG time. And if you're used to coming here for the Oracle stuff, this post is not about that. It's about my blog, and the stuff under the hood that makes it run, AKA WordPress. If you want to skip to the juicy stuff, then use these shortcuts: My Site Slowed Down; How I Moved to WP Engine; How WP Engine 'Hooked' Me; Why WP Engine?

    I started thatJeffSmith.com on May 28th, 2010. I had already been blogging for several years, but a couple of really smart people I respected (Andy, Brent – thanks again!) suggested that I take ownership of my content and begin building my personal brand. I thought that was a good idea, and so I signed up for service with Bluehost. Bluehost makes setting up a WordPress site very, very easy. And they continued to be easy to work with for the past 2 years. I would even recommend them to anyone looking to host their own WordPress install/site. For $83.40, I purchased a year's worth of service and my domain name registration – a very good value. And then last year I paid $107.40 for another year's service. And when that year expired, I paid another $190.80 for an additional two years' service in advance. I had been, up to that point, getting my money's worth. And then, just a few weeks ago…

    My Site Slowed to a Crawl

    (That spike in my traffic graph was from an April Fool's Day post, I think.) Why? Well, when I first started blogging, I had the same problem that most beginner bloggers have – not many readers. In my first year of blogging, I think the highest number of readers on a single day was about 125. I remember that day, as I was very excited to break 100! Bluehost was very reliable, serving up my content with maybe a total of 3-4 outages in the past 2 years. Support was usually very prompt with answers and solutions, and I love their 'Chat now' technology – much nicer than message-boards-only or pay-to-talk phone support. In the past 6 months, however, I noticed a couple of things: daily traffic was increasing – woohoo! – and my service was experiencing severe CPU throttling – doh! To be honest, I wasn't aware the throttling was occurring, but I did know that the response time of my blog was starting to lag. Average load times were approaching 20-30 seconds. Not good when good sites are loading in 5 seconds or less. And just this past week, in getting ready to launch a new website for work that sucked in an RSS feed from my blog, the new page was left waiting for more than a minute. Not good! In fact my boss asked, why aren't you blogging on Blogger? Ugh.

    I tried a few things to fix the problem: I paid for a premium WordPress theme – Themify's Grido (thanks to @SQLRockstar for the heads-up); I installed a couple of WP caching plugins; and I read every WP optimization blog post I could get my greedy little eyes on. However, at the same time I was also getting addicted to WordPress bloggers talking about all the cool things you could do with your blog. As a result, I had at one point about 30 different plugins installed. WordPress runs on MySQL, and certain queries running via these plugins were starving for CPU. Plugins that would be called on every page load meant that as more people clicked on my site, the more CPU I needed. I'm not stupid, so I eventually figured out that maybe fewer plugins was better, and I was able to go down to just 20. But still, the site was running like a dog.

    CPU throttling makes MySQL wait to run a query. Bluehost runs shared servers. Your site runs on the same box that several hundred (or thousand?) other services are running on. If you take more CPU than they think you should have, they will limit your service by making you stand in line for CPU, AKA 'throttling.' This is not bad. This business model allows them to serve many, many users for a very fair price. It works great until, well, until it doesn't. I noticed in the last week that for every minute of service, I was being throttled between 60 and 300 seconds. If there were 5 MySQL processes running, then every single one of them was being held in check. Blog visitors noticed this, as their page requests would take a minute or more to be answered. Bluehost unfortunately doesn't offer dedicated server hosting, so there was no real upgrade path for me to follow and remain one of their customers. So what was I to do? Uninstall every plugin and hope the site sped up? Ask people to take turns on my blog? I decided to spend my way out of the problem. I signed up for service with WP Engine and moved thatJeffSmith.com. The first 2 months are free, and after that it's about $29/month to run my site on their system. My math tells me that's a good bit more expensive than what Bluehost was charging me – to the tune of about 300% more a month. Oh, and I should just say that my blog is a personal blog, even though I talk about work stuff here. I don't get paid for blogging, I don't sell ads, and I don't expense the service fees – this is my personal passion.

    So is it worth it? In the first 4 days, it seems to be totally worth it. Load times have gone from 20-30 seconds to less than 5 seconds. A few folks have told me via Twitter that they notice faster page loads. I anticipate this will indirectly lead to more traffic, as Google penalizes you in search results if your site is too slow, and of course some folks won't even bother waiting more than 5-10 seconds. I noticed right away that writing posts, uploading pictures, and just using the WordPress dashboard in general was much more responsive. So writing is less of a chore now, which means I won't have a good reason not to write.

    How I Moved to WP Engine

    I signed up for the service and registered my domain. I then took a full export of my 'old' site by doing an FTP GET of all my files, then did a MySQL database backup, exported my WordPress theme settings to a .zip file, and finally used the WordPress 'Export' feature. I then used the WordPress 'Import' on the new site to load up my posts. Then I uploaded the theme .zip package from Themify. Then I FTP'd the wp-content directory up to my new server using SFTP (WP Engine only supports secure FTP – good on them!). Using a temporary URL to see my new site, I was able to confirm that everything looked mostly OK – I'll detail the challenges and issues of fixing the content next – but then it was time to 'flip the switch.' I updated the IP address that the DNS lookup tables use to route traffic to my new server. In a matter of minutes the DNS servers around the world were updated, and it was time to see the new site!

    But It Was 'Broken'

    I had never moved a website before, and in my rush to update the DNS, I had changed the records without really finding out what I was supposed to do first. After re-reading the directions provided by WP Engine and following the guidance of their support engineer, I realized I needed to set the CNAME (alias) 'www' record to point to a different URL than the 'www.thatjeffsmith.com' entry I had set. Once corrected, the site was up and running in less than a minute.

    Then It Was Only Mostly Broken

    Many of my plugins weren't working. Apparently just FTP'ing the wp-content directory up wasn't the proper way to re-install the plugins. I suspect the file permissions or file ownership weren't proper. Some plugins were working, many had their settings wiped to the defaults, and a few just didn't work again. I had to delete the directory of each broken plugin manually via SFTP, and then use the WP dashboard to install it from scratch. And here was my first 'lesson' – don't switch the DNS records until you've completely tested your new site. I wasn't able to navigate the old WP console to review my plugin settings. Thankfully I was able to use the Wayback Machine to reverse-engineer some things, and of course most plugins aren't that complicated to set up to begin with. An example of one that I had to redo from scratch is the 'Twitter @Anywhere Plus' plugin that I use to create the form that allows folks to tweet a post they enjoyed at the end of each story.

    How WP Engine 'Hooked' Me

    I actually signed up with another provider first. They ranked highly in Google searches and a few tweeps recommended them to me. But hours after signing up, I still didn't have a server ready, and I was ready to give up on them. They offered no chat or phone support – only mail and message boards. And the message boards were rife with posts about how the service had gone downhill in the past 6 months. To their credit, they did make it easy to cancel, although I did have to do so via email, as their website 'cancel' button was non-existent. Within minutes of activating my WP Engine account, I had received my welcome message and directions on how to get started. I was able to see my staged website right away. They also did something very cool before I even got started – they looked at my existing site and told me by how much they could improve its performance. The proof is in the web pudding. I like this for a few reasons, but primarily I liked their business model. It told me they knew what they were doing, and that they were willing to put their money where their mouth was. This was further evident in their 60-day money-back guarantee. And if I understand it correctly, they don't even take your money until after that 60-day period is over. After a day, I was welcomed by the WP Engine social media team, and was given the opportunity to subscribe to their newsletter and follow their account on Twitter. I noticed their Twitter team is sure to post regular WordPress tips several times a day. It's not just an account that's set up for the sake of having a Twitter presence. These little things add up and give me confidence in my decision to choose them as my hosting partner. 'Partner' – that's a lot nicer word than just 'service provider,' isn't it? Oh, and they offered me a t-shirt. Don't ever doubt the power of a 'free' t-shirt! (How awesome is that welcome e-mail, from a customer perspective?) I wasn't really expecting any of this. Exceeding expectations before I have even handed over a single dollar seems like a pretty good business plan. This is how you treat customers. Love them to death, and they reward you with loyalty.

    But Jeff, You Skipped a Piece Here – Why WP Engine?

    I found them on one of those 'Top 10' list posts and pulled up their webpage. I noticed they offered a specialized service – they host WordPress installs, and that's it. Their servers are tuned specifically for running WordPress. They had, in bolded text, things like 'INSANELY FAST. INFINITELY SCALABLE.' and 'LIGHTNING SPEED.' And then they offered insurance against hackers, and they took care of automatic backups and restores. The only drawbacks I have noticed so far relate to plugins I used that have been 'blacklisted.' In order to guarantee that 'lightning' speed, they have banned the use of the CPU-suckiest plugins. One of those is the 'Related Posts' plugin. So if you are a subscriber and are reading this in your email, you'll notice there are no links back to my blog to continue reading other related stories. Since that referral traffic is in the very low single digits for my site, I decided that I'm OK with that. I'd rather have the warp-speed page loads. Again, I think that will lead to higher traffic down the road. In 50+ days I will need to decide if WP Engine is a permanent solution. I'll be sure to update this post when that time comes and let y'all know how it turns out.

    Read the article

  • PASS: Bylaw Changes

    - by Bill Graziano
    While you're reading this, a post should be going up on the PASS blog on the plans to change our bylaws. You should be able to find our old bylaws, our proposed bylaws and a red-lined version of the changes. We plan to listen to feedback until March 31st. At that point we'll decide whether to vote on these changes or take other action. The executive summary is that we're adding a restriction to prevent more than two people from the same company on the Board, and eliminating the Board's Officer Appointment Committee so that officers are directly elected by the Board. This second change better matches how officer elections have been conducted in the past.

    The Gritty Details

    Our scope was to change the bylaws to match how PASS actually works and to tackle a limited set of issues. Changing the bylaws is hard. We've been working on these changes since the March board meeting last year. At that meeting we met and talked through the issues we wanted to address. In years past, the Board has tried to come up with language and then we've discussed and negotiated to get to the result. In March, we gave HQ guidance on what we wanted and asked them to come up with a starting point. Hannes worked on building us an initial set of changes that we could work our way through. Discussing changes like this over email is difficult and wasn't very productive. We do a much better job on this at the in-person Board meetings. Unfortunately there are only 2 or 3 of those a year. In August we met in Nashville and spent time discussing the changes. That was also the day after we released the slate for the 2010 election, and the discussion around that colored what we talked about in terms of these changes. We talked very briefly at the Summit and again reviewed and revised the changes at the Board meeting in January. This is the result of those changes and discussions. We made numerous small changes to clean up language and make wording more clear. We also made two big changes.

    Director Employment Restrictions

    The first is that only two people from the same company can serve on the Board at the same time. The actual language in section VI.3 reads: "A maximum of two (2) Directors who are employed by, or who are joint owners or partners in, the same for-profit venture, company, organization, or other legal entity, may concurrently serve on the PASS Board of Directors at any time. The definition of 'employed' is at the sole discretion of the Board."

    And what a mess this turns out to be in practice. Our membership is a hodgepodge of interlocking relationships. Let's say three Board members get together and start a blog service for SQL Server bloggers. It's technically for-profit. Let's assume it makes $8 in the first year. Does that trigger this clause? (Technically yes.) We had a horrible time trying to write language that covered everything. All the sample bylaws that we found were just as vague as this. That led to the third clause in this section. The first sentence reads: "The Board of Directors reserves the right, strictly on a case-by-case basis, to overrule the requirements of Section VI.3 by majority decision for any single Director's conflict of employment." We needed some way to handle the trivial issues and exercise some judgment. It seems like a public vote is the best way. This discloses the relationship and gets each Board member on record on the issue. In practice I think this clause will rarely be used. I think this entire section will only be invoked for actual employment issues and not for small side projects. In either case we have the mechanisms in place to handle it in a public, transparent way.

    That's the first and third clauses. The second clause says that if your situation changes and you fall afoul of this restriction, you need to notify the Board. The clause further states that if a new job means a Board member violates the "two-per-company" rule, the Board may request their resignation. The Board can also allow the person to continue serving with a majority vote. I think this will also take some judgment. Consider a person switching jobs in a way that leads to three people from the same company. I'm very likely to ask someone to resign if all three are two weeks into a two-year term. I'm unlikely to ask anyone to resign if one is two weeks away from ending their term. In either case, the decision will be a public vote that we can be held accountable for. One concern that was raised was whether this would affect someone choosing to accept a job. I think that's a choice for them to make. PASS is clearly stating its intent that only two directors from any one organization should serve at any time. Once these bylaws are approved, this policy should not come as a surprise to any potential or current Board members considering a job change. This clause isn't perfect. The biggest hole is business relationships that aren't defined above. Let's say that two employees from company "X" serve on the Board. What happens if I accept a full-time consulting contract with that company? Let's assume I'm working directly for one of the two existing Board members. That doesn't violate section VI.3, but I think it's clearly the kind of relationship we'd like to prevent. Unfortunately that was even harder to write than what we have now. I fully expect that in the next revision of the bylaws we'll address this. It just didn't make it into this one.

    Officer Elections

    The officer election process received a slightly different rewrite. Our goal was to codify in the bylaws the actual process we used to elect the officers. The officers are the President, Executive Vice-President (EVP) and Vice-President of Marketing. The Immediate Past President (IPP) is also an officer but isn't elected. The IPP serves in that role for two years after completing their term as President. We do that for continuity's sake. Some organizations have a President-elect who serves for one or two years. The group that founded PASS chose to have an IPP. When I started on the Board, the Nominating Committee (NomCom) selected the slate for the at-large directors and the slate for the officers. There was always one candidate for each officer position. It wasn't really an election so much as the NomCom deciding who the next person would be for each officer position. Behind the scenes, the Board worked to select the best people for the role.

    In June 2009 that process was changed to bring it in line with what actually happens. An Officer Appointment Committee was created that was a subset of the Board. That committee would take time to interview the candidates and present a slate to the Board for approval. The majority vote of the Board would determine the officers for the next two years. In practice, the Board itself interviewed the candidates and conducted the elections. That means it was time to change the bylaws again. Sections VII.2 and VII.3 spell out the process used to select the officers. We use the phrase "Officer Appointment" to separate it from the Director election, but the end result is that the Board elects the officers. Section VII.3 starts: "Officers shall be appointed bi-annually by a majority of all the voting members of the Board of Directors." Everything else revolves around that sentence. We use the word appoint, but they truly are elected. There are details in the bylaws for term limits, minimum requirements for President (1 prior term as an officer), tie breakers and filling vacancies. In practice we will have an election for President, then an election for EVP and then an election for VP of Marketing. That means that losing candidates will be able to fall down the ladder and run for the next open position. Another point to note is that officers aren't at-large directors. That means if a current sitting officer loses all three elections, they are off the Board. Having Board member votes public will help with the transparency of this approach.

    This process has a number of positives and negatives. The biggest concern I expect to hear is that our members don't directly choose the officers. I'm going to try to list all the positives and negatives of this approach. Many non-profits value continuity and are slower to change than a business. On the plus side this promotes that. On the negative side this promotes that. If we change too slowly, the members complain that we aren't responsive. If we change too quickly, we make mistakes and fail at various things. We've been criticized for both of those lately, so I'm not entirely sure where to draw the line. My rough assumption at this point is that we're going too slow on governance and too quickly on becoming "more than a Summit." This approach creates competition in the officer elections. If you are an at-large director, there is no consequence to losing an election. If you are an officer, the only way to stay on the Board is to win an officer election or an at-large election. If you are an officer and lose an election, you can always run for the next office down. This makes it very easy for multiple people to contest an election. There is value in a person moving through the officer positions up to the Presidency. Having the Board select the officers promotes this. The downside is that it takes a LOT of time to get to the Presidency. We've had good people struggle with burnout. We've had lots of discussion around this. The process as we've described it here makes it possible for someone to move quickly through the ranks, but doesn't prevent people from working their way up through each role.

    We talked long and hard about having the officers elected by the members. We had a self-imposed deadline to complete these changes prior to elections this summer. The other challenge was that our original goal was to make the bylaws reflect our actual process rather than create a new one. I believe we accomplished this goal. We ran out of time to consider this option in the detail it needs. Having member elections for officers needs a number of problems solved. We would need a way for candidates to fall through the election. This is what promotes competition. Without it, few people would risk an election and we'd be back to one candidate per slot. We need to do this without having multiple elections. We may be able to copy what other organizations are doing, but I was surprised at how little I could find on other organizations. We also need a way for people who lose an officer election to win an at-large election. Otherwise we'll have very little competition for officers.

    This brings me to an area where I think we as a Board haven't done a good job. We haven't built a strong process to tell you who is doing a good job and who isn't. This is a double-edged sword. I don't want to highlight Board members who are failing. That's not a good way to get people to volunteer and run for the Board. But I also need a way to let the members make an informed choice about who is doing a good job and would make a good officer. Encouraging Board members to blog, publishing minutes and making votes public helps in that regard, but isn't the final answer. I don't know what the final answer is yet. I do know that the Board members themselves are uniquely positioned to know which other Board members are doing good work. They know who speaks up in meetings, who works to build consensus, who has good ideas and who works with the members.

    What I Could Do Better

    I've learned a lot writing this about how we communicated with our members. The next time we revise the bylaws, I'd do a few things differently. The biggest change would be to provide better documentation. The March 2009 minutes provide a very detailed look into what changes we wanted to make to the bylaws. Looking back, I'm a little surprised at how closely they matched our final changes and covered the various arguments. If you just read those, you'd get 90% of what we eventually changed. Nearly everything else was just details around implementation. I'd also consider publishing a scope document defining exactly what we were doing and why. I think it really helped that we had a limited, defined goal in mind. I don't think we did a good job communicating that goal outside the meeting minutes, though. That said, I wish I'd blogged more after the August and January meetings. I think it would have helped more people to know that this change was coming and to be ready for it.

    Conclusion

    These changes address two big concerns that the Board had. First, they prevent a single organization from dominating the Board. Second, they codify and clearly spell out how officers are elected. This is the process that was previously followed, but it was somewhat murky. These changes bring clarity and clearly explain the process the Board will follow. We're going to listen to feedback until March 31st. At that time we'll decide whether to approve these changes. I'm also assuming that we'll start another round of changes in the next year or two. Are there other issues in the bylaws that we should tackle in the future?

    Read the article

  • Oracle 11gR2: NLS_CHARACTERSET accidentally removed with an UPDATE query

    - by Marco Nätlitz
    Hi folks, I have a fresh installation of Oracle 11gR2 (x64) on CentOS. After the installation I wanted to get productive and started to import my dumps. One of the dumps caused a character-set error, so I tried to change the system's character set to the one specified in the dump. I ran a statement like this:

    UPDATE nls_database_parameters SET parameter='WS....' WHERE parameter='NLS_CHARACTERSET';

    As you can see, I wrote the character-set value into the parameter column instead of the value column. I guess I was thinking too much about the problem instead of checking what I was typing. After the query, the NLS_CHARACTERSET parameter is gone and the server reports that the character set is "(null)". I want to put the NLS_CHARACTERSET parameter back into the table but don't know how. If I try something like this:

    INSERT INTO nls_database_parameters (PARAMETERS, VALUE) VALUES ("NLS_CHARACTERSET", "AL32UTF8");

    I get this error (translated from the German original):

    Error at command line:1 column:84 Error report: *Cause: SQL error: ORA-00984: column not allowed here *Action: 00984. 00000 - "column not allowed here"

    Do you have any idea how I can fix that? Thanks and best regards, Marco

    Read the article

  • Automate ripping TV show DVDs

    - by skarface
    RipIt and HandBrake do a really good job of ripping and compressing. For normal "single main feature" DVDs I have a good workflow, and for the most part HandBrake does a good job of figuring out what the main title is. The process for ripping a DVD that has multiple episodes of something kinda sucks, though. Has anyone made any progress on automating (or at least simplifying) the process of getting show-s01e01.avi from a DVD?
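
    Not from the original question, but a sketch of the usual trick: each episode on a multi-episode disc is its own DVD title, and HandBrake's command-line build (HandBrakeCLI) can target a title by number. Everything below – the device path, which title the episodes start at, the episode count – is a per-disc assumption you'd have to adjust after checking HandBrakeCLI's scan output:

        // Sketch: rip DVD titles 2..7 as show-s01e01 .. show-s01e06.
        const { execFileSync } = require("child_process");

        const source = "/dev/dvd";       // or a path to a RipIt .dvdmedia folder
        const firstEpisodeTitle = 2;     // title 1 is often a "play all" title
        const episodeCount = 6;

        for (let i = 0; i < episodeCount; i++) {
          const out = "show-s01e" + String(i + 1).padStart(2, "0") + ".m4v";
          execFileSync("HandBrakeCLI", [
            "-i", source,
            "-t", String(firstEpisodeTitle + i),  // one DVD title per episode
            "-o", out,
          ], { stdio: "inherit" });
        }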

    Read the article

  • What are some of the best wireless routers for a price-conscious home power-user?

    - by Alain
    I'm extremely dissatisfied with the 'popular' choice of routers for homes and small offices. They are expensive (upwards of $60), lack a great deal of useful configuration options, and seem to need to be restarted quite often (Linksys comes to mind). I've been on the market for a good router lately, and slowly collecting a set of requirements I feel good routers should meet:

    Maximum number of TCP/IP connections. This isn't something I see any routers advertise, but in terms of supporting torrent applications, I've been screwed by routers that support fewer than 20 here. From what I understand, a fairly standard number is 200, but there are not-so-expensive routers that support thousands.

    Router configuration menu. Most have standard menus that let you set up basic things like your wireless network encryption settings, uPnP, and maybe even a DMZ (demilitarized zone). An absolute requirement for me, however, is firmware good enough to support: explicit port forwarding; assigning static local IPs to specific MAC addresses, or at least port forwarding by MAC address; port, IP and MAC filtering; dynamic DNS service for home users who want to set up a server but have a dynamic IP; and, ideally, traffic shaping – giving priority to packets from certain machines or over certain ports.

    Strong wireless signal. If getting a reliable signal requires me to be so close to the router that I could connect an Ethernet cable, it's not good enough.

    As many Ethernet ports as possible. Because I want to be able to switch from console gaming to PC gaming without visiting my router.

    So far, the best thing I've stumbled upon (in the bargain bin at Staples) was a $20 retail plus router. It was meant to be the cheapest alternative until I could find something better to purchase online, but I was actually blown away by the firmware capabilities. It supports defining reserved bandwidth for certain network traffic, dynamic DNS, reserving local IPs for specific MAC addresses, etc. At 2 am, when my roommate is killing our Internet with their torrents, I can limit their bandwidth without outright blacklisting them. I have, however, met serious limitations when it comes to network traffic between local machines. It claims a 300Mbps connection, but I have trouble streaming videos from my PC to my console or other laptops wirelessly. It has a meltdown and needs to be reset once in a while (no more than a couple of times a month), and it's got a 200-connection limit. There are 4 Ethernet ports in the back, but I'm pretty sure the first doesn't work.

    So some great answers to this question would be: any metrics you use to compare routers, and requirements you have for new candidates; the best routers you've found for supporting home servers, file management systems, high-volume torrent traffic, a good price/feature ratio, etc.; and good configuration advice (aside from 'use Ethernet whenever possible'). Thanks for your feedback and experiences!

    Read the article

  • Portable voice recorder

    - by Quintin Par
    Where should I look if I need a good voice recorder? Can someone point me to good reviews? I remember seeing this Olympus model some time back, but I'm not sure if it's the best. Also, it seems to be a bit pricey... My requirements:

    Should not be expensive
    USB
    Really good battery power
    Bluetooth
    Small size
    English translator

    Read the article

  • Monitor for HD video editing

    - by Kato
    I have been researching for days and nights for a good monitor to buy for a Mac Pro with an ATI Radeon 2600 XT (256MB). It will be used extensively for HD video editing (1080p) and photo editing, and likely also digital/3D animation next year (a lot of FCP + CS4). I am a student, so money is a bit of an issue, but I want something that I'll be able to use semi-professionally after I'm done with school, and I am willing to finance something if it is worth the cost. I'm HOPING for something under $1000, though. The IPS UltraSharps from Dell seem to be getting good reviews from other video editors. Accurate color is a concern for me (hopefully something that covers the Adobe RGB spectrum), as well as a decent response time, HD resolutions, and a DVI port. Also something with good gradient/definition in black areas, as this is difficult for editing on most LCDs. 1:1 pixel mapping, good brightness, good DVD playback, etc. Hopefully this is not impossible to find for under $2000!

    Read the article

  • cPanel web server redundancy advice?

    - by crgnz
    At present I operate a (reasonably low-volume) web-hosting service with a CentOS 5.3 server running cPanel/WHM. I would like to implement a level of redundancy such that in the event of server failure, I can restore service with a minimum of effort in less than 60 minutes. I also want to set up a secondary DNS that cPanel will replicate with. My current idea is to kill two birds with one stone:

    My current server is called "www1".
    Purchase an identical server (HP DL360 G4) with mirrored disks. Call this server "www2".
    Install CentOS 5.4 (or perhaps I should install 5.3 to be identical with www1).
    Install cPanel/WHM on this server and fully license it.
    Set up www1 and www2 cPanel to replicate DNS with each other.
    Set up a nightly replication script that does the following: a) rsyncs the /home directory from www1 to www2; b) dumps all MySQL databases on www1 and copies them to a temp folder (with root access only) on www2; c) triggers a script to run on www2 that restores the MySQL dumps.

    Thus each night a fully working copy of all the websites and MySQL databases is copied to www2. I do not have enough knowledge of MySQL replication to understand whether it works safely and transparently with cPanel, hence the proposed dump/copy/restore – for not knowing any better! In the event that www1 dies a horrible death, I envisage that I could log in to www2, change the IP addresses to those that www1 had, and presto, the websites are available again. The advantage of this idea is that it is fairly simple and "low tech" and thus does not require an expert sysadmin to set up and monitor (I am NOT an expert sysadmin). The disadvantage is that up to a full day's worth of data changes would be lost. I think this would be acceptable to the sorts of customers I host at the moment. The other disadvantage would be having to pay for a full cPanel license, but I am comfortable with that cost, so for now all I want to discuss are the technical considerations. Is this a sound scheme?
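
    For what it's worth, step 6 of the plan above can be sketched in a few lines. Everything here (hostnames, paths, and the choice of Node rather than a plain cron'd shell script) is illustrative only; the flags shown are standard rsync/mysqldump/scp/ssh usage:

        // Nightly job, run on www1 as root: mirror /home, then dump/copy/restore MySQL.
        const { execFileSync } = require("child_process");
        const run = (cmd, args) => execFileSync(cmd, args, { stdio: "inherit" });

        // a) mirror the /home directory to the standby server
        run("rsync", ["-a", "--delete", "-e", "ssh", "/home/", "root@www2:/home/"]);

        // b) dump all MySQL databases and copy the dump to www2 (root-only temp dir)
        run("sh", ["-c", "mysqldump --all-databases > /root/all-dbs.sql"]);
        run("scp", ["/root/all-dbs.sql", "root@www2:/root/all-dbs.sql"]);

        // c) trigger the restore on www2
        run("ssh", ["root@www2", "mysql < /root/all-dbs.sql"]);

    A real setup would add error handling and run from cron; the dump/copy/restore approach also means www2's MySQL is only as fresh as last night's run, which matches the "up to a day of changes lost" trade-off described above.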

    Read the article

  • Voice command in non-english Windows 7 Home Premium

    - by Cesar
    Is there good software to add voice commands to my Windows 7 machine? English voice commands would work for me, but I am unable to get them via the OS, both because I use a pt-BR Windows 7 and because I only have the Home Premium edition (and upgrading is too expensive for me). Do you recommend a good alternative? It can be either free or paid, as long as it's good.

    Read the article

  • Batch convert divx to iPhone format

    - by Kelsey
    I am looking for free software to do batch conversions of DivX video files to iPhone format. I have read this thread: http://superuser.com/questions/5784/looking-to-convert-video-to-iphone-format. HandBrake works well for single files, but it has very little customization with regard to file names, and the batch functionality is not very good (or at least I can't get it to work very easily). Can anyone recommend a good batch converter? Even a script for HandBrake to do a batch of all files in a specific directory would be useful.
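
    As a starting point, here's a hedged sketch of the directory-batch idea using HandBrake's command-line build (HandBrakeCLI). The folder paths and preset name are assumptions – older HandBrake builds shipped an "iPhone & iPod Touch" preset, but check the preset list on your version – and you'd extend the extension filter to taste:

        // Convert every .avi/.divx file in srcDir to an iPhone-ready .mp4 in outDir.
        const { execFileSync } = require("child_process");
        const fs = require("fs");
        const path = require("path");

        const srcDir = "/videos/incoming";   // hypothetical paths
        const outDir = "/videos/iphone";

        for (const name of fs.readdirSync(srcDir)) {
          if (!/\.(avi|divx)$/i.test(name)) continue;  // keep source names, new extension
          const out = path.join(outDir, name.replace(/\.[^.]+$/, ".mp4"));
          execFileSync("HandBrakeCLI", [
            "-i", path.join(srcDir, name),
            "-o", out,
            "--preset", "iPhone & iPod Touch",         // verify the preset name first
          ], { stdio: "inherit" });
        }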

    Read the article

  • Transparent, unicode X terminal not tied to a Desktop Environment?

    - by jamuraa
    I've been looking for this for a while now and I just haven't been able to find one. The last few that I used were:
    - aterm - fast, with good transparency support, but as far as I can tell it doesn't support Unicode at all. The dependency graph is also reasonable.
    - gnome-terminal - good, with good transparency support plus Unicode, but it pulls in just about everything in GNOME, and I don't use anything else in GNOME. It was also somewhat slow (noticeable lag in updating at times) and wouldn't use the fonts I wanted.
    - Eterm - same story as aterm: good dependencies and transparency, but no Unicode.
    Does anyone have suggestions, or will I be stuck with gnome-terminal's dependencies and slowness?

    Read the article

  • What are the pros of switching DNS names with a database server hardware upgrade?

    - by wilbbe01
    When we upgrade to new hardware at work we usually increment a number in the DNS name. For example, we have a server called database-2 that is slated to become database-3 in the coming days. I haven't been able to find a good reason why this is good practice. To me, the work of chasing down every end-user machine, as well as every server dependent on the database server, is far riskier than simply moving the database, and its IP address and name with it, to the new hardware. A little over a year ago we spent several months fielding requests as infrequent users discovered that their software needed to be updated to point to the new DNS name. I am struggling to find answers as to why this is a good practice. So the question: why is using DNS names as a "server hardware version identifier" a good idea? What am I overlooking? Thanks much.

    Read the article

  • Diagramming Software for a Developer/Designer

    - by Craig Walker
    For a long time I've been looking for a good diagramming/vector-based drawing program that meets my needs as a developer. I'd like to:
    - Draw database diagrams
    - Draw flow charts
    - Draw object-modeling diagrams (UML being the standard)
    - Draw other free-form diagrams (basically boxes & arrows with the occasional clipart)
    - Draw mockups of user interfaces and web pages
    EDIT: I want good-looking electronic-format diagrams that I can show to third parties, not just something for my own internal use. EDIT 2: I'm also looking for Windows software, although I'm toying with the idea of switching to Mac, so a really good Mac-only product might get me to switch.
    Basically I need a good vector graphics program (with decent grouping, connecting lines, and ideally auto-routing). I'd prefer a diagramming tool that can also be used for drawing (for the UI mockups) rather than a drawing tool that can also be used for diagrams. I've tried Visio on several occasions, and every time I've been disappointed. The interface always seems to get in my way at some point. It's pretty close to what I want, and the latest version (I got the trial from MS) seems better than previous ones in terms of usability, but I really don't want to plunk down that sort of cash for a mediocre product. I've tried Dia and Inkscape, and while both were initially promising and had the right price tag, I found them lacking in several ways (including some recurring bugs). I've toyed with getting Adobe Illustrator, but I've never used it before, and I have a feeling it wouldn't handle the diagramming aspect very well; I don't want to buy a copy just to find out it doesn't meet my needs. So far, the product I've had the most success with is, sadly, OpenOffice Draw. It's free of course (which lowers my expectations and thus improves my view of it) and its usability is pretty good, but in the end I'd like something more suited to diagramming. I'm willing to spend real money (in the $500-$1K range) for a really good piece of software if it does everything I want. The front runner is of course Visio, but I'm hoping for more. Does anybody have any recommendations?
    CONCLUSION: @dlamblin had the most informative post, but the part I gained the most from was the mention (by dlamblin and others) of OmniGraffle, not Gliffy. I gave Gliffy a try, and it seemed neat for occasional use, but since it's a Flash app (note: not AJAX as dlamblin mentioned) it's still a bit of a pain to use (no keyboard shortcuts for copy/paste was pretty much a deal breaker for me). I also tried SmartDraw, but it had three strikes against it:
    1. The trial period was only 7 days long.
    2. It used a nonstandard (and visually jarring) GUI widget toolkit for its UI. At the very least that makes me suspicious (how do I know it will actually work with and support standard Windows features?).
    3. It crashed on me early in my trial.
    OmniGraffle looks like exactly what I want... except that it's Mac-only (so I couldn't give it a try). However, it got good reviews from my Mac-owning coworker, and I hope to try it on a friend's Mac soon. If it's good enough I might spring for a new MacBook.

    Read the article

  • Database version control resources

    - by Wes McClure
    In the process of creating my own DB VCS tool, tsqlmigrations.codeplex.com, I ran into several good resources that guided me along the way, both in reviewing existing offerings and in working out the concepts a good DB VCS needs. This is my list of helpful links that others can use to understand some of the concepts and some of the tools in existence. In the next few posts I will try to explain how I used these to create TSqlMigrations.

    Blog entries:
    - Three rules for database work (K. Scott Allen): http://odetocode.com/blogs/scott/archive/2008/01/30/three-rules-for-database-work.aspx
    - Versioning databases - the baseline: http://odetocode.com/blogs/scott/archive/2008/01/31/versioning-databases-the-baseline.aspx
    - Versioning databases - change scripts: http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-change-scripts.aspx
    - Versioning databases - views, stored procedures and the like: http://odetocode.com/blogs/scott/archive/2008/02/02/versioning-databases-views-stored-procedures-and-the-like.aspx
    - Versioning databases - branching and merging: http://odetocode.com/blogs/scott/archive/2008/02/03/versioning-databases-branching-and-merging.aspx
    - Evolutionary Database Design (Martin Fowler): http://martinfowler.com/articles/evodb.html
    - Are database migration frameworks worth the effort? (good challenges): http://www.ridgway.co.za/archive/2009/01/03/are-database-migration-frameworks-worth-the-effort.aspx
    - Continuous Integration (in general): http://martinfowler.com/articles/continuousIntegration.html and http://martinfowler.com/articles/originalContinuousIntegration.html
    - Is Your Database Under Version Control?: http://www.codinghorror.com/blog/archives/000743.html
    - 11 Tools for Database Versioning: http://secretgeek.net/dbcontrol.asp
    - How to do database source control and builds: http://mikehadlow.blogspot.com/2006/09/how-to-do-database-source-control-and.html
    - .Net Database Migration Tool Roundup: http://flux88.com/blog/net-database-migration-tool-roundup/

    Books:
    - Refactoring Databases: Evolutionary Database Design - part of the Martin Fowler signature series, on refactoring databases. Book site: http://databaserefactoring.com/
    - Recipes for Continuous Database Integration: Evolutionary Database Development (Digital Short Cut) - a good question-and-answer treatment of common problems and solutions with database version control: http://www.informit.com/store/product.aspx?isbn=032150206X
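    To make the "change scripts" idea from the posts above concrete, here is a minimal sketch of a migration runner: numbered .sql files are applied in order, and each applied version is recorded in a SchemaVersions table. SQLite stands in for SQL Server here, and the table name and file-naming scheme are illustrative assumptions, not TSqlMigrations' actual format:

        #!/usr/bin/env python
        # Minimal change-script runner: apply migrations/NNN_*.sql in order,
        # recording applied versions so each script runs exactly once.
        import glob
        import os
        import sqlite3   # stand-in engine; the articles target SQL Server

        conn = sqlite3.connect("app.db")
        conn.execute("CREATE TABLE IF NOT EXISTS SchemaVersions"
                     " (version INTEGER PRIMARY KEY)")
        applied = {row[0] for row in
                   conn.execute("SELECT version FROM SchemaVersions")}

        for path in sorted(glob.glob("migrations/*.sql")):
            # e.g. migrations/003_add_orders.sql -> version 3
            version = int(os.path.basename(path).split("_", 1)[0])
            if version in applied:
                continue
            with open(path) as f:
                conn.executescript(f.read())   # run the change script
            conn.execute("INSERT INTO SchemaVersions (version) VALUES (?)",
                         (version,))
            conn.commit()

    The zero-padded file prefixes keep lexicographic and numeric order in agreement, which is what lets a plain sorted() stand in for real dependency ordering.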

    Read the article

  • Are programmers bad testers?

    - by jhsowter
    I know this sounds a lot like other questions that have already been asked, but it is actually slightly different. It seems to be generally accepted that programmers are not good at performing the role of testing an application. For example, from Joel on Software - Top Five (Wrong) Reasons You Don't Have Testers (emphasis mine): "Don't even think of trying to tell college CS graduates that they can come work for you, but 'everyone has to do a stint in QA for a while before moving on to code'. I've seen a lot of this. Programmers do not make good testers, and you'll lose a good programmer, who is a lot harder to replace." And in this question, one of the most popular answers says (again, my emphasis): "Developers can be testers, but they shouldn't be testers. Developers tend to unintentionally/unconsciously avoid using the application in ways that might break it. That's because they wrote it and mostly test it in the way it should be used." So the question is: are programmers bad at testing? What evidence or arguments are there to support this conclusion? Are programmers only bad at testing their own code? Is there any evidence to suggest that programmers are actually good at testing? What do I mean by "testing"? I do not mean unit testing or anything that is considered part of the methodology used by the software team to write software. I mean some kind of quality-assurance method that is used after the code has been built and deployed to whatever that software team would call the "test environment."

    Read the article
