Search Results

Search found 1309 results on 53 pages for 'jeff miller'.

Page 11/53 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Simple method for reliably detecting code in text?

    - by Jeff Atwood
    GMail has this feature where it will warn you if you try to send an email that it thinks might have an attachment. Because GMail detected the string see the attached in the email, but no actual attachment, it warns me with an OK / Cancel dialog when I click the Send button. We have a related problem on Stack Overflow. That is, when a user enters a post like this one: my problem is I need to change the database but I don't won't to create a new connection. example: DataSet dsMasterInfo = new DataSet(); Database db = DatabaseFactory.CreateDatabase("ConnectionString"); DbCommand dbCommand = db.GetStoredProcCommand("uspGetMasterName"); This user did not format their code as code! That is, they didn't indent by 4 spaces per Markdown, or use the code button (or the keyboard shortcut ctrl+k) which does that for them. Thus, our system is accreting a lot of edits where people have to go in and manually format code for people that are somehow unable to figure this out. This leads to a lot of bellyaching. We've improved the editor help several times, but short of driving over to the user's house and pressing the correct buttons on their keyboard for them, we're at a loss to see what to do next. That's why we are considering a Google GMail style warning: Did you mean to post code? You wrote stuff that we think looks like code, but you didn't format it as code by indenting 4 spaces, using the toolbar code button or the ctrl+k code formatting command. However, presenting this warning requires us to detect the presence of what we think is unformatted code in a question. What is a simple, semi-reliable way of doing this? Per Markdown, code is always indented by 4 spaces or within backticks, so anything correctly formatted can be discarded from the check immediately. This is only a warning and it will only apply to low-reputation users asking their first questions (or providing their first answers), so some false positives are OK, so long as they are about 5% or less. Questions on Stack Overflow can be in any language, though we can realistically limit our check to, say, the "big ten" languages. Per the tags page that would be C#, Java, PHP, JavaScript, Objective-C, C, C++, Python, Ruby. Use the Stack Overflow creative commons data dump to audit your potential solution (or just pick a few questions in the top 10 tags on Stack Overflow) and see how it does. Pseudocode is fine, but we use c# if you want to be extra friendly. The simpler the better (so long as it works). KISS! If your solution requires us to attempt to compile posts in 10 different compilers, or an army of people to manually train a bayesian inference engine, that's ... not exactly what we had in mind.
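
    Since the question explicitly invites pseudocode or C#, here is a rough sketch of the kind of line-based heuristic being asked for. The token list and thresholds are invented starting points, not something audited against the data dump, and would need tuning on real posts exactly as the question suggests:

    using System;
    using System.Linq;

    public static class CodeDetector
    {
        // Tokens that are rare in prose but common across the "big ten" languages.
        private static readonly string[] CodeTokens =
        {
            "();", "){", "};", "=>", "==", "!=", "&&", "||", "</", "/>",
            "#include", "public ", "var ", "def ", "function "
        };

        // A line that is already indented four spaces or backticked is formatted
        // per Markdown and can be ignored, as the question notes.
        private static bool LooksLikeCode(string line)
        {
            if (line.StartsWith("    ") || line.TrimStart().StartsWith("`"))
                return false;

            int tokenHits = CodeTokens.Count(t => line.Contains(t));
            int symbolCount = line.Count(c => "{}();<>=".IndexOf(c) >= 0);
            return tokenHits >= 1 || symbolCount >= 5; // invented thresholds, tune on real posts
        }

        // Warn when a noticeable share of the post's non-blank lines trip the heuristic.
        public static bool ShouldWarn(string post)
        {
            var lines = post.Split('\n').Where(l => l.Trim().Length > 0).ToList();
            if (lines.Count == 0)
                return false;
            double ratio = (double)lines.Count(LooksLikeCode) / lines.Count;
            return ratio >= 0.2;
        }
    }

    Whether a sketch like this stays inside the 5% false-positive budget can only be established the way the question describes: by running it against a sample of questions in the top tags.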

    Read the article

  • OOP for unit testing : The good, the bad and the ugly

    - by Jeff
    I recently read Miško Hevery's PDF guide to writing testable code, which says that you should limit the object instantiations in your constructors. I understand why: it lets you easily mock the objects that are passed as parameters to your class. But when it comes to writing actual code, I often end up with something like this (the example is PHP using Zend Framework, but I think it's self-explanatory):

    class Some_class {
        private $_data;
        private $_options;
        private $_locale;

        public function __construct($data, $options = null) {
            $this->_data = $data;
            if ($options != null) {
                $this->_options = $options;
            }
            $this->_init();
        }

        private function _init() {
            if (isset($this->_options['locale'])) {
                $locale = $this->_options['locale'];
                if ($locale instanceof Zend_Locale) {
                    $this->_locale = $locale;
                } elseif (Zend_Locale::isLocale($locale)) {
                    $this->_locale = new Zend_Locale($locale);
                } else {
                    $this->_locale = new Zend_Locale();
                }
            }
        }
    }

    According to my understanding of Miško Hevery's guide, I shouldn't instantiate the Zend_Locale inside my class but should push it in through the constructor (which can be done through the options array in my example). I am wondering what the best practice would be to get the most flexibility for unit testing this code, and also in case I want to move away from Zend Framework. Thanks in advance.
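
    A minimal sketch of the constructor-injection idea in play here, shown in C# rather than PHP; the interface and class names are hypothetical and not part of Zend Framework or the original question. The collaborator is supplied from outside, so a unit test can hand in a mock instead of letting the class build its own locale object:

    // Hypothetical abstraction over whatever the locale-aware work is.
    public interface ILocaleProvider
    {
        string Format(decimal value);
    }

    public class SomeClass
    {
        private readonly object _data;
        private readonly ILocaleProvider _locale;

        // The dependency is pushed in through the constructor instead of being
        // new'd up inside an _init(); a default keeps the call sites simple.
        public SomeClass(object data, ILocaleProvider locale = null)
        {
            _data = data;
            _locale = locale ?? new DefaultLocaleProvider();
        }

        public string Describe(decimal value)
        {
            return _locale.Format(value);
        }
    }

    public class DefaultLocaleProvider : ILocaleProvider
    {
        public string Format(decimal value)
        {
            return value.ToString("C");
        }
    }

    In the Zend case, the equivalent is accepting a ready-made Zend_Locale through the constructor (or the options array) and only falling back to new Zend_Locale() when nothing was passed in.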

    Read the article

  • Open source adventures with... wait for it... Microsoft

    - by Jeff
    Last week, Microsoft announced that it was going to open source the rest of the ASP.NET MVC Web stack. The core MVC framework has been open source for a long time now, but the other pieces around it are also now out in the wild. Not only that, but it's not what I call "big bang" open source, where you release the source with each version. No, they're actually committing in real time to a public repository. They're also taking contributions where it makes sense. If that weren't exciting enough, CodePlex, which used to be a part of the team I was on, has been re-org'd to a different part of the company where it is getting the love and attention (and apparently money) that it deserves. For a period of several months, I lobbied to get a PM gig with that product, but got nowhere. A year and a half later, I'm happy to see it finally treated right. In any case, I found a bug in Razor, the rendering engine, before the beta came out. I informally sent the bug info to some people, but it wasn't fixed for the beta. Now, with the project being developed in the open, I was able to submit the issue, and went back and forth with the developer who wrote the code (I met him once at a meet up in Bellevue, I think), and he committed a fix. I tried it a day later, and the bug was gone. There's a lot to learn from all of this. That open source software is surprisingly efficient and often of high quality is one part of it. For me the win is that it demonstrates how open and collaborative processes, as light as possible, lead to better software. In other words, even if this were a project being developed internally, at a bank or something, getting stakeholders involved early and giving people the ability to respond leads to awesomeness. While there is always a place for big thinking, experience has shown time and time again that trying to figure everything out up front takes too long, and rarely meets expectations. This is a lesson that probably half of Microsoft has yet to learn, including the team I was on before I split. It's the reason that team still hasn't shipped anything to general availability. But I've seen what an open and iterative development style can do for teams, at Microsoft and other places that I've worked. When you can have a conversation with people, and take ideas and turn them into code quickly, you're winning. So why don't people like winning? I think there are a lot of reasons, and they can generally be categorized into fear, skepticism and bad experiences. I can't give the Web stack teams enough credit. Not only did they dream big, but they changed a culture that often seems immovable and hopelessly stuck. This is a very public example of this culture change, but it's starting to happen at every scale in Microsoft. It's really interesting to see in a company that has been written off as dead the last decade.

    Read the article

  • SQL ADO.NET shortcut extensions (old school!)

    - by Jeff
    As much as I love me some ORMs (I've used LINQ to SQL quite a bit, and for the MSDN/TechNet Profile and Forums we're using NHibernate more and more), there are times when it's appropriate, and in some ways simpler, to just throw up some old school ADO.NET connections, commands, readers and such. It still feels like a pain, though, to new up all the stuff, make sure it's closed, blah blah blah. It's pretty much my least favorite task in writing data access code. To minimize the pain, I have a set of extension methods that I like to use that drastically reduce the code you have to write. Here they are...

    public static void Using(this SqlConnection connection, Action<SqlConnection> action)
    {
        connection.Open();
        action(connection);
        connection.Close();
    }

    public static SqlCommand Command(this SqlConnection connection, string sql)
    {
        var command = new SqlCommand(sql, connection);
        return command;
    }

    public static SqlCommand AddParameter(this SqlCommand command, string parameterName, object value)
    {
        command.Parameters.AddWithValue(parameterName, value);
        return command;
    }

    public static object ExecuteAndReturnIdentity(this SqlCommand command)
    {
        if (command.Connection == null)
            throw new Exception("SqlCommand has no connection.");
        command.ExecuteNonQuery();
        command.Parameters.Clear();
        command.CommandText = "SELECT @@IDENTITY";
        var result = command.ExecuteScalar();
        return result;
    }

    public static SqlDataReader ReadOne(this SqlDataReader reader, Action<SqlDataReader> action)
    {
        if (reader.Read())
            action(reader);
        reader.Close();
        return reader;
    }

    public static SqlDataReader ReadAll(this SqlDataReader reader, Action<SqlDataReader> action)
    {
        while (reader.Read())
            action(reader);
        reader.Close();
        return reader;
    }

    It has been a while since I've really revisited these, so you will likely find opportunity for further optimization. The bottom line here is that you can chain together a bunch of these methods to make a much more concise database call, in terms of the code on your screen, anyway. Here are some examples:

    public Dictionary<string, string> Get()
    {
        var dictionary = new Dictionary<string, string>();
        _sqlHelper.GetConnection().Using(connection =>
            connection.Command("SELECT Setting, [Value] FROM Settings")
                .ExecuteReader()
                .ReadAll(r => dictionary.Add(r.GetString(0), r.GetString(1))));
        return dictionary;
    }

    or...

    public void ChangeName(User user, string newName)
    {
        _sqlHelper.GetConnection().Using(connection =>
            connection.Command("UPDATE Users SET Name = @Name WHERE UserID = @UserID")
                .AddParameter("@Name", newName)
                .AddParameter("@UserID", user.UserID)
                .ExecuteNonQuery());
    }

    The _sqlHelper.GetConnection() is just some other code that gets a connection object for you. You might have an even cleaner way to take that step out entirely. This looks more fluent, and the real magic sauce for me is the reader bits, where you can put any kind of arbitrary method in there to iterate over the results.
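
    One caveat worth noting about these helpers, offered as a sketch rather than as part of the post: Using as written won't close the connection if the callback throws, and ReadOne/ReadAll leave the reader open on an exception for the same reason. A variant with the same call sites that always cleans up could look like this (the class name is arbitrary):

    using System;
    using System.Data.SqlClient;

    public static class SqlExtensionsSafety
    {
        // Same shape as the Using extension above, but the connection is
        // closed even when the callback throws.
        public static void Using(this SqlConnection connection, Action<SqlConnection> action)
        {
            try
            {
                connection.Open();
                action(connection);
            }
            finally
            {
                connection.Close();
            }
        }

        // Reader variant: dispose the reader no matter what the callback does.
        // (Returns void instead of the closed reader, which nothing should use anyway.)
        public static void ReadAll(this SqlDataReader reader, Action<SqlDataReader> action)
        {
            using (reader)
            {
                while (reader.Read())
                    action(reader);
            }
        }
    }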

    Read the article

  • Webcast: New Features of Solaris 11.1 and Solaris Cluster 4.1

    - by Jeff Victor
    If you missed last week's webcast of the new features in Oracle Solaris 11.1, you can view the recording. The speakers discuss changes that improve performance and scalability, particularly for Oracle DB, and many other enhancements. New features include Optimized Shared Memory (improves DB startup time), accelerated kernel locks (improves Oracle RAC performance and scalability), virtual memory improvements, a DTrace data collector in the DB, Zones installed on Shared Storage (simplifies migration), Data Center Bridging, and Edge Virtual Bridging. To view the archived webcast, you must register and use the URL that you receive in e-mail.

    Read the article

  • local install of wp site brought down from host - home page is ok but other pages redirect to wamp config page

    - by jeff
    This is a local install of a WordPress site brought down from the host: the home page is OK, but other pages redirect to the WAMP config page. I copied all the files from the host into the www directory under my local WAMP install. I exported the database from the host, loaded it into a new local database, and used this tool to adjust site_on_web.com to "localhost/site_on_local". Now the home page works great and I can log in to the admin page, but when I click on the Reservations page (and other pages of the site), the site just goes to the WAMP server config page, even though the URL shows correctly as localhost/site_on_local/reservations. My .htaccess file is this:

    # BEGIN WordPress
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    </IfModule>
    # END WordPress

    The rewrite module is checked in the WAMP Apache modules setting. If I uncheck the rewrite module, or clear out the whole .htaccess file, then the pages just go to "Not Found: The requested URL /ritas041214/about-ritas/ was not found on this server." Please help; I am unsure now about my process for moving the site between local and host, and without this I am lost...

    Read the article

  • Recruiters intentionally present one good candidate for an available job

    - by Jeff O
    Maybe they do it without realizing it. The recruiter's goal is to fill the job as soon as possible. I even think they feel it is in their best interest that the candidate be qualified, so I'm not trying to knock recruiters. But aren't they better off presenting three candidates, one of whom clearly stands out? The last thing they want from their client is a need to extend the interview process because the client can't decide. If the client doesn't like any of them, you just bring on your next good candidate. This way they hedge their bet a little. Any experience or insight with this, or has anyone ever heard a head-hunter admit it? Does it make sense? There has to be a reason why they choose such unqualified people. I've seen jobs posted that clearly state they want someone with a CS degree, and the recruiter doesn't take it literally. I don't have a CS degree or Java experience, and still they think I'm a possible fit.

    Read the article

  • Why to avoid SELECT * from tables in your Views

    - by Jeff Smith
    -- clean up any messes left over from before:
    if OBJECT_ID('AllTeams') is not null
      drop view AllTeams
    go
    if OBJECT_ID('Teams') is not null
      drop table Teams
    go

    -- sample table:
    create table Teams
    (
      id int primary key,
      City varchar(20),
      TeamName varchar(20)
    )
    go

    -- sample data:
    insert into Teams (id, City, TeamName)
    select 1,'Boston','Red Sox' union all
    select 2,'New York','Yankees'
    go

    create view AllTeams
    as
      select * from Teams
    go

    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees

    -- Now, add a new column to the Teams table:
    alter table Teams add League varchar(10)
    go

    -- put some data in there:
    update Teams set League='AL'

    -- run it again
    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees
    -- Notice that League is not displayed!

    -- Here's an even worse scenario, when the table gets altered in ways beyond adding columns:
    drop table Teams
    go

    -- recreate table putting the League column before the City:
    -- (i.e., simulate re-ordering and/or inserting a column)
    create table Teams
    (
      id int primary key,
      League varchar(10),
      City varchar(20),
      TeamName varchar(20)
    )
    go

    -- put in some data:
    insert into Teams (id,League,City,TeamName)
    select 1,'AL','Boston','Red Sox' union all
    select 2,'AL','New York','Yankees'

    -- Now, Select again for our view:
    select * from AllTeams

    --Results:
    --
    --id          City       TeamName
    ------------- ---------- --------------------
    --1           AL         Boston
    --2           AL         New York
    -- The column labeled "City" in the View is actually the League, and the column labelled TeamName is actually the City!
    go

    -- clean up:
    drop view AllTeams
    drop table Teams

    Read the article

  • Embedded Model Designing -- top down or bottom up?

    - by Jeff
    I am trying to learn RoR and develop a webapp. I have a few models I have thought of for this app, and they are fairly embedded. For example (please excuse my lack of RoR syntax):

    Model: textbook
      title:string
      type:string
      has_many :chapters

    Model: chapter
      content:text
      has_one :review_section

    Model: review_section
      title:string
      has_many :questions
      has_many :answers, through: :questions

    Model: questions
      ...

    Model: answers
      ...

    My question is: with the example I gave, should I start at the top model (textbook) or the bottom-most (answers)?

    Read the article

  • Rocky Mountain Tech Trifecta v3.0

    - by Jeff Certain
    The Rocky Mountain Tech Trifecta is an annual event held in Denver in late February or early March. The last couple of these have been amazing events, with great speakers like Beth Massi, Scott Hanselman, David Yack, Kathleen Dollard, Ben Hoelting, Paul Nielsen… need I go on? Registration is open at http://www.rmtechtrifecta.com. The speaker list hasn’t been finalized, but it’s sure to be another great event. Don’t miss it!

    Read the article

  • .NET development on a Retina MacBook Pro with Windows 8

    - by Jeff
    I remember sitting in Building 5 at Microsoft with some of my coworkers, when one of them came in with a shiny new 11” MacBook Air. It was nearly two years ago, and we found it pretty odd that the OEM’s building Windows machines sucked at industrial design in a way that defied logic. While Dell and HP were in a race to the bottom building commodity crap, Apple was staying out of the low-end market completely, and focusing on better design. In the process, they managed to build machines people actually wanted, and maintain an insanely high margin in the process. I stopped buying the commodity crap and custom builds in 2006, when Apple went Intel. As a .NET guy, I was still in it for Microsoft’s stack of development tools, which I found awesome, but had back to back crappy laptops from HP and Dell. After that original 15” MacBook Pro, I also had a Mac Pro tower (that I sold after three years for $1,500!), a 27” iMac, and my favorite, a 17” MacBook Pro (the unibody style) with an SSD added from OWC. The 17” was a little much to carry around because it was heavy, but it sure was nice getting as much as eight hours of battery life, and the screen was amazing. When the rumors started about a 15” model with a “retina” screen inspired by the Air, I made up my mind I wanted one, and ordered it the day it came out. I sold my 17”, after three years, for $750 to a friend who is really enjoying it. I got the base model with the upgrade to 16 gigs of RAM. It feels solid for being so thin, and if you’ve used the third generation iPad or the newer iPhone, you’ll be just as thrilled with the screen resolution. I’m typically getting just over six hours of battery life while running a VM, but Parallels 8 allegedly makes some power improvements, so we’ll see what happens. (It was just released today.) The nice thing about VM’s are that you can run more than one at a time. Primarily I run the Windows 8 VM with four cores (the laptop is quad-core, but has 8 logical cores due to hyperthreading or whatever Intel calls it) and 8 gigs of RAM. I also have a Windows Server 2008 R2 VM I spin up when I need to test stuff in a “real” server environment, and I give it two cores and 4 gigs of RAM. The Windows 8 VM spins up in about 8 seconds. Visual Studio 2012 takes a few more seconds, but count part of that as the “ReSharper tax” as it does its startup magic. The real beauty, the thing I looked most forward to, is that beautifully crisp C# text. Consolas has never looked as good as it does at 10pt. as it does on this display. You know how it looks great at 80pt. when conference speakers demo stuff on a projector? Think that sharpness, only tiny. It’s just gorgeous. Beyond that, everything is just so responsive and fast. Builds of large projects happen in seconds, hundreds of unit tests run in seconds… you just don’t spend a lot of time waiting for stuff. It’s kind of painful to go back to my 27” iMac (which would be better if I put an SSD in it before its third birthday). Are there negatives? A few minor issues, yes. As is the case with OS X, not everything scales right. You’ll see some weirdness at times with splash screens and icons and such. Chrome’s text rendering (in Windows) is apparently not aware of how to deal with higher DPI’s, so text is fuzzy (the OS X version is super sharp, however). You’ll also have to do some fiddling with keyboard settings to use the Windows 8 keyboard shortcuts. Overall, it’s as close to a no-compromise development experience as I’ve ever had. 
I’m not even going to bother with Boot Camp because the VM route already exceeds my expectations. You definitely get what you pay for. If this one also lasts three years and I can turn around and sell it, it’s worth it for something I use every day.

    Read the article

  • Developer – Cross-Platform: Fact or Fiction?

    - by Pinal Dave
    This is a guest blog post by Jeff McVeigh. Jeff McVeigh is the general manager of Performance Client and Visual Computing within Intel’s Developer Products Division. His team is responsible for the development and delivery of leading software products for performance-centric application developers spanning Android*, Windows*, and OS X* operating systems. During his 17-year career at Intel, Jeff has held various technical and management positions in the fields of media, graphics, and validation. He also served as the technical assistant to Intel’s CTO. He holds 20 patents and a Ph.D. in electrical and computer engineering from Carnegie Mellon University. It’s not a homogenous world. We all know it. I have a Windows* desktop, a MacBook Air*, an Android phone, and my kids are 100% Apple. We used to have 2.5 kids, now we have 2.5 devices. And we all agree that diversity is great, unless you’re a developer trying to prioritize the limited hours in the day. Then it’s a series of trade-offs. Do we become brand loyalists for Google or Apple or Microsoft? Do we specialize on phones and tablets or still consider the 300M+ PC shipments a year when we make our decisions on where to spend our time and resources? We weigh the platform options, monetization opportunities, APIs, and distribution models. Too often, I see developers choose one platform, or write to the lowest common denominator, which limits their reach and market success. But who wants to be “me too”? Cross-platform coding is possible in some environments, for some applications, for some level of innovation—but it’s not all-inclusive, yet. There are some tricks of the trade to develop cross-platform, including using languages and environments that “run everywhere.” HTML5 is today’s answer for web-enabled platforms. However, it’s not a panacea, especially if your app requires the ultimate performance or native UI look and feel. There are other cross-platform frameworks that address the presentation layer of your application. But for those apps that have a preponderance of native code (e.g., highly-tuned C/C++ loops), there aren’t tons of solutions today to help with code reuse across these platforms using consistent tools and libraries. As we move forward with interim solutions, they’ll improve and become more robust, based, in no small part, on our input. What’s your answer to the cross-platform challenge? Are you fully invested in HTML5 now? What are your barriers? What’s your vision to navigate the cross-platform landscape? Here is the link where you can head next and learn more about how to answer the questions I have asked: https://software.intel.com/en-us Republished with permission from here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Intel

    Read the article

  • POP Forums will be at Mix!

    - by Jeff
    If you've never been to Mix, you're missing out on what is arguably one of the best conferences that Microsoft does. I'm not just saying that because I work here... I felt that way before, having been to most of them. The breadth of people and disciplines makes it a really exciting event that pushes it well beyond the "Redmond bubble," as I like to call it. You should go. In any case, there's an Open Source Fest happening the night before Mix starts, on Monday, from 6 to 9 p.m. There will be people there representing a ton of great projects, some as enormous as Umbraco, as well as people doing SDKs, controls and other neat stuff. Best of all, you get to vote for your favorites. Unless your favorite is Orchard, because Microsoft is sponsoring that directly. Or if it's POP Forums, not because Microsoft is sponsoring it, but because that's where I work in my day job. No prizes for me! Come by and say hello. I think the app will be nearly final by then, and it's already running on MouseZoom, one of my little side projects. The quality and diversity of open source projects around the Microsoft stack just keeps getting better. Our platform is also pretty great at running stuff we don't make. This will be a pretty exciting Mix. Can't wait!

    Read the article

  • Is it time to drop Courier from your monospace font stacks?

    - by Jeff
    I've been fine-tuning my font stacks lately and was wondering: is it safe to drop Courier from my monospace font stack yet? Would you feel comfortable dropping it? Of course, monospace is my final fallback.

    Note 1: OS testbed: WinXP, WinVista, Win7, iPhone, iPad. Based on my research, these browsers now substitute Courier New for Courier by default: IE9+, Chrome 2+, Firefox 10+, Safari 3.1+, iDevices.

    Note 2: The default "font-family: monospace;" renders as Courier New in every browser I've tested, from IE6 through the latest iPhone/iPad devices. EDIT: One exception is Opera 12, which renders Consolas on Windows; Opera 10 renders Courier New.

    Note 3: I've noticed that Courier refuses to render with any font smoothing (anti-aliasing) in any browser I've tested, regardless of system and/or browser display settings, probably because it's an old bitmap font. This could be because of my system setup, however.

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.) I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try and outline what I see as some of the problems, and hopefully offer my views on how to address them. The recruiting problem I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real life achievements, followed by some variation on white boarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash"). The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they never have, and never will do in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to white board some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of waste time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you. But here's the real reason why the second experience is wrong: You're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C#, or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem. 
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be life-long learners. The contractor problem I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, there's little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors. The education problem I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a masters degree who sucked at writing code, next to the high school diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn. The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers. 
I'm hell bent on passing that experience to others. Process issues If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a white board with sticky notes and start calling yourself agile, it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery, now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff. The prognosis I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Website misclassified by websense

    - by Jeff Atwood
    I received the following email from a user of one of our websites: This morning I tried to log into example.com and I was blocked by websense at work because it is considered a "social networking" site or something. I assume the websense filter is maintained by a central location, so I'm hoping that by letting you guys know you can get it unblocked. per Wikipedia, Websense is web filtering or Internet content-control software. This means one (or more) of our sites is being miscategorized by websense as "social networking" and thus disallowed for access at any workplace that uses websense to control what websites their users can and cannot access during work hours. (I know, they are monsters!) How do we dispute this websense classification error, as our websites should generally be considered "information technology" and never "social networking"? How do we know what category websense has put our sites in, so we can pro-actively make sure they're not wrong?

    Read the article

  • How do you demo software with No UI in the Sprint Review?

    - by Jeff Martin
    We are doing agile software development, basically following Scrum. We are trying to do sprint reviews but finding it difficult. Our software is doing a lot of data processing and the stories often are about changing various rules around this. What are some options for demoing the changes that occurred in the sprint when there isn't a UI or visible workflow change, but instead the change is a subtle business rule on a processing job that can take 10s of minutes or even a couple of hours?

    Read the article

  • Page appears indexed in Google but not findable for any search terms?

    - by Jeff Atwood
    (Note that I am going to use screenshots here because I suspect writing about this will change the behavior over time.) If you do a Google search for uiviewcontroller best practices either with or without the quotes, you end up with results like this: Note that none of these pages resolve to the actual Stack Overflow question containing those words in the title. They resolve to either a) sites that are mirroring our creative commons data and correctly pointing back to the source question without nofollow, as properly specified by our attribution requirements or b) our own internal links to the question, but not the actual question itself. The actual page with the title ... Custom UIView and UIViewController best practices? ... does exist at this URL ... http://stackoverflow.com/questions/3300183/custom-uiview-and-uiviewcontroller-best-practices ... and apparently it is present in Google's index! But why does it not appear when we search for uiviewcontroller best practices ? We know that Google contains this page in its index Our search terms match the title of the question Stack Overflow has much higher pagerank than the other sites that are mirroring this question under Creative Commons I don't get it. What are we doing wrong here?

    Read the article

  • Standard ratio of cookies to "visitors"?

    - by Jeff Atwood
    As noted in a recent blog post, We see a large discrepancy between Google Analytics "visitors" and Quantcast "visitors". Also, for reasons we have never figured out, Google Analytics just gets larger numbers than Quantcast. Right now GA is showing more visitors (15 million) on stackoverflow.com alone than Quantcast sees on the whole network (14 million): Why? I don’t know. Either Google Analytics loses cookies sometimes, or Quantcast misses visitors. Counting is an inexact science. We think this is because Quantcast uses a more conservative ratio of cookies-to-visitors. Whereas Google Analytics might consider every cookie a "visitor", Quantcast will only consider every 1.24 cookies a "visitor". This makes sense to me, as people may access our sites from multiple computers, multiple browsers, etcetera. I have two closely related questions: Is there an accepted standard ratio of cookies to visitors? This is obviously an inexact science, but is there any emerging rule of thumb? Is there any more accurate way to count "visitors" to a website other than relying on browser cookies? Or is this just always going to be kind of a best-effort estimation crapshoot no matter how you measure it?

    Read the article

  • Renewed

    - by Jeff Certain
    I just got a nice little e-mail from Microsoft. Despite the timing, it’s not an April Fools joke… I’ve been renewed as an MVP for another year. Congrats to all the other MVPs being renewed today.

    Read the article

  • AJI Software is now a Microsoft Gold Application Lifecycle Management (ALM) Partner

    - by Jeff Julian
    Our team at AJI Software has been hard at work over the past year on certifications and projects that have allowed us to reach Gold Partner status in the Microsoft Partner Program. We have focused on providing services that not only assist in custom software development, but also in process analysis and mentoring. I definitely want to thank each one of our team members for all their work. We are currently the only Microsoft Gold ALM Partner within a 500-mile radius of Kansas City. If you or your team is in need of assistance with Team Foundation Server, Agile processes, Scrum mentoring, or just a process/team assessment, please feel free to give us a call. We also have practices focused on SharePoint, mobile development (iOS, Android, Windows Mobile), and custom software development with .NET. Technorati Tags: Gold Partner, ALM, Scrum, TFS, AJI Software

    Read the article

  • Update

    - by Jeff Certain
    This blog has been pretty quiet for a year now. There's a few reasons for that. Probably the biggest reason is that I view this as a space where I talk about .NET things. Or software development. While I've been doing the latter for the past year, I haven't been doing the former.Yes, I took a trip to the dark side. I started with Ning 11 months ago, in Palo Alto, CA. I had the chance to work with an incredibly talented group of software engineers... in PHP and Java.That was definitely an eye-opening experience, in terms of technology, process, and culture. It was also a pretty good example of how acquisitions can get interesting. I'll talk more about this, I'm sure.Last week, I started with a company called Dynamic Signal. I'm a "Back End Engineer" now. Also a very talented team of people, and I'm delighted to be working with them. We're a Microsoft shop. After a year away, I'm very happy to be back. Coming back to .NET is an easy transition, and one that has me being fairly productive straight out of the gate.(Some of you may have noticed, my last post was more than a year ago. Yes, it's safe to infer that I didn't get renewed as an MVP. Fair deal; I didn't do nearly as much this year as I have in the past. I'll be starting to speak again shortly, and hope to be re-awarded soon.)At any rate, now that I'm back in the .NET space, you can expect to hear more from me soon!

    Read the article

  • Is there such thing as a "theory of system integration"?

    - by Jeff
    There is a plethora of different programs, servers, and, in general, technologies in use in organizations today. We programmers have lots of different tools at our disposal to help solve various data and communication challenges in an organization. Does anyone know if anyone has done any serious thinking about how systems are integrated? Let me give an example: hypothetically, let's say I own a company that makes specialized suits à la Iron Man. In the area of production, I have CAD tools, machining tools, payroll, project management, and asset management tools, to name a few. I also have a nice design space, where designers show off their designs on big displays, some touch, some traditional. Oh, and I also have one of these newfangled LEED Platinum buildings, and it has a number of different computer-controlled systems, like smart window shutters that close when people are in the room, an HVAC system that adjusts depending on the number of people in the building, etc. What I want to know is whether anyone has done any scientific work on trying to figure out how to hook all these pieces together, so that, say, my access control system is hooked to my payroll system and my phone system, allowing me never to swipe a time card and to have my phone follow me throughout the building. This problem is also more than a technology challenge. Every technology implementation enables certain human behaviors, so the human must also be considered as a part of the system. Has anyone done any work on how to effectively weave these components together? FYI: I am not trying to build a system. I want to know if anyone has thoroughly studied the process of doing a large integration project: how they develop their requirements, how they studied the human behaviors, etc.

    Read the article

  • Know a little of a lot or a lot of a little? [closed]

    - by Jeff V
    Possible Duplicate: Is it better to specialize in a single field I like, or expand into other fields to broaden my horizons? My buddy and I, who have been programming for 13 years or so, were talking this morning, and a question came up: is it better to know a little of a lot (i.e., web, desktop, VB.NET, C#, jQuery, PHP, Java, etc.) or a lot of a little (meaning an expert in something)? The context of this question is what makes someone a senior programmer. Is it someone who has been around the block a few times and has been in many different situations, or someone who is locked into a specific technology and is super knowledgeable in that one technology? I see pros and cons of both scenarios. Just wondering what others think.

    Read the article

  • A good way to build a game loop in OpenGL

    - by Jeff
    I'm currently beginning to learn OpenGL at school, and I started making a simple game the other day (on my own, not for school). I'm using freeglut and am building it in C, so for my game loop I had really just been using a function I wrote and passed to glutIdleFunc to update all the drawing and physics in one pass. This was fine for simple animations where I didn't care too much about the frame rate, but since the game is mostly physics-based, I really want to (need to) tie down how fast it's updating. So my first attempt was to have the function I pass to glutIdleFunc (myIdle()) keep track of how much time has passed since the previous call to it, and update the physics (and currently the graphics) every so many milliseconds. I used timeGetTime() to do this (via <windows.h>). And this got me thinking: is using the idle function really a good way of going about the game loop? My question is, what is a better way to implement the game loop in OpenGL? Should I avoid using the idle function?
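
    For the structural part of the question, a minimal sketch of a fixed-timestep loop with an accumulator, which is one common answer to "tie down how fast it's updating." It's written in C# to keep one language across the examples on this page; the same shape works in the C/freeglut case with the idle callback driving Tick() and Render() replaced by glutPostRedisplay(). The step size and method names are assumptions, not from the post:

    using System.Diagnostics;

    public class GameLoop
    {
        private const double FixedStepMs = 10.0; // physics tick length; an assumption, tune as needed
        private readonly Stopwatch _clock = Stopwatch.StartNew();
        private double _previousMs;
        private double _accumulatorMs;

        // Call this as often as possible, e.g. from an idle callback.
        public void Tick()
        {
            double nowMs = _clock.Elapsed.TotalMilliseconds;
            _accumulatorMs += nowMs - _previousMs;
            _previousMs = nowMs;

            // Run physics in fixed steps so the simulation does not depend on frame rate.
            while (_accumulatorMs >= FixedStepMs)
            {
                UpdatePhysics(FixedStepMs);
                _accumulatorMs -= FixedStepMs;
            }

            Render(); // draw once per tick, as often as the idle callback fires
        }

        private void UpdatePhysics(double dtMs)
        {
            // integrate velocities and positions by dtMs here
        }

        private void Render()
        {
            // issue draw calls, or request a redisplay in the GLUT case
        }
    }

    The key property is that physics advances in whole FixedStepMs increments regardless of how often the idle callback fires, so simulation speed doesn't drift with the frame rate.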

    Read the article
