Search Results

Search found 1181 results on 48 pages for 'jeff decker'.

Page 10 of 48

  • How do you demo software with No UI in the Sprint Review?

    - by Jeff Martin
    We are doing agile software development, basically following Scrum. We are trying to do sprint reviews but finding it difficult. Our software does a lot of data processing, and the stories are often about changing various rules around this. What are some options for demoing the changes that occurred in the sprint when there isn't a UI or visible workflow change, but instead the change is a subtle business rule on a processing job that can take tens of minutes or even a couple of hours?

    Read the article

  • Standard ratio of cookies to "visitors"?

    - by Jeff Atwood
    As noted in a recent blog post, we see a large discrepancy between Google Analytics "visitors" and Quantcast "visitors". Also, for reasons we have never figured out, Google Analytics just gets larger numbers than Quantcast. Right now GA is showing more visitors (15 million) on stackoverflow.com alone than Quantcast sees on the whole network (14 million). Why? I don’t know. Either Google Analytics loses cookies sometimes, or Quantcast misses visitors. Counting is an inexact science. We think this is because Quantcast uses a more conservative ratio of cookies to visitors: whereas Google Analytics might consider every cookie a "visitor", Quantcast will only consider every 1.24 cookies a "visitor". This makes sense to me, as people may access our sites from multiple computers, multiple browsers, etcetera. I have two closely related questions: Is there an accepted standard ratio of cookies to visitors? This is obviously an inexact science, but is there any emerging rule of thumb? And is there any more accurate way to count "visitors" to a website than relying on browser cookies? Or is this just always going to be a best-effort estimation crapshoot no matter how you measure it?
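
    The arithmetic in question is simple enough to sketch. A minimal C# illustration of the conversion being described (the 1.24 figure is the Quantcast ratio quoted above, not an accepted standard):

        using System;

        class VisitorEstimate
        {
            // Estimate unique visitors from raw cookie counts using a
            // cookies-per-visitor ratio; 1.24 is the figure attributed to Quantcast.
            static double EstimateVisitors(long cookies, double cookiesPerVisitor = 1.24)
                => cookies / cookiesPerVisitor;

            static void Main()
            {
                // 15 million cookie-based "visitors" shrink to roughly 12.1 million.
                Console.WriteLine($"{EstimateVisitors(15_000_000):N0}");
            }
        }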

    Read the article

  • Update

    - by Jeff Certain
    This blog has been pretty quiet for a year now. There are a few reasons for that. Probably the biggest reason is that I view this as a space where I talk about .NET things. Or software development. While I've been doing the latter for the past year, I haven't been doing the former. Yes, I took a trip to the dark side. I started with Ning 11 months ago, in Palo Alto, CA. I had the chance to work with an incredibly talented group of software engineers... in PHP and Java. That was definitely an eye-opening experience, in terms of technology, process, and culture. It was also a pretty good example of how acquisitions can get interesting. I'll talk more about this, I'm sure. Last week, I started with a company called Dynamic Signal. I'm a "Back End Engineer" now. Also a very talented team of people, and I'm delighted to be working with them. We're a Microsoft shop. After a year away, I'm very happy to be back. Coming back to .NET is an easy transition, and one that has me being fairly productive straight out of the gate. (Some of you may have noticed my last post was more than a year ago. Yes, it's safe to infer that I didn't get renewed as an MVP. Fair deal; I didn't do nearly as much this year as I have in the past. I'll be starting to speak again shortly, and hope to be re-awarded soon.) At any rate, now that I'm back in the .NET space, you can expect to hear more from me soon!

    Read the article

  • Renewed

    - by Jeff Certain
    I just got a nice little e-mail from Microsoft. Despite the timing, it’s not an April Fools joke… I’ve been renewed as an MVP for another year. Congrats to all the other MVPs being renewed today.

    Read the article

  • AJI Software is now a Microsoft Gold Application Lifecycle Management (ALM) Partner

    - by Jeff Julian
    Our team at AJI Software has been hard at work over the past year on certifications and projects that have allowed us to reach Gold Partner status in the Microsoft Partner Program. We have focused on providing services that assist not only in custom software development, but also in process analysis and mentoring. I definitely want to thank each one of our team members for all their work. We are currently the only Microsoft Gold ALM Partner within a 500-mile radius of Kansas City. If you or your team is in need of assistance with Team Foundation Server, Agile processes, Scrum mentoring, or just a process/team assessment, please feel free to give us a call. We also have practices focused on SharePoint, mobile development (iOS, Android, Windows Mobile), and custom software development with .NET. Technorati Tags: Gold Partner,ALM,Scrum,TFS,AJI Software

    Read the article

  • Is there such a thing as a "theory of system integration"?

    - by Jeff
    There is a plethora of different programs, servers, and technologies in use in organizations today. We programmers have lots of different tools at our disposal to help solve various data and communication challenges in an organization. Does anyone know if anyone has done any serious thinking about how systems are integrated? Let me give an example: hypothetically, let's say I own a company that makes specialized suits à la Iron Man. In the area of production, I have CAD tools, machining tools, payroll, project management, and asset management tools, to name a few. I also have a nice design space, where designers show off their designs on big displays, some touch, some traditional. Oh, and I also have one of these newfangled LEED Platinum buildings, and it has a number of computer-controlled systems, like smart window shutters that close when people are in the room, an HVAC system that adjusts depending on the number of people in the building, etc. What I want to know is if anyone has done any scientific work on trying to figure out how to hook all these pieces together, so that, say, my access control system is hooked to my payroll system and my phone system, allowing me never to swipe a time card and to have my phone follow me throughout the building. This problem is also more than a technology challenge. Every technology implementation enables certain human behaviors, so the human must also be considered as a part of the system. Has anyone done any work on how to effectively weave these components together? FYI: I am not trying to build a system. I want to know if anyone has thoroughly studied the process of doing a large integration project: how they develop their requirements, how they studied the human behaviors, etc.

    Read the article

  • Know a little of a lot or a lot of a little? [closed]

    - by Jeff V
    Possible Duplicate: Is it better to specialize in a single field I like, or expand into other fields to broaden my horizons? My buddy and I, who have been programming for 13 years or so, were talking this morning, and a question that came up was: is it better to know a little of a lot (i.e., web, desktop, VB.NET, C#, jQuery, PHP, Java, etc.) or a lot of a little (meaning an expert in something)? The context of this question is: what makes someone a senior programmer? Is it someone who has been around the block a few times and has been in many different situations, or someone who is locked into a specific technology and is super knowledgeable in that one technology? I see pros and cons of both scenarios. Just wondering what others thought.

    Read the article

  • A good way to build a game loop in OpenGL

    - by Jeff
    I'm currently beginning to learn OpenGL at school, and I started making a simple game the other day (on my own, not for school). I'm using freeglut and am building it in C, so for my game loop I had really just been passing a function I made to glutIdleFunc, which updated all the drawing and physics in one pass. This was fine for simple animations where I didn't care too much about the frame rate, but since the game is mostly physics based, I really want to (need to) tie down how fast it's updating. So my first attempt was to have the function I pass to glutIdleFunc (myIdle()) keep track of how much time has passed since the previous call to it, and update the physics (and currently the graphics) every so many milliseconds. I used timeGetTime() to do this (by using <windows.h>). And this got me thinking: is using the idle function really a good way of going about the game loop? My question is, what is a better way to implement the game loop in OpenGL? Should I avoid using the idle function?
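
    For reference, the structure being described is usually called a fixed-timestep (or accumulator) game loop: render as fast as you like, but advance physics in fixed increments. A minimal sketch, written in C# to match the other code on this page rather than the poster's C/freeglut setup (UpdatePhysics and Render are hypothetical stand-ins; in a freeglut program this logic would live in the idle or timer callback):

        using System.Diagnostics;

        class GameLoop
        {
            const double StepMs = 1000.0 / 60.0; // fixed physics step: 60 updates per second

            static void Main() => Run();

            static void Run()
            {
                var clock = Stopwatch.StartNew();
                double previous = clock.Elapsed.TotalMilliseconds;
                double accumulator = 0;

                while (true)
                {
                    double now = clock.Elapsed.TotalMilliseconds;
                    accumulator += now - previous;
                    previous = now;

                    // Advance physics in fixed steps, however long rendering took.
                    while (accumulator >= StepMs)
                    {
                        UpdatePhysics(StepMs);
                        accumulator -= StepMs;
                    }

                    Render();
                }
            }

            static void UpdatePhysics(double dtMs) { /* hypothetical physics update */ }
            static void Render() { /* hypothetical draw call */ }
        }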

    Read the article

  • Automated Website Testing/Sanity/Quality

    - by Jeff
    I am thinking about building a tool that starts from the root of a website and traverses the entire site, gathering a list of resources such as CSS/HTML/JavaScript files, and then runs CSS/JavaScript lint tools, an HTML validator, and a broken-link finder. Before I start building something like this, I was wondering if it already exists? Thanks. I already searched Google quite a bit and couldn't find much.
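
    As a rough sketch of the crawling half of such a tool (the root URL is a placeholder, and a real implementation would use an HTML parser rather than a regex; lint and validation passes would then run over the collected resources):

        using System;
        using System.Collections.Generic;
        using System.Net.Http;
        using System.Text.RegularExpressions;
        using System.Threading.Tasks;

        class SiteCrawler
        {
            static readonly HttpClient Http = new HttpClient();

            static async Task Main()
            {
                var root = new Uri("https://example.com/"); // hypothetical root
                var seen = new HashSet<string>();
                var queue = new Queue<Uri>(new[] { root });

                while (queue.Count > 0)
                {
                    var page = queue.Dequeue();
                    if (!seen.Add(page.AbsoluteUri)) continue;

                    var resp = await Http.GetAsync(page);
                    if (!resp.IsSuccessStatusCode)
                    {
                        Console.WriteLine($"BROKEN: {page} ({(int)resp.StatusCode})");
                        continue;
                    }

                    var html = await resp.Content.ReadAsStringAsync();

                    // Naive href/src extraction; stays within the starting host.
                    foreach (Match m in Regex.Matches(html, "(?:href|src)=[\"']([^\"']+)[\"']"))
                    {
                        var link = new Uri(page, m.Groups[1].Value);
                        if (link.Host == root.Host) queue.Enqueue(link);
                    }
                }
            }
        }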

    Read the article

  • Where can I find the supported way to deploy Hadoop on Precise?

    - by Jeff McCarrell
    I want to set up a small (6-node) hadoop/hive/pig cluster. I see the work in the juju space on charms; however, the current status of deploying a single charm per node will not work for me. I see the ServerTeam Hadoop page, which talks about re-packaging the Bigtop packages. The Cloudera CDH3 installation guide talks about Maverick and Lucid, but not Precise. What am I missing? Is there a straightforward way to deploy hadoop/hive/pig on 6 nodes that does not involve building from tarballs?

    Read the article

  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch soon. More:

    Part I: Evolution, and death to WCF
    Part II: Hot data objects
    Part III: The architecture using the "Web stack of love"

    If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection, and loosely couple the pieces of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing. Dependency injection is pretty straightforward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but their interfaces." Probably the first place a developer exercises this is when having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this:

        public class FooService
        {
            public FooService(IFooRepository fooRepo)
            {
                _fooRepo = fooRepo;
            }

            private readonly IFooRepository _fooRepo;

            public Foo GetMeFoo()
            {
                return _fooRepo.FooFromDatabase();
            }
        }

    When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you." Why is this good for you? It's good because your FooService doesn't know or care about how you get some foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC. What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when I have someone sign up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation, and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations. The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation:

        Unbind<PopForums.Email.INewAccountMailer>();
        Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();

    Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead. This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, and mail delivery failures). There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members. It uses the forum as the core piece for managing users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.
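
    For readers new to Ninject: those two lines live inside a module's Load() override. A minimal sketch of what such a module might look like (the class name here is illustrative, not from the actual CoasterBuzz source):

        using Ninject.Modules;

        // Loaded after the forum's own module, so these bindings take precedence.
        public class CoasterBuzzModule : NinjectModule
        {
            public override void Load()
            {
                // Swap the forum's mailer for the site-specific subclass.
                Unbind<PopForums.Email.INewAccountMailer>();
                Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();
            }
        }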

    Read the article

  • New Feature! Automatic Categories for Geekswithblogs.net

    - by Jeff Julian
    One of the features we have been working on is a way to categorize posts without needing all our bloggers to get on the same page about what categories we have and to select them manually. Johnny Kauffman, one of our team members at AJI Software, developed what we call the Sherlock Project over the past few months. Sherlock is a category suggestion engine based on the content within the posts. Now, after a post is published, Sherlock will investigate the content and come up with the suggested categories that the content fits in. This will now allow you to go to the specific topics you are interested in and see all the related posts. This is just the beginning, so many more opportunities will arise now that we have our content organized. One of the first features I will be adding is RSS feeds for each category and subcategory. If you are into ALM, we will have a feed for that! I hope you enjoy these, and the engine will continue to get better as we start testing the data. I hope you are as excited about this as I am :D. Technorati Tags: Geekswithblogs.net,Categories,Sherlock

    Read the article

  • Rebuilding CoasterBuzz, Part II: Hot data objects

    - by Jeff
    This is the second post, originally from my personal blog, in a series about rebuilding one of my Web sites, which has been around for 12 years. More:

    Part I: Evolution, and death to WCF

    After the rush to get moving on stuff, I temporarily lost interest. I went almost two weeks without touching the project, in part because the next thing on my backlog was doing up a bunch of administrative pages. So boring. Unfortunately, because most of the site's content is user-generated, you need some facilities for editing data. CoasterBuzz has a database full of amusement parks and roller coasters. The entities enjoy the relationships that you would expect, though they're further defined by "instances" of a coaster, so that one that has moved between parks is treated as a single coaster with different names and operational dates. And of course, there are pictures and news items, too. It's not horribly complex, except when you have to account for a name change and display just the newest name. In all previous versions, data access was straight SQL. As so much of the old code was rooted in 2003, with some changes in 2008, there wasn't much in the way of ORM frameworks going on then. Let me rephrase that: I mostly wasn't interested in ORMs. Since that time, I used a little LINQ to SQL in some projects, and a whole bunch of NHibernate while at Microsoft. Through all of that experience, I have to admit that these frameworks are often a bigger pain in the ass than not. They're great for basic CRUD operations, but when you start having all kinds of exotic relationships, they get difficult, and generate all kinds of weird SQL under the covers. The black box can quickly turn into a black hole. Sometimes you end up having to build all kinds of new expertise to do things "right" with a framework. Still, despite my reservations, I used the newer version of Entity Framework, with the "code first" modeling, in a science project and I really liked it. Since it's just a right-click away with NuGet, I figured I'd give it a shot here. My initial effort was spent defining the context class, which requires a bit of work because I deviate quite a bit from the conventions that EF uses, starting with table names. Then throw in some partial querying of certain tables (where you'll find image data), and splitting tables across several objects (navigation properties). I won't go into the details, because these are all things that are well documented around the Internet, but there was a minor learning curve there. The basics of reading data using EF are fantastic. For example, a roller coaster object has a park associated with it, as well as a number of instances (if it was ever relocated), and there also might be a big banner image for it. This is stupid easy to use because it takes one line of code in your repository class, and by the time you pass it to the view, you have a rich object graph that has everything you need to display stuff. Likewise, editing simple data is also, well, simple. For this goodness, thank the ASP.NET MVC framework. The UpdateModel() method on the controllers is very elegant. Remember the old days of assigning all kinds of properties to objects in your Webforms code-behind? What a time-consuming mess that used to be. Even if you're not using an ORM tool, having hydrated objects come off the wire is such a time saver. Not everything is easy, though. When you have to persist a complex graph of objects, particularly if they were composed in the user interface with all kinds of AJAX elements and list boxes, it's not just a simple matter of submitting the form. There were a few instances where I ended up going back to "old-fashioned" SQL just in the interest of time. It's not that I couldn't do what I needed with EF; it's just that the efficiency, both my own and that of the generated SQL, wasn't good. Since EF context objects expose a database connection object, you can use that to do the old-school ADO.NET stuff you've done for a decade. Using various extension methods from POP Forums' data project, it was a breeze. You just have to stick to your decision, in this case. When you start messing with SQL directly, you can't go back in the same code to messing with entities, because EF doesn't know what you're changing. Not really a big deal. There are a number of takeaways from using EF. The first is that you write a lot less code, which has always been a desired outcome of ORMs. The other lesson, and I particularly learned this the hard way working on the MSDN forums back in the day, is that trying to retrofit an ORM framework onto an existing schema isn't fun at all. The CoasterBuzz database isn't bad, but there are design decisions I'd make differently if I were starting from scratch. Now that I have some of this stuff done, I feel like I can start to move on to the more interesting things on the backlog. There's a lot to do, but at least it's fun stuff, and not more forms that will be used infrequently.
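
    As a sketch of the kind of convention-deviating mapping described above, assuming EF 4.1+ code-first (the entity and table names are hypothetical, not CoasterBuzz's actual schema):

        using System.Collections.Generic;
        using System.Data.Entity;

        public class Park { public int ParkId { get; set; } public string Name { get; set; } }
        public class Instance { public int InstanceId { get; set; } public string Name { get; set; } }

        public class Coaster
        {
            public int CoasterId { get; set; }
            public string Name { get; set; }
            public virtual Park Park { get; set; }                        // navigation property
            public virtual ICollection<Instance> Instances { get; set; }  // relocations over time
        }

        public class CoasterBuzzContext : DbContext
        {
            public DbSet<Coaster> Coasters { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Table names that deviate from EF's pluralization conventions
                // have to be mapped explicitly.
                modelBuilder.Entity<Coaster>().ToTable("RollerCoaster");
            }
        }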

    Read the article

  • Ubuntu reports low battery capacity on my Dell Vostro

    - by Jeff
    I have a Dell Vostro 1500. Before I wiped Windows XP off my hard drive in 2009, I had a full ~7 hours of battery life. I installed Ubuntu 9, and the capacity immediately dropped to about 27% (and has since decreased to about 11%). I couldn't figure out what to do, so I've just lived with the 20-30 minute battery life ever since. I upgraded to Ubuntu 10, and the issue remained. I wiped my hard drive clean again and installed Ubuntu 11, and the issue still remains. I tried what they told me in the forum here, but it didn't do anything. Is it possible for a battery to suddenly lose most of its capacity? Or is there a bug in the power management software?

    Read the article

  • Best Ruby Git library?

    - by Jeff Welling
    Which is the best Git library to use in Ruby: Git, Grit, Rugged, or something else? Background: I'm the current maintainer of TicGit-ng, which is a distributed offline ticket system built on git. I've read and heard over and over again that Grit is the one I should use, because it supersedes the Git gem, but there seems to be either a lack of documentation or a lack of features, because others and I have failed in trying to switch from the deprecated-but-functional Git gem to the newer Grit gem.

    Read the article

  • Podcast with AJI about iOS development coming from a .NET background

    - by Tim Hibbard
    I talked with Jeff and John from AJI Software the other day about developing for the iOS platform. We chatted about learning Xcode and Objective-C, provisioning devices and the app publishing process. We all have a .NET background and made lots of comparisons between the two platforms/ecosystems/fanbois. They even let me throw in a plug for Christian Radio Locator. Jeff was my first contact with the Kansas City .NET community. It was probably about 10 years ago. He pushed me to talk more (and rescued me from my first talk that bombed) and blog more. One time a group of us took a 16 hour car trip to South Carolina for a code camp and live podcasted the whole thing. Good times. Listen to the show. Click here to subscribe to more AJI Reports in the future.

    Read the article

  • Why can't mythtv backend see my HDHomeRun tuner?

    - by Jeff
    Ran a fresh install of Mythbuntu 12.04.1 32-bit. Went through the backend setup and was able to see my HDHomeRun Prime (HDHR3) device, scan, and detect channels. Cannot get to live TV on the frontend. The backend log seems to indicate it's not detecting my tuner, even though I configured it in the setup:

        Sep 8 15:03:33 Dimension-8300 mythbackend[1668]: E TVRecEvent dtvmultiplex.cpp:325 (ParseTuningParams) DTVMux: ParseTuningParams -- Unknown tuner type = 0x2000
        Sep 8 15:03:33 Dimension-8300 mythbackend[1668]: E TVRecEvent dtvchannel.cpp:308 (SetChannelByString) DTVChan(192.168.1.12-0): SetChannelByString(3_1): Failed to initialize multiplex options
        Sep 8 15:03:33 Dimension-8300 mythbackend[1668]: E TVRecEvent tv_rec.cpp:3681 (TuningFrequency) TVRec(1): Failed to set channel to 3_1. Reverting to kState_None

    Read the article

  • Deliberate Practice

    - by Jeff Foster
    It’s easy to assume, as software engineers, that there is little need to “practice” writing code. After all, we write code all day long! Just by writing a little each day, we’re constantly learning and getting better, right? Unfortunately, that’s just not true. Of course, developers do improve with experience. Each time we encounter a problem we’re more likely to avoid it next time. If we’re in a team that deploys software early and often, we hone and improve the deployment process each time we practice it. However, not all practice makes perfect. To develop true expertise requires a particular type of practice, deliberate practice, the only goal of which is to make us better programmers. Everyday software development has other constraints and goals, not least the pressure to deliver. We rarely get the chance in the course of a “sprint” to experiment with potential solutions that are outside our current comfort zone. However, if we believe that software is a craft, then it’s our duty to strive continuously to raise the standard of software development. This requires specific and sustained efforts to get better at something we currently can’t do well (from Harvard Business Review July/August 2007). One interesting way to introduce deliberate practice, in a sustainable way, is the code kata. The term kata derives from martial arts and refers to a set of movements practiced either solo or in pairs. One of the better-known examples is the Bowling Game kata by Bob Martin, the goal of which is simply to write some code to do the scoring for 10-pin bowling. It sounds too easy, right? What could we possibly learn from such a simple example? Trust me, though, that it’s not as simple as five minutes of typing and a solution. Of course, we can reach a solution in a short time, but the important thing about code katas is that we explore each technique fully and in a controlled way. We tackle the same problem multiple times, using different techniques and making different decisions, understanding the ramifications of each one, and exploring edge cases. The short feedback loop optimizes opportunities to learn. Another good example is Conway’s Game of Life. It’s a simple problem to solve, but try solving it in a functional style. If you’re used to mutability, solving the problem without mutating state will push you outside of your comfort zone. Similarly, if you try to solve it with the focus of “tell-don’t-ask”, how will the responsibilities of each object change? As software engineers, we don’t get enough opportunities to explore new ideas. In the middle of a development cycle, we can’t suddenly start experimenting on the team’s code base. Code katas offer an opportunity to explore new techniques in a safe environment. If you’re still skeptical, my challenge to you is simply to try it out. Convince a willing colleague to pair with you and work through a kata or two. It only takes an hour, and I’m willing to bet you learn a few new things each time. The next step is to make it a sustainable team practice. Start with an hour every Friday afternoon (after all, who wants to commit code to production just before they leave for the weekend?) for a month and see how that works out. Finally, consider signing up for the Global Day of Code Retreat. It’s like a daylong code kata; it’s on December 8th, and there’s probably an event in your area!
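
    To make the functional variation concrete, here is one possible shape of a Game of Life step with no mutable state, as a minimal C# sketch (the set-of-live-cells representation is my own choice, not part of the kata):

        using System.Collections.Generic;
        using System.Linq;

        static class Life
        {
            // One generation as a pure function: the input set is never mutated.
            public static HashSet<(int X, int Y)> Step(HashSet<(int X, int Y)> live)
            {
                var neighborCounts = live
                    .SelectMany(Neighbors)
                    .GroupBy(c => c)
                    .ToDictionary(g => g.Key, g => g.Count());

                // A cell lives next generation if it has exactly 3 live neighbors,
                // or 2 live neighbors and is currently alive.
                var next = neighborCounts
                    .Where(kv => kv.Value == 3 || (kv.Value == 2 && live.Contains(kv.Key)))
                    .Select(kv => kv.Key);
                return new HashSet<(int X, int Y)>(next);
            }

            static IEnumerable<(int X, int Y)> Neighbors((int X, int Y) c)
            {
                for (int dx = -1; dx <= 1; dx++)
                    for (int dy = -1; dy <= 1; dy++)
                        if (dx != 0 || dy != 0)
                            yield return (c.X + dx, c.Y + dy);
            }
        }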

    Read the article

  • Brightness controls don't work on a MacBook Pro 5.5

    - by Jeff Labonte
    I recently installed Ubuntu on my MacBook Pro 5.5 (mid 2009), and I have a problem with the brightness controls. When I try to reduce the brightness of my display, which would help my battery life dramatically, it doesn't work. I tried to use the system preferences, but with no success. I also checked whether anything changes when I disconnect the computer from the charger; the screen dims, but once again I had no luck controlling it. I have tried many things, such as pommed, and many other small fixes I have read about on forums.

    Read the article

  • open source database project

    - by Jeff V
    What is the best way to build an open source database? I would like to build a database of all vehicles and the related maintenance information (e.g., oil weight and quantity, tire pressure, windshield wipers, etc.). Currently this information is fragmented, or just not put online in an open way. Once collection begins, I would want to import the data into a DB and then be able to distribute it freely. Is there a process (site or group) through which I can start gathering this information in a reliable and verifiable way? Are there any issues that I should watch out for?

    Read the article

  • How to diagnose Ubuntu CPU spikes / IO wait?

    - by Jeff Welling
    I'm using Ubuntu, and every couple of minutes it goes unresponsive for half a second to a full second. This isn't normally a problem, but it makes trying to code extremely frustrating, when you're trying to hit backspace or navigate the code and nothing is happening. The problem is, the freezes are so brief that top doesn't have time to show me what is spiking the CPU (assuming something is, but I don't know what else could cause this). Does anyone know how to troubleshoot this performance issue? Edit: I've tried logging in with GNOME Classic (No Effects) instead of Unity, but it still freezes up every once in a while. Edit: The CPU graph doesn't seem to be showing any actual spikes, so it seems you were right and my original diagnosis of CPU spikes being the problem was incorrect; I now suspect IO wait. I don't recall this happening for the brief few weeks I had Windows 7 Starter running on it, though, which leads me to believe it isn't (just?) the hardware... is there anything I can tweak to improve this? I'm using an Acer Aspire One D257, with Ubuntu 11.10. Edit: Output of dmesg is at http://paste.ubuntu.com/1060054/ and kern.log is at http://paste.ubuntu.com/1060055/

    Read the article

  • New Look for Geekswithblogs.net Homepage

    - by Jeff Julian
    I wanted to alert everyone to the new look of the Geekswithblogs.net Community Page.  I removed the tabs, cleaned up the posts and fonts, replaced the logo with our brighter logo, and mucked with the CSS and HTML to drive a smaller footprint.  With this update, the homepage is now HALF THE SIZE in KBs!  I still have some more AJAX calls I want to implement to make the footprint even smaller. Let me know what you think.  I feel it is easier to read through the posts now.

    Read the article

  • Bummer | Visual Studio 2012 Error on Web Publish – July Update

    - by Jeff Julian
    Always a bummer when you update a product and something stops working. I am hoping it is an installation issue, but each time I run “Publish...” on my Web Application, the publish works, but Visual Studio 2012 crashes. I first noticed this after I ran the Visual Studio 2012 RC July updates. Can someone else give it a go and see if they see the same problem? I am using File System publishing. Technorati Tags: Visual Studio 2012 RC,Error

    Read the article

  • Learn Many Languages

    - by Jeff Foster
    My previous blog, Deliberate Practice, discussed the need for developers to “sharpen their pencil” continually, by setting aside time to learn how to tackle problems in different ways. However, the Sapir-Whorf hypothesis, a contested and somewhat controversial concept from language theory, seems to hold reasonably true when applied to programming languages. It states that: “The structure of a language affects the ways in which its speakers conceptualize their world.” If you’re constrained by a single programming language, the one that dominates your day job, then you only have the tools of that language at your disposal to think about and solve a problem. For example, if you’ve only ever worked with Java, you would never think of passing a function to a method. A good developer needs to learn many languages. You may never deploy them in production, you may never ship code with them, but by learning a new language, you’ll have new ideas that will transfer to your current “day-job” language. With the abundant choices in programming languages, how does one choose which to learn? Alan Perlis sums it up best: “A language that doesn’t affect the way you think about programming is not worth knowing.” With that in mind, here’s a selection of languages that I think are worth learning and that have certainly changed the way I think about tackling programming problems.

    Clojure

    Clojure is a Lisp-based language running on the Java Virtual Machine. The unique property of Lisp is homoiconicity, which means that a Lisp program is a Lisp data structure, and vice-versa. Since we can treat Lisp programs as Lisp data structures, we can write our code generation in the same style as our code. This gives Lisp a uniquely powerful macro system, and makes it ideal for implementing domain specific languages. Clojure also makes software transactional memory a first-class citizen, giving us a new approach to concurrency and dealing with the problems of shared state.

    Haskell

    Haskell is a strongly typed, functional programming language. Haskell’s type system is far richer than C# or Java, and allows us to push more of our application logic to compile-time safety. If it compiles, it usually works! Haskell is also a lazy language – we can work with infinite data structures. For example, in a board game we can generate the complete game tree, even if there are billions of possibilities, because the values are computed only as they are needed.

    Erlang

    Erlang is a functional language with a strong emphasis on reliability. Erlang’s approach to concurrency uses message passing instead of shared variables, with strong support from both the language itself and the virtual machine. Processes are extremely lightweight, and garbage collection doesn’t require all processes to be paused at the same time, making it feasible for a single program to use millions of processes at once, all without the mental overhead of managing shared state.

    The Benefits of Multilingualism

    By studying new languages, even if you won’t ever get the chance to use them in production, you will find yourself open to new ideas and ways of coding in your main language. For example, studying Haskell has taught me that you can do so much more with types and has changed my programming style in C#. A type represents some state a program should have, and a type should not be able to represent an invalid state.
    I often find myself refactoring methods like this…

        void SomeMethod(bool doThis, bool doThat)
        {
            if (!doThis && !doThat)
                throw new ArgumentException("At least one arg should be true");
            if (doThis) DoThis();
            if (doThat) DoThat();
        }

    …into a type-based solution, like this:

        enum Action { DoThis, DoThat, Both };

        void SomeMethod(Action action)
        {
            if (action == Action.DoThis || action == Action.Both) DoThis();
            if (action == Action.DoThat || action == Action.Both) DoThat();
        }

    At this point, I’ve removed the runtime exception in favor of a compile-time check. This is a trivial example, but is just one of many ideas that I’ve taken from one language and implemented in another.
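
    And to make the earlier point about passing a function to a method concrete, a minimal C# sketch (not from the original post):

        using System;

        class FirstClassFunctions
        {
            // The caller passes behavior, not just data: f decides the transformation.
            static int ApplyTwice(Func<int, int> f, int x) => f(f(x));

            static void Main()
            {
                Console.WriteLine(ApplyTwice(n => n * 2, 5)); // prints 20
            }
        }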

    Read the article

  • Installing Ubuntu Server 12.04 as a software RAID 1 mirror fails to boot

    - by Jeff Atwood
    I'm installing a few new Ubuntu Server 12.04 LTS servers, and they have two 512 GB SSDs. I want them to use software RAID 1 mirroring, so I was following this document religiously, step by step: https://help.ubuntu.com/12.04/serverguide/advanced-installation.html To summarize the above official documentation: to set up a software RAID 1 mirror in Ubuntu Server, you choose manual partitioning during the setup, and create these on each drive:

    - a "swap" partition of roughly RAM size
    - a "physical volume for RAID" partition for the remaining drive size

    After that, you set up the RAID 1 mirror using the RAID partitions on drives A and B, and make it an ext4 partition containing the root filesystem. Setup continues from there just fine. One caveat: I was completely unable to mark the "physical volume for RAID" partition as bootable. When I tried to do that in setup, it had no effect: I could press Enter on the "make bootable" option all day long and nothing would ever change. However, after the install successfully completes, I have a big problem: the system won't boot! I get:

        Reboot and Select proper boot device or Insert Boot Media in selected Boot device and press a key

    What did I do wrong? Why can't I mark that "physical volume for RAID" partition bootable during Ubuntu Server setup? Is there some way for me to make the physical volumes for RAID bootable after the fact, perhaps from a live CD or something?

    Read the article
