Search Results

Search found 3797 results on 152 pages for 'talk'.


  • From 0 to MVP in 4 weeks

    - by fatherjack
    You may know from my previous posts that I have just started a local SQL Server User Group. Three weeks ago there was no such group within 100 miles, and then we had a meeting. Now, in eight days' time, there is going to be a second meeting, and I am very excited to be able to say that we will be having an MVP speaker for one of the sessions. Aaron Nelson (Blog|Twitter) made an incredibly generous offer of speaking for us on using PowerShell with SQL Server and I didn't hang around before I said "Yes...(read more)

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements.

    In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know they need to respond in earnest the first time they hear about an issue, even if it has been happening every day.

    Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. However, without baselines, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months.

    It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparisons of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!

    Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
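    As a concrete illustration of that pattern, here is a minimal sketch of an hourly wait-statistics baseline. The BaseLine schema and table are hypothetical names, not anything the article prescribes; only the sys.dm_os_wait_stats DMV is real:

        -- One-off setup: somewhere to keep the snapshots (names are illustrative).
        CREATE SCHEMA BaseLine;
        GO
        CREATE TABLE BaseLine.WaitStats
        (
            wait_type           nvarchar(60) NOT NULL,
            waiting_tasks_count bigint       NOT NULL,
            wait_time_ms        bigint       NOT NULL,
            captureTime         datetime2    NOT NULL
        );
        GO
        -- The query to put in the agent job, running every hour or so.
        INSERT INTO BaseLine.WaitStats (wait_type, waiting_tasks_count, wait_time_ms, captureTime)
        SELECT wait_type, waiting_tasks_count, wait_time_ms, SYSDATETIME()
        FROM sys.dm_os_wait_stats;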

    Read the article

  • Antenna Aligner Part 8: It's Alive!!!

    - by Chris George
    Finally the day has come: Antenna Aligner v1.0.1 has been uploaded to the AppStore and... "Waiting for review"... fast forward 7 days and much checking of emails later... WOO HOO! Now what? So I set my Facebook page to go live, https://www.facebook.com/AntennaAligner, and started by sending messages to my mates that have iPhones! Amazingly a few of them bought it! Similarly, some of my colleagues were also kind enough to support me and downloaded it too! Unfortunately, the only way I knew they had bought it was from them telling me, as the iTunes Connect data is only updated daily at about midday GMT. This is a shame; surely they could provide more granular updates throughout the day? Although I suppose once an app has been out in the wild for a while, daily updates are enough. It would, however, be nice to get a ping when you make your first sale! I would have expected more feedback on my Facebook page as well; maybe I'm just expecting too much, or perhaps I've configured the page wrong. The new Facebook timeline layout is just confusing, and I'm not sure it's all public. I'll check that! So please take a look and see what you think! I would love to get some more feedback/reviews/suggestions... Oh, and watch out for the Android version coming soon!

    Read the article

  • The Art of Dealing with People

    Technical people generally don't easily adapt to being good salespeople. When a technical person takes on a customer-facing role as a support engineer, there are a whole lot of new skills required. Dr Petrova relates how the experience of a change in job gave her a new respect for the skills of sales and marketing.

    Read the article

  • Essential Tools for the WPF Novice

    When Michael sets out to do something, there are no half-measures; so when he set out to learn WPF, we all stand to benefit from the thorough research that he put into the task. He wondered what utility applications could assist programming in WPF. Here are the fruits of all his work.

    Read the article

  • Crime Scene Investigation: SQL Server

    - by Rodney Landrum
    “The packages are running slower in Prod than they are in Dev.” My week began with this simple declaration from one of our lead BI developers, quickly followed by an emailed spreadsheet demonstrating that, over 5 executions, an extensive ETL process was running on average 630 seconds faster on Dev than on Prod. The situation needed some scientific investigation to determine why the same code, the same data, and the same schema would yield consistently slower results on a more powerful server. Prod had yet to be officially christened with a “Go Live” date, so I had the time, and, having recently been binge-watching CSI: New York, I also had the inclination.

    An inspection of the two systems, Prod and Dev, revealed the first surprise: although Prod was indeed a “bigger” system, with double the amount of RAM of Dev, the latter actually had twice as many processor cores. On neither system did I see much sign of resources being heavily taxed while the ETL process was running. Without any real supporting evidence, I jumped to a conclusion that my years of performance tuning should have helped me avoid: that the hardware differences explained the better performance on Dev. We spent time setting up a Test system, similarly scoped to Prod except with 4 times the cores, and ported everything across. The results of our careful benchmarks left us truly bemused; the ETL process on the new server was slower than on both other systems. We burned more time tweaking server configurations, monitoring IO and network latency, several times believing we’d uncovered the smoking gun, until the results of subsequent test runs pitched us back into confusion.

    Finally, I decided, enough was enough. Hadn’t I learned very early in my DBA career that almost all bottlenecks are caused by code and database design, not hardware? It was time to get back to basics. With over 100 SSIS packages and hundreds of queries, each handling specific tasks such as file loads, bulk inserts, transforms, logging, and so on, the task seemed formidable. And yet, after barely an hour spent with Profiler, Extended Events, and the wait statistics DMVs, I had a lead in the shape of a query that joined three tables containing millions of rows, returned 3279 results, but performed 239K logical reads. As soon as I looked at the execution plans for the query in Dev and Test, I saw the culprit: an implicit conversion warning on a join predicate field that was numeric in one table and a varchar(50) in another! I turned this information over to the BI developers, who quickly resolved the data type mismatches and found and fixed “several” others as well. After the schema changes, the same query with the same databases ran in under 1 second on all systems, and the logical reads dropped to fewer than 300.

    The analysis also revealed that, on Dev, the ETL task was pulling data across a LAN, whereas Prod and Test were connected across a slower WAN, in large part explaining why the same process ran slower on the latter two systems. Loading the data locally on Prod delivered a further 20% gain in performance.

    As we progress through our DBA careers we learn valuable lessons. Sometimes, with a project deadline looming and pressure mounting, we choose to forget them. I was close to giving in to the temptation to throw more hardware at the problem. I’m pleased at least that I resisted, though I still kick myself for not looking at the code on day one.
    It can seem a daunting prospect to return to the fundamentals of the code so close to roll-out, but with the right tools, and surprisingly little time, you can collect the evidence that reveals the true problem. It is a lesson I trust I will remember for my next 20 years as a DBA, if I’m ever again tempted to bypass the evidence.
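    To see the kind of mismatch Rodney describes, here is a minimal sketch with invented table and column names (the article doesn't name the real schema):

        -- Illustrative tables: the key is int on one side, varchar(50) on the other.
        CREATE TABLE dbo.Orders     (OrderKey int         NOT NULL PRIMARY KEY);
        CREATE TABLE dbo.OrderStage (OrderKey varchar(50) NOT NULL);

        -- int outranks varchar in type precedence, so every OrderStage.OrderKey is
        -- implicitly converted row by row: the plan carries a conversion warning,
        -- and an index on the varchar column can't be seeked.
        SELECT o.OrderKey
        FROM dbo.Orders AS o
        JOIN dbo.OrderStage AS s ON o.OrderKey = s.OrderKey;

        -- One way to align the types, similar in spirit to the fix described:
        ALTER TABLE dbo.OrderStage ALTER COLUMN OrderKey int NOT NULL;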

    Read the article

  • I'm blogging again, and about time too

    - by fatherjack
    No, seriously, this one is about time. I recently had an issue in a work database where a query was giving random results: sometimes the query would return a row and other times it wouldn't. It took quite a bit of work distilling the query down to find the reason, and I'll try to explain what was happening using some sample data in a table, with a rather contrived use case. Let's assume we have a table that is designed to have a start and end date for something, maybe...(read more)
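    The excerpt is cut off here, but one plausible reproduction of a "sometimes it returns a row, sometimes it doesn't" date-range query, sketched with invented names and data (the post builds its own example), is a date-only end date compared against the current time:

        -- Hypothetical table: EndDate is stored with no time portion (midnight).
        CREATE TABLE dbo.Offers
        (
            OfferName nvarchar(50) NOT NULL,
            StartDate datetime     NOT NULL,
            EndDate   datetime     NOT NULL
        );
        INSERT INTO dbo.Offers VALUES (N'Spring sale', '20120401', '20120430');

        -- Any time on 29 April this returns the row; at 09:00 on 30 April it
        -- returns nothing, because GETDATE() is already past the midnight
        -- that '20120430' actually represents.
        SELECT OfferName
        FROM dbo.Offers
        WHERE GETDATE() BETWEEN StartDate AND EndDate;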

    Read the article

  • Protect and Improve your Software with SmartAssembly 5

    - by Bart Read
    SmartAssembly 5 has been released. You can download a 14-day fully-functional free trial from: http://www.red-gate.com/products/smartassembly/index.htm This is the first major release since Red Gate acquired the tool last year, and our focus has mainly been on improving the quality of an already great tool. We've also simplified the licensing model so that there are now only three editions: Standard - bullet-proof protection at a bargain price, Pro - includes the SDK & custom web server...(read more)

    Read the article

  • Just interviewed, turned down, now got an email asking to chat with recruiter. No response. What should I do? [closed]

    - by Lambert
    I was turned down after two interviews by a prominent company for an internship, and only a couple of days later I was asked when I had 10-15 minutes to chat today. Of course I'd love to, so I emailed back within just 10 minutes, let them know what times I was available, and asked when would be best, and whether I should go somewhere or expect a phone call. There has been no reply from them since yesterday afternoon, even though the recruiter wanted to talk to me today. I don't want to lose this opportunity, but I have no way to contact the recruiter other than by email, and she hasn't responded to my emails from yesterday. What's the best thing I can do (preferably within the next few hours!) to get the job? Is that even likely to be why she emailed me, or is a different reason more probable? Any ideas?

    Read the article

  • Smartassembly 5: it lives! Early Access builds now available

    - by Bart Read
    I'm pleased to announce that, late last week, we put out the first early access build for Smartassembly 5, Red Gate's fantastic code protection and error reporting tool, which we acquired last September. You can download it via: http://www.red-gate.com/messageboard/viewforum.php?f=116 It's obviously pretty early days, so please do not try to use this to protect a production application, but we've already done a lot of work in some key areas: We're simplifying and streamlining the licensing model (you won't see this yet, but a lot of the work on this has already been done). We've improved usability of the product, with a better menu, reordering of project settings, and better defaults. We've also fixed a load of bugs, which I'll let Alex blog about in more detail. On a slightly more trivial level, the curly braces are also no more. Over the coming weeks, we'll be adding more improvements, and starting usability tests. If you're interested in getting involved in the latter, please drop an email to [email protected].

    Read the article

  • Migrating from OCS 2007 R2 to Lync: Part 2

    In the story so far, Johan has described how to check that the migration from OCS to Lync is supported, and how to determine the requirements for the new installation. This was followed by a walk-through of the preparation of Active Directory and the installation of the first Lync Front End Server with a Mediation Server co-located. Now Johan tackles merging the OCS configuration and connecting to the outside world, followed by testing, performing, and then validating the migration.

    Read the article

  • DBA in Space

    - by Neil Davidson
    Every now and then, you come across an idea that makes your heart jump and your skin tingle. That happened to me a few months ago, when Richard and Anthony pitched a small group of us an idea. "It's called DBA in Space", Richard said. I don't remember the rest of the pitch. "DBA in Space" is one of those phrases that's so simple, remarkable and clear that it sticks and it sticks hard. Sure, lots of people have done much hard, gritty work over the past few months to make it happen. Sure, there's...(read more)

    Read the article

  • Getting baseline and performance stats - the easy way.

    - by fatherjack
    OK, pretty much any DBA worth their salt has read Brent Ozar's (Blog | Twitter) blog about getting a baseline of your server's performance counters, and then getting the same counters at regular intervals afterwards, so that you can track performance trends and evidence how you are making your servers faster, or helping them cope with extra load, without costing your boss any money for hardware upgrades. No? Well, go read it now. I can wait a while, as there is a great video there too... http://www.brentozar.com/archive/2006/12/dba-101-using-perfmon-for-sql-performance-tuning/,...(read more)
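    If you want to capture the same counters without leaving SQL Server, a DMV exposes them directly. A minimal sketch (the baseline table and choice of counters are illustrative; this is the same idea server-side, not Brent's method):

        -- Snapshot a couple of headline Perfmon counters into a baseline table.
        CREATE TABLE dbo.CounterBaseline
        (
            object_name   nchar(128) NOT NULL,
            counter_name  nchar(128) NOT NULL,
            instance_name nchar(128) NULL,
            cntr_value    bigint     NOT NULL,
            captureTime   datetime2  NOT NULL
        );
        GO
        INSERT INTO dbo.CounterBaseline (object_name, counter_name, instance_name, cntr_value, captureTime)
        SELECT object_name, counter_name, instance_name, cntr_value, SYSDATETIME()
        FROM sys.dm_os_performance_counters
        WHERE counter_name IN (N'Batch Requests/sec', N'Page life expectancy');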

    Read the article

  • A Look at Exceptions in .NET Applications

    Memory dumps are a wonderful way of finding out what caused an exception in a managed .NET application, particularly if it is happening in a production application. It is when tracking exceptions in applications where you can't use Visual Studio that the techniques of using cdb and sos.dll come into their own. They may not be skills that you need to use regularly, but at some point they will be invaluable. Edward supplies sample memory dumps and gives you a simple introduction.

    Read the article

  • Web.NET is Closing Fast

    - by Chris Massey
    The voting for sessions has now closed, and sadly only half of the potential sessions could make it through. On the plus side, the sessions that floated to the top look great and, with the votes in, Simone and Ugo have moved right along and created a draft agenda to whet our appetites. Take a look, and let them know what you think. I’d also strongly recommend that you get ready to grab your tickets when they become available next week (specifically, September 18th), as places are going to be snapped up fast. In case you need a reminder as to why Web.NET is worth your time:
    • Complete focus on web development
    • Awesome sessions
    • All-night hackathon
    • Free (although I urge you to make a donation to help Simone and Ugo create the best possible event)
    Put October 20th in your calendar, and start packing. I’ve already booked my flights, and am perusing the list of hotels while I eat my lunch.
    Bonus Material
    There will be a full day of RavenDB training on Monday the 22nd of October, run by Ayende himself, and attending Web.NET will get you a 30% discount on the cost of the session.

    Read the article

  • Exceptional PowerShell DBA Pt 3 - Collation and Fragmentation

    In this final look into his everyday essentials, Laerte Junior provides some useful scripts for the DBA that use an alternative way of error-logging. He shows how to use a PowerShell script to check and, if necessary, to defragment your indexes, write data to a SQL Server table, and change the collation for a table. Being an exceptional DBA just got a little easier.
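    The scripts themselves are PowerShell, but the fragmentation check at their heart can be tried directly in T-SQL against the DMV such scripts typically query. A sketch, with an illustrative 30% threshold:

        -- Indexes in the current database above an (illustrative) fragmentation threshold.
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
             ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 30
              AND i.name IS NOT NULL;  -- heaps have no index name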

    Read the article

  • Online accounts advanced setting with Empathy (13.10)

    - by uruloke
    The new Online Accounts doesn't have the advanced settings that the Empathy accounts had. How do I change the Google server to connect to? I read this at https://wiki.gnome.org/Empathy/FAQ: "I can't connect to my Google Talk account: Your router is probably blocking DNS SRV requests. If possible you should try to fix it. If you can't, the easiest workaround is to set "talk.google.com" in the "Server" field of the advanced section of the account." So I think this might fix my problem, or maybe I just need an option to shift the port it connects to. Also, does anyone know how to join IRC channels with Empathy? I have installed the plugin, but I don't know how to join a channel.

    Read the article

  • Got that Friday feeling?

    - by Rebecca Amos
    Saturday is just around the corner, and we’re all starting to wrap up for the weekend. If you’re the DBA, that ‘Friday feeling’ might be as much about checking and preparing your SQL Servers for the next two days as about looking forward to spending time with friends and family. Whether you’re double-checking your disaster recovery strategy, or know that it’s your turn to be on-call this weekend, it’s likely you’re preparing for the worst, just in case. The fact that you’re making these checks, and caring about both your servers and your users, means that you might be an exceptional DBA. You’re already putting in that extra effort to make other people’s lives easier. So why not take some time for your professional development and enter the Exceptional DBA Awards? If you’re looking for some inspiration for your entry, download our Judges’ Top Tips poster for advice on what the judges are looking for from this year’s entrants. Not only will you be boosting your professional development, but you could win full conference registration for the 2011 PASS Summit in Seattle (where the awards ceremony will take place), four nights' hotel accommodation, and a copy of Red Gate’s SQL DBA Bundle. So take some time out for yourself this weekend and get started on your entry: www.exceptionaldba.com

    Read the article

  • Subterranean IL: Exception handling 1

    - by Simon Cooper
    Today, I'll be starting a look at the Structured Exception Handling mechanism within the CLR. Exception handling is quite a complicated business, and, as a result, the rules governing exception handling clauses in IL are quite strict; you need to be careful when writing exception clauses in IL.

    Exception handlers

    Exception handlers are specified using a .try clause within a method definition:

        .try <TryStartLabel> to <TryEndLabel> <HandlerType> handler <HandlerStartLabel> to <HandlerEndLabel>

    As an example, a basic try/catch block would be specified like so:

        TryBlockStart:
            // ...
            leave.s CatchBlockEnd
        TryBlockEnd:
        CatchBlockStart:
            // at the start of a catch block, the exception thrown is on the stack
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s CatchBlockEnd
        CatchBlockEnd:
            // method code continues...

        .try TryBlockStart to TryBlockEnd catch [mscorlib]System.Exception handler CatchBlockStart to CatchBlockEnd

    There are four different types of handler that can be specified:
    • catch <TypeToken>: This is the standard exception catch clause; you specify the object type that you want to catch (for example, [mscorlib]System.ArgumentException). Any object can be thrown as an exception, although Microsoft recommend that only classes derived from System.Exception are thrown as exceptions.
    • filter <FilterLabel>: A filter block allows you to provide custom logic to determine if a handler block should be run. This functionality is exposed in VB, but not in C#.
    • finally: A finally block executes when the try block exits, regardless of whether an exception was thrown or not.
    • fault: This is similar to a finally block, but a fault block executes only if an exception was thrown. This is not exposed in VB or C#.
    You can specify multiple catch or filter handling blocks in each .try, but fault and finally handlers must have their own .try clause. We'll look into why this is in later posts.

    Scoped exception handlers

    The .try syntax is quite tricky to use; it requires multiple labels, and you've got to be careful to keep the different exception handling sections separate. However, starting from .NET 2, IL allows you to use scope blocks to specify exception handlers instead. Using this syntax, the example above can be written like so:

        .try {
            // ...
            leave.s EndSEH
        }
        catch [mscorlib]System.Exception {
            callvirt instance string [mscorlib]System.Object::ToString()
            call void [mscorlib]System.Console::WriteLine(string)
            leave.s EndSEH
        }
        EndSEH:
        // method code continues...

    As you can see, this is much easier to write (and read!) than a stand-alone .try clause. Next time, I'll be looking at some of the restrictions imposed by SEH on control flow, and how the C# compiler generates exception handling clauses.

    Read the article

  • ReSharper C# Live Template for Read-Only Dependency Property and Routed Event Boilerplate

    - by Bart Read
    Following on from my previous post, where I shared a Live Template for quickly declaring a normal read-write dependency property and its associated property change event boilerplate, here's an unsurprisingly similar template for creating a read-only dependency property.

        #region $PROPNAME$ Read-Only Property and Property Change Routed Event

        private static readonly DependencyPropertyKey $PROPNAME$PropertyKey =
            DependencyProperty.RegisterReadOnly(
                "$PROPNAME$", typeof( $PROPTYPE$ ), typeof( $DECLARING_TYPE$ ),
                new PropertyMetadata( $DEF_VALUE$, On$PROPNAME$Changed ) );

        public static readonly DependencyProperty $PROPNAME$Property =
            $PROPNAME$PropertyKey.DependencyProperty;

        public $PROPTYPE$ $PROPNAME$
        {
            get { return ( $PROPTYPE$ ) GetValue( $PROPNAME$Property ); }
            private set { SetValue( $PROPNAME$PropertyKey, value ); }
        }

        public static readonly RoutedEvent $PROPNAME$ChangedEvent =
            EventManager.RegisterRoutedEvent(
                "$PROPNAME$Changed",
                RoutingStrategy.$ROUTINGSTRATEGY$,
                typeof( RoutedPropertyChangedEventHandler< $PROPTYPE$ > ),
                typeof( $DECLARING_TYPE$ ) );

        public event RoutedPropertyChangedEventHandler< $PROPTYPE$ > $PROPNAME$Changed
        {
            add { AddHandler( $PROPNAME$ChangedEvent, value ); }
            remove { RemoveHandler( $PROPNAME$ChangedEvent, value ); }
        }

        private static void On$PROPNAME$Changed(
            DependencyObject d, DependencyPropertyChangedEventArgs e )
        {
            var $DECLARING_TYPE_var$ = d as $DECLARING_TYPE$;
            var args = new RoutedPropertyChangedEventArgs< $PROPTYPE$ >(
                ( $PROPTYPE$ ) e.OldValue,
                ( $PROPTYPE$ ) e.NewValue );
            args.RoutedEvent = $DECLARING_TYPE$.$PROPNAME$ChangedEvent;
            $DECLARING_TYPE_var$.RaiseEvent( args );$END$
        }

        #endregion

    The only real difference here is the addition of the DependencyPropertyKey, which allows your implementation to set the value of the dependency property without exposing the setter code to consumers of your type. You'll probably find that you create read-only dependency properties much less often than read-write properties, but this should still save you some typing when you do need to do so.

    Technorati Tags: resharper, live template, c#, dependency property, read-only, routed events, property change, boilerplate, wpf

    Read the article

  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There could only be two reasons for this. Either the developer or DBA didn’t know the difference between good and bad practices, or concealed the debt. Neither reflects well on their professional competence.

    Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you’re guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project, and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on.

    It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It’s far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you’ve inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It’s often hard to justify this ‘debt paying’ time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor.

    There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor that one tracks a bug.

    What’s the best way to do this? Ideally, we’d have some kind of objective assessment of the level of technical debt in a software project, although that smacks of Science Fiction even as I write it. I’d be interested to hear of any methods you’ve used, but I’m sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager…

    Cheers, Tony.
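    Short of an objective measure, one low-tech way to record debt with bug-level rigor, purely a sketch rather than anything the editorial prescribes, is a plain register table:

        -- A hypothetical debt register; the point is disclosure, not the schema.
        CREATE TABLE dbo.TechnicalDebt
        (
            DebtId     int IDENTITY  PRIMARY KEY,
            Component  nvarchar(100) NOT NULL,   -- where the corner was cut
            Compromise nvarchar(max) NOT NULL,   -- what was cut, and why
            RecordedBy sysname       NOT NULL DEFAULT SUSER_SNAME(),
            RecordedOn datetime2     NOT NULL DEFAULT SYSDATETIME(),
            PaidOffOn  datetime2     NULL        -- set when the debt is repaid
        );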

    Read the article

  • Learn Many Languages

    - by Phil Factor
    Around twenty-five years ago, I was trying to solve the problem of recruiting suitable developers for a large business. I visited the local University (it was a Technical College then). My mission was to remind them that we were a large, local employer of technical people and to suggest that, as they were in the business of educating young people for a career in IT, we should work together. I anticipated a harmonious chat where we could suggest to them the idea of mentioning our name to some of their graduates. It didn’t go well. The academic staff displayed a degree of revulsion towards the whole topic of IT in the world of commerce that surprised me; tweed met charcoal-grey, trainers met black shoes. However, their antipathy to commerce was something we could have worked around, since few of their graduates were destined for a career as university lecturers.

    They asked me what sort of language skills we needed. I tried ducking the invidious task of naming computer languages, since I wanted recruits who were quick to adapt and learn, with a broad understanding of IT, including development methodologies, technologies, and data. However, they pressed the point and I ended up saying that we needed good working knowledge of C and BASIC, though FORTRAN and COBOL were, at the time, still useful. There was a ghastly silence. It was as if I’d recommended the beliefs and practices of the Bogomils of Bulgaria to a gathering of Cardinals. They stared at me severely, like owls, until the head of department broke the silence, informing me in clipped tones that they taught only Modula 2.

    Now, I wouldn’t blame you if at this point you hurriedly had to look up ‘Modula 2′ on Wikipedia. Based largely on Pascal, it was a specialist language for embedded systems, but I’ve never ever come across it in a commercial business application. Nevertheless, it was an excellent teaching language since it taught modules, scope control, multiprogramming and the advantages of encapsulating a set of related subprograms and data structures. As long as the course also taught how to transfer these skills to other, more useful languages, it was not necessarily a problem. I said as much, but they gleefully retorted that the biggest local employer, a defense contractor specializing in Radar and military technology, used nothing but Modula 2. “Why teach any other programming language when they will be using Modula 2 for all their working lives?” said a complacent lecturer. On hearing this, I made my excuses and left. There could be no meeting of minds. They were providing training in a specific computer language, not an education in IT.

    Twenty years later, I once more worked nearby and regularly passed the long-deserted ‘brownfield’ site of the erstwhile largest local employer; the end of the Cold War had led to lean times for defense contractors. A digger was about to clear the rubble of the long-demolished factory, along with the accompanying growth of buddleia and thistles, in order to lay the infrastructure for ‘affordable housing’. Modula 2 was a distant memory. Either those employees had short working lives or they’d retrained in other languages. The University, by contrast, was thriving, but I wondered if their erstwhile graduates had ever cursed the narrow specialization of their training in IT, as they struggled with the unexpected variety of their subsequent careers.

    Read the article

  • Modularity through HTTP

    - by Michael Williamson
    As programmers, we strive for modularity in the code we write. We hope that splitting the problem up makes it easier to solve, and allows us to reuse parts of our code in other applications. Object-orientation is the most obvious of many attempts to get us closer to this ideal, and yet one of the most successful approaches is almost accidental: the web.

    Programming languages provide us with functions and classes, and plenty of other ways to modularize our code. This allows us to take our large problem, split it into small parts, and solve those small parts without having to worry about the whole. It also makes it easier to reason about our code. So far, so good, but now that we’ve written our small, independent module, for example to send out e-mails to my customers, we’d like to reuse it in another application. By creating DLLs, JARs or our platform’s package container of choice, we can do just that – provided our new application is on the same platform. Want to use a Java library from C#? Well, good luck – it might be possible, but it’s not going to be smooth sailing.

    Even if a library exists, it doesn’t mean that using it is going to be a pleasant experience. Say I want to use Java to write out an XML document to an output stream. You’d imagine this would be a simple one-liner. You’d be wrong:

        import org.w3c.dom.*;
        import java.io.*;
        import javax.xml.transform.*;
        import javax.xml.transform.dom.*;
        import javax.xml.transform.stream.*;

        private static final void writeDoc(Document doc, OutputStream out) throws IOException {
            try {
                Transformer t = TransformerFactory.newInstance().newTransformer();
                t.setOutputProperty(OutputKeys.DOCTYPE_SYSTEM, doc.getDoctype().getSystemId());
                t.transform(new DOMSource(doc), new StreamResult(out));
            } catch (TransformerException e) {
                throw new AssertionError(e); // Can't happen!
            }
        }

    Most of the time, there is a good chance somebody else has written the code before, but if nobody can understand the interface to that code, nobody’s going to use it. The result is that most of the code we write is just a variation on a theme. Despite our best efforts, we’ve fallen a little short of our ideal, but the web brings us closer.

    If we want to send e-mails to our customers, we could write an e-mail-sending library. More likely, we’d use an existing one for our language. Even then, we probably wouldn’t have niceties like A/B testing or DKIM signing. Alternatively, we could just fire some HTTP requests at MailChimp, and get a whole slew of features without getting anywhere near the code that implements them. The web is inherently language agnostic. So long as your language can send and receive text over HTTP, and probably parse some JSON, you’re about as well equipped as anybody. Instead of building libraries for a specific language, we can build a service that almost every language can reuse.

    The text-based nature of HTTP also helps to limit the complexity of the API. As SOAP will attest, you can still make a horrible mess using HTTP, but at least it is an obvious horrible mess. Complex data structures are tedious to marshal to and from text, providing a strong incentive to keep things simple. By contrast, spotting the complexities in a class hierarchy is often not as easy.

    HTTP doesn’t solve every problem. It probably isn’t such a good idea to use it inside an inner loop that’s executed thousands of times per second. What’s more, the HTTP approach might introduce some new problems. We often need to add a thin shim to each application that we wish to communicate with over HTTP. For instance, we might need to write a small plugin in PHP if we want to integrate WordPress into our system. Suddenly, instead of a system written in one language, we’re maintaining a system with several distinct languages and platforms.

    Even then, we should strive to avoid re-implementing the same old thing. As programmers, we consistently underestimate both the cost of building a system and the ongoing maintenance. If we allow ourselves to integrate existing applications, even if they’re in unfamiliar languages, we save ourselves those development and maintenance costs, as well as being able to pick the best solution for our problem. Thanks to the web, HTTP is often the easiest way to get there.

    Read the article
