Search Results

Search found 5749 results on 230 pages for 'miles away'.


  • SSIS Reporting Pack v0.4 – Execution Report updated

    - by jamiet
    SSIS Reporting Pack is a suite of reports that I maintain at http://ssisreportingpack.codeplex.com/ that provide visualisation over the SSIS Catalog in SQL Server 2012 and attempt to add value over the reports that ship in the box. Work on the reports has stalled (my last SSIS Reporting Pack blog post was on 4th September 2011) as I’ve had rather more important things going on in my life of late; however, I have recently checked in a fix that couldn’t really be delayed. I discovered a problem with the Execution report that was causing the report to effectively hang; it was caused by this bit of SQL hidden away in the report definition:

        [generated_executables] AS (
            SELECT  [new_executable].[execution_path]
            ,       [new_executable].[parent_execution_path]
            FROM    (
                    SELECT  [execution_path] = SUBSTRING([loop_iteration].[execution_path], 1,
                                [loop_iteration].length_exec_path - [loop_iteration].[char_index_close_square] + 1)
                    ,       [parent_execution_path] = SUBSTRING([loop_iteration].[execution_path], 1,
                                [loop_iteration].length_exec_path - [loop_iteration].[char_index_open_square])
                    FROM    (
                            SELECT  [execution_path]
                            ,       [char_index_open_square] = CHARINDEX('[', REVERSE([execution_path]), 1)
                            ,       [char_index_close_square] = CHARINDEX(']', REVERSE([execution_path]), 1)
                            ,       [length_exec_path] = LEN([execution_path])
                            FROM    [exec_stats] es
                            WHERE   execution_path LIKE '%\[%]%'  ESCAPE '\'
                            ) AS [loop_iteration]
                    ) AS [new_executable]
            GROUP   BY [new_executable].[execution_path], [new_executable].[parent_execution_path]
        )

    It was there because SSIS does not currently treat a loop iteration as an executable, yet I figured there was still value in being able to view it as such – this SQL essentially “invents” new executables for those loop iterations; it’s what enabled the following visualisation, where each of the three iterations of a For Each Loop called “FEL Loop over top performing regions” appear in the report.

    Unfortunately, as I alluded to, this could under certain circumstances (most likely when there were many loop iterations) cause the report to hang as it waited for the results to be constructed and returned. The change that I have made eradicates this generation of “fake” executables and thus produces this visualisation instead: notice that the three “children” of the For Each Loop are no longer the three iterations but actually the task (“EPT Call Data Export Package”) contained within that For Each Loop. The problem here is of course that there is no longer a visual distinction between those three iterations; I have instead made the full execution path viewable via a tooltip.

    If you preferred the “old” way of presenting this information and are happy to put up with the performance degradation then I have kept the old version of the report hanging around in the reporting pack as “execution loop with iterations”; however, none of the other reports link to it so you will have to browse to it manually if you want to use it. Please let me know if you ARE using it – I would be very interested to hear about your experiences.

    The last change to make you aware of in the execution report is that by default I no longer show OnPreValidate or OnPostValidate messages as I consider them to be superfluous and only serve to clutter up the results. If you want to put them back, well, it’s open source so go right ahead!
The latest release of SSIS Reporting Pack that contains all of these changes is v0.4 and can be downloaded from http://ssisreportingpack.codeplex.com/releases/view/88178   Feedback on all of the above changes would be very much appreciated. @Jamiet

    Read the article

  • The Apple iPad – I’m gonna get it!

    - by Sahil Malik
    Ad:: SharePoint 2007 Training in .NET 3.5 technologies (more information). Well, heck, here comes another non-techie blogpost. You know I’m a geek, so I love gadgets! I found it RATHER interesting to see all the negative news on the blogosphere about the iPad. The main bitch points are - No Multi tasking No Flash Just a bigger iPhone. So here’s the deal! My view is, the above 3 are EXACTLY what I had personally hoped for in the Apple iPad. Before the release, I had gone on the record saying - “If the Apple Tablet is able to run full fledged iTunes (so I can get rid of iTunes on my desktop, I don’t like iTunes on Windows), can browse the net, can read PDFs, and will be under $1000, I’ll buy it”. Well, so, the released iPad wasn’t exactly like my dream tablet. The biggest downer IMO was it’s inability to run full iTunes. But, really, in retrospect, I like the newly released iPad. And here is why. No Multi tasking and No Flash, means much better battery life. Frankly, I rarely multi task on my laptop/desktop .. yeah I know my OS does .. but ME – I don’t multi task, and I don’t think you do either!! As I type this blogpost, I have a few windows running behind the scenes, but they are simply waiting for me to get back to them. The only thing truly running and I am making use of, other than this blogost, is media player playing some music – which the iPad can do. Also, I am logged into IM/Email – which again, iPad can do via notifications. It does the limited multitasking I need, without chewing down on batteries. Smart thinking, precisely the reason I love the iPhone. I don’t want a bulky battery consuming machine. Lack of flash? Okay sure, I can’t see Hulu on my iPad. That’s some loss. I can see youtube. Also, per Adobe I can’t see some porn sites, which I don’t want to see on my iPad. But, Flash is heavy. Especially flash video. My dream is to see silverlight run on the iPhone and iPad. No flash = not such a big loss. Speaking of battery life – 10 hours is plenty. I haven’t been away from electricity for that long usually, so I’m okay with charging it up when it runs low. It’s really not such a big deal honestly. Finally, eBook functionality – wow! I went on the record saying, eBook readers are not for me, but seriously, the iPad is perfect for my eBook needs at least. And as far it being just a bigger iPhone? I’ve always wanted a bigger iPhone, precisely for the eBook reading experience. I love my iPhone, I love the apps on it. The only thing that sucks about the iPhone is battery life, but other than that, it is the best gadget I have ever bought! And something that runs on mobile chips, is that thin, and those newly written apps .. mail, calendar .. I am very very excited to get my iPad, which will be the 64gig 3G version. The biggest plus in an iPad ……… no contract on data. I am *hoping*, this means that I can buy a SIM card in Europe, and use the iPad here. That would be killer awesome! But hey, if I had to pick downers in the iPad, they would be - - I wish they had a 128G Version. Now that we have a good video viewing machine, I know I’d chew up space quickly.- Sync over WIFI, seriously Apple.  Both for iPhone and iPad.- 3 month wait!!- Existing iPhone users should get a discount on the iPad data plan. Comment on the article ....

    Read the article

  • Microsoft Forcing Dev/Partners Hands on Win 8 Through Certification

    - by D'Arcy Lussier
    I remember 2.5 years ago when Microsoft dropped a bomb on the Microsoft Partner community: all Gold competencies would require .NET 4 based premiere certifications (MCPD). Problem was, this gave a window of about 6 months for partners to update their employees’ certifications. At the place I was working, I put together an aggressive plan and we were able to attain the certs needed. Microsoft is always open that the certification requirements will change as the industry changes. .NET 1.0 certifications are useless here in 2012, and rightfully so they’ve been retired for a long time now. But now we’re seeing a new tactic by Microsoft – shifting gears away from certifications that speak to what industry needs and more to the Windows 8 agenda. Consider that currently the premiere development certification is the Microsoft Certified Professional Developer, which comes in three flavours – Web, Windows, and Azure. All require WCF and Data Access exams, as well as one that deals with the associated base technologies (ASP.NET, WinForms/WPF, Azure), and one that ties all three together in a solution-based exam. For Microsoft-based organizations, these skills aren’t just valid but necessary in building Microsoft applications. But the MCPD is being replaced with our old friend Microsoft Certified Solutions Developer (MCSD). So far, Microsoft has only released two types of MCSD – Web and Windows Store Apps. Windows Store Apps?! In a push to move developers to create WinRT-based applications, desktop development is now considered a second-class citizen in the eyes of Redmond. Also interesting are the language options for the exams: HTML5 and C#. Sorry VB folks, its time to embrace curly braces whether they be JavaScript or C#. Consider too the skills being assessed for the Windows Store Apps: Get your MCSD: Windows Store Apps Using HTML5 Get your MCSD: Windows Store Apps Using C# *Image Source: http://www.microsoft.com/learning/en/us/certification/mcsd-windows-store-apps.aspx Nov 21/2012 If you look at the skills being tested in each exam, you’ll find that skills like WCF and Data Access are downplayed compared to things like integrating Charms, facilitating Search, programming for the microphone and camera – all very Windows 8 focussed items. Where this becomes maddening is that Microsoft is still pushing Windows 7 with enterprise clients. According to a ZDNet article, Microsoft wants to see Windows 7 on 70% of enterprise desktops by mid 2013. Assuming they somehow meet that (its a pretty lofty goal), there’s years of traditional desktop-based development that will still be required at some level. For those thinking they’ll just write and stick with the MCPD certification, note that most exams that go towards that certification will be retired at the end of July 2013! (Read the small print). And while details haven’t been finalized, its a safe bet that MCPD certifications eventually won’t count towards Gold-level competencies in the Microsoft Partner program. What this means for Microsoft Partners and Developers is that certification for desktop development is going to be limited to Windows Store Apps unless Microsoft re-introduces a traditional desktop (WPF) based MCSD cert. Web Application Development – It’s Not All Bad There’s big changes on the web side of certification, but I actually see these changes as being for the good! 
Check out the new exam requirements for MCSD – Web Applications: Get your MCSD: Web Applications certification *Image Source: http://www.microsoft.com/learning/en/us/certification/cert-mcsd-web-applications.aspx Nov 21, 2012 We now *start* with HTML5, JavaScript, and CSS3! Now I’m sure that these will be slanted towards web development in IE, and I can hear designers everywhere bemoaning the CSS/IE combination. Still, I applaud Microsoft for adopting HTML5 as the go-to web technology and requiring certified developers to prove they have skills in the basics of web dev. The fact that the second exam clearly states “MVC Web Applications” shows that Web Forms is truly legacy and deprecated. That’s not to say there aren’t those out there that are still supporting or (for whatever reason) doing new dev with Web Forms, but this move by Microsoft is telling the community they better get on the MVC bandwagon if they want to stay current. Fantastic! And of course Azure needs to be here as well, and this is where the Microsoft agenda fits in. It’s no secret that there’s been a huge push in getting developers on to Azure. I don’t see this as being a bad thing either, as cloud computing (whether Azure, private, or 3rd party) is a necessary skill for developers to have here in 2012. The cynic in me realizes that the HTML5/JavaScript/CSS push wouldn’t be as prominent though if not for the Windows 8 Store App play, where HTML5 is a first class citizen (and an available language for the MCSD Windows Store App cert). In this case, the desktop developers loss is the web developers gain. Get Ready for Changes In addition to the changes in certifications, the Microsoft Partner competencies are going through changes as well. Web and Software Development are being merged into a single competency, meaning that licenses you would have received from having both as Gold are reduced. Other competencies are either being removed or changed, as are the exam requirements. In the same way that we’re seeing faster release cycles from Microsoft, so too will we see the Microsoft Partner Program and MS Certifications evolve faster than ever before. Many of us got caught in the last wave of changes, but this time we can see the wave coming – and it looks pretty big!

    Read the article

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex, systems used by customers wanting extreme database performance, app performance, and cost saving server consolidation. A SPARC SuperCluster consists or 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra fast Infiniband switches.  Each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the Infiniband switches, etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful.  In another, complex. The system is highly Engineered.  But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience as what the customer runs leverages our technical know-how and best practices and is what we've tested intensely within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components.  For example, there's myriad network protocols which could be used with Infiniband.  Myriad ways the components could be interconnected, myriad tunable settings, etc. But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system ?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here's a couple of examples from the last month, one found in-house by my team, one found by a customer: Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up).  We quickly established that an enhancement in SRU12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously.  Under abnormally heavy load, the query could return results which were misinterpreted resulting in node eviction.  
In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root caused, evaluated, and a fix agreed upon.  That fix went back into all Solaris releases the following Monday.  From initial issue discovery to the fix being put back into all Solaris releases was just 10 days. Example 2: A customer reported sporadic performance degradation.  The reasons were unclear and the information sparse.  The SPARC SuperCluster Engineered Systems support teams which comprises both SPARC/Solaris and Database/Exadata experts worked to root cause the issue.  A number of contributing factors were discovered, including tunable parameters.  An intense collaborative investigation between the engineering teams identified the root cause to a CPU bound networking thread which was being starved of CPU cycles under extreme load.  Workarounds were identified.  Modifications have been put back into 11gR2 to alleviate the issue and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side.  The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented.  The customer is now extremely happy with performance and robustness.  Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers.  This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.  If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue.  But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer.  That is a key advantage of Engineered Systems which should not be underestimated.  Indeed, the initial issue mitigation on the Database side followed by final fix on the Solaris side, highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams.  It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • To My 24 Year Old Self, Wherever You Are…

    - by D'Arcy Lussier
    A decade is a milestone in one’s life, regardless of when it occurs. 2011 might seem like a weird year to mark a decade, but 2001 was a defining year for me. It marked my emergence into the technology industry, an unexpected loss of innocence, and triggered an ongoing struggle with faith and belief. Once you go through a valley, climbing the mountain and looking back over where you travelled, you can take in the entirety of the journey. Over the last 10 years I kept journals, and in this new year I took some time to review them. For those today that are me a decade ago, I share with you what I’ve gleamed from my experiences. Take it for what it’s worth, and safe travels on your own journeys through life. Life is a Performance-Based Sport Have confidence, believe you’re capable, but realize that life is a performance-based sport. Everything you get in life is based on whether you can show that you deserve it. Performance is also your best defense against personal attacks. Just make sure you know what standards you’re expected to hit and if people want to poke holes at you let them do the work of trying to find them. Sometimes performance won’t matter though. Good things will happen to bad people, and bad things to good people. What’s important is that you do the right things and ensure the good and bad even out in your own life. How you finish is just as important as how you start. Start strong, end strong. Respect is Your Most Prized Reward Respect is more important than status or ego. The formula is simple: Performing Well + Building Trust + Showing Dedication = Respect Focus on perfecting your craft and helping your team and respect will come. Life is a Team Sport Whatever aspect of your life, you can’t do it alone. You need to rely on the people around you and ensure you’re a positive aspect of their lives; even those that may be difficult or unpleasant. Avoid criticism and instead find ways to help colleagues and superiors better whatever environment you’re in (work, home, etc.). Don’t just highlight gaps and issues, but also come to the table with solutions. At the same time though, stand up for yourself and hold others accountable for the commitments they make to the team. A healthy team needs accountability. Give feedback early and often, and make it verbal. Issues should be dealt with immediately, and positives should be celebrated as they happen. Life is a Contact Sport Difficult moments will happen. Don’t run from them or shield yourself from experiencing them. Embrace them. They will further mold you and reveal who you will become. Find Your Tribe and Embrace Your Community We all need a tribe: a group of people that we gravitate to for support, guidance, wisdom, and friendship. Discover your tribe and immerse yourself in them. Don’t look for a non-existent tribe just to fill the need of belonging though that will leave you empty and bitter when they don’t meet your unrealistic expectations. Try to associate with people more experienced and more knowledgeable than you. You’ll always learn, and you’ll always remember you have much to learn. Put yourself out there, get involved with the community. Opportunities will present themselves. When we open ourselves up to be vulnerable, we also give others the chance to do the same. This helps us all to grow and help each other, it’s very important. And listen to your wife. (Easter *is* a romantic holiday btw, regardless of what you may think.) 
Don’t Believe Your Own Press Clippings (and by that I mean the ones you write) Until you have a track record of performance to refer to, any notions of grandeur are just that: notions. You lose your rookie status through trials and tribulations, not by the number of stamps in your passport. Be realistic about your own “experience and leadership” and be honest when you aren’t ready for something. And always remember: nobody really cares about you as much as you think they do. Don’t Let Assholes Get You Down The world isn’t evil, but there is evil in the world. Know the difference and don’t paint all people with the same brush. Do be wary of those that use personal beliefs to describe their business (i.e. “We’re a [religion] company”). What matters is the culture of the organization, and that will tell you the moral compass and what is truly valued. Don’t make someone or something a priority that only makes you an option. Life is unfair and enemies/opponents will succeed when you fail. Don’t waste your energy getting upset at this; the only one that will lose out is you. As mentioned earlier, nobody really cares about you as much as you think they do. Misc Ecclesiastes is bullshit. Everything is certainly *not* meaningless. Software development is about delivery, not the process. Having a great process means nothing if you don’t produce anything. Watch “The Weatherman” (“It’s not easy, but easy doesn’t enter into grownup life.”). Read Tony Dungee’s autobiography, even if you don’t like football, and even if you aren’t a Christian. Say no, don’t feel like you have to commit right away when someone asks you to.

    Read the article

  • Ransomware: Why This New Malware is So Dangerous and How to Protect Yourself

    - by Chris Hoffman
    Ransomware is a type of malware that tries to extort money from you. One of the nastiest examples, CryptoLocker, takes your files hostage and holds them for ransom, forcing you to pay hundreds of dollars to regain access. Most malware is no longer created by bored teenagers looking to cause some chaos. Much of the current malware is now produced by organized crime for profit and is becoming increasingly sophisticated. How Ransomware Works Not all ransomware is identical. The key thing that makes a piece of malware “ransomware” is that it attempts to extort a direct payment from you. Some ransomware may be disguised. It may function as “scareware,” displaying a pop-up that says something like “Your computer is infected, purchase this product to fix the infection” or “Your computer has been used to download illegal files, pay a fine to continue using your computer.” In other situations, ransomware may be more up-front. It may hook deep into your system, displaying a message saying that it will only go away when you pay money to the ransomware’s creators. This type of malware could be bypassed via malware removal tools or just by reinstalling Windows. Unfortunately, Ransomware is becoming more and more sophisticated. One of the latest examples, CryptoLocker, starts encrypting your personal files as soon as it gains access to your system, preventing access to the files without knowing the encryption key. CryptoLocker then displays a message informing you that your files have been locked with encryption and that you have just a few days to pay up. If you pay them $300, they’ll hand you the encryption key and you can recover your files. CryptoLocker helpfully walks you through choosing a payment method and, after paying, the criminals seem to actually give you a key that you can use to restore your files. You can never be sure that the criminals will keep their end of the deal, of course. It’s not a good idea to pay up when you’re extorted by criminals. On the other hand, businesses that lose their only copy of business-critical data may be tempted to take the risk — and it’s hard to blame them. Protecting Your Files From Ransomware This type of malware is another good example of why backups are essential. You should regularly back up files to an external hard drive or a remote file storage server. If all your copies of your files are on your computer, malware that infects your computer could encrypt them all and restrict access — or even delete them entirely. When backing up files, be sure to back up your personal files to a location where they can’t be written to or erased. For example, place them on a removable hard drive or upload them to a remote backup service like CrashPlan that would allow you to revert to previous versions of files. Don’t just store your backups on an internal hard drive or network share you have write access to. The ransomware could encrypt the files on your connected backup drive or on your network share if you have full write access. Frequent backups are also important. You wouldn’t want to lose a week’s worth of work because you only back up your files every week. This is part of the reason why automated back-up solutions are so convenient. If your files do become locked by ransomware and you don’t have the appropriate backups, you can try recovering them with ShadowExplorer. This tool accesses “Shadow Copies,” which Windows uses for System Restore — they will often contain some personal files. 
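    To make the “separate, versioned copies” idea concrete, here is a minimal sketch (an illustration rather than a recommended tool; the paths are made up) that copies a folder into a new timestamped directory on a separate drive each time it runs, so a later, possibly-encrypted copy never overwrites an earlier good one:

        #include <ctime>
        #include <filesystem>
        #include <iostream>
        #include <system_error>

        namespace fs = std::filesystem;

        int main() {
            // Made-up example paths: the destination should be a drive or share that
            // the day-to-day user account cannot overwrite or delete.
            const fs::path source = "C:/Users/Example/Documents";
            const fs::path backup_root = "E:/Backups";

            // Build a timestamped folder name, e.g. E:/Backups/2013-11-05_183000
            std::time_t now = std::time(nullptr);
            char stamp[32];
            std::strftime(stamp, sizeof stamp, "%Y-%m-%d_%H%M%S", std::localtime(&now));
            const fs::path target = backup_root / stamp;

            std::error_code ec;
            fs::create_directories(target, ec);
            if (!ec) {
                // Recursive copy: every run lands in its own dated folder.
                fs::copy(source, target, fs::copy_options::recursive, ec);
            }
            if (ec) {
                std::cerr << "Backup failed: " << ec.message() << '\n';
                return 1;
            }
            std::cout << "Backed up to " << target << '\n';
            return 0;
        }

    For most people a dedicated backup service is the more practical option; the point is simply that each backup lands in its own folder, in a location the infected account should not be able to modify.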
How to Avoid Ransomware Aside from using a proper backup strategy, you can avoid ransomware in the same way you avoid other forms of malware. CryptoLocker has been verified to arrive through email attachments, via the Java plug-in, and installed on computers that are part of the Zeus botnet. Use a good antivirus product that will attempt to stop ransomware in its tracks. Antivirus programs are never perfect and you could be infected even if you run one, but it’s an important layer of defense. Avoid running suspicious files. Ransomware can arrive in .exe files attached to emails, from illicit websites containing pirated software, or anywhere else that malware comes from. Be alert and exercise caution over the files you download and run. Keep your software updated. Using an old version of your web browser, operating system, or a browser plugin can allow malware in through open security holes. If you have Java installed, you should probably uninstall it. For more tips, read our list of important security practices you should be following. Ransomware — CryptoLocker in particular — is brutally efficient and smart. It just wants to get down to business and take your money. Holding your files hostage is an effective way to prevent removal by antivirus programs after it’s taken root, but CryptoLocker is much less scary if you have good backups. This sort of malware demonstrates the importance of backups as well as proper security practices. Unfortunately, CryptoLocker is probably a sign of things to come — it’s the kind of malware we’ll likely be seeing more of in the future.     

    Read the article

  • Agile Testing Days 2012 – My First Conference!

    - by Chris George
    I’d like to give you a bit of background first… so please bear with me! In 1996, whilst studying for the final year of my degree, I applied for a job as a C++ Developer at a small software house in Hertfordshire. After bodging up the technical part of the interview I didn’t get the job, but was offered a position as a QA Engineer instead. The role sounded intriguing and the pay was pretty good so in the absence of anything else I took it. Here began my career in the world of software testing!

    Back then, testing/QA was often an afterthought, something that was bolted on to the development process and very much a second class citizen. Test automation was rare, and tools were basic or non-existent! The internet was just starting to take off, and whilst there might have been testing communities and resources, we were certainly not exposed to any of them. After 8 years I moved to another small company, and again didn’t find myself exposed to any of the changes that were happening in the industry. It wasn’t until I joined Red Gate in 2008 that my view of testing and software development as a whole started to expand. But it took a further 4 years for my view of testing to be totally blown open, and so the story really begins…

    In May 2012 I was fortunate to land the role of Head of Test Engineering. Soon after, I received an email with details for the “Agile Testing Days” conference in Potsdam, Germany. I looked over the suggested programme and some of the talks piqued my interest. For numerous reasons I’d shied away from attending conferences in the past, one of the main ones being that I didn’t see much benefit in attending loads of talks when I could just read about stuff like that on the internet. However, in my new role, I decided that it was time to bite the bullet and at least go to one conference. Perhaps I could get some new ideas to supplement and support some of the ideas I already had. So, on the 18th November 2012, myself and three other Red Gaters boarded a plane at Heathrow bound for Potsdam, Germany to attend Agile Testing Days 2012.

    Tutorial Day – “Software Testing Reloaded”

    We chose to do the tutorials on the 19th, and I chose the one titled “Software Testing Reloaded – So you wanna actually DO something? We’ve got just the workshop for you. Now with even less powerpoint!”. With such a concise and serious title I just had to see what it was about! I nervously entered the room to be greeted by tables, chairs etc all over the place, not set out and frankly in one hell of a mess! There were a few people in there playing a game with dice. Okaaaay… this is going to be a long day! Actually the dice game was an exercise in deduction and simplification… I found it very interesting and it is certainly something I’ll be using at work as a training exercise! (I won’t explain the game here cause I don’t want to let the cat out of the bag…)

    The tutorial consisted of several games, exploring different aspects of testing. They were all practical yet required a fair amount of thinking. Matt Heusser and Pete Walen were running the tutorial, and presented it in a very relaxed and light-hearted manner. It was really my first experience of working in small teams with testers from very different backgrounds, and it was really enjoyable. Matt & Pete were very approachable and offered advice where required whilst still making you work for the answers! One of the tasks was to devise several strategies for testing some electronic dice.

    The premise was that a Vegas casino wanted to use the dice to appeal to the twenty-somethings interested in tech, but needed assurance that they were as reliable and random as traditional dice. This was a very interesting and challenging exercise that forced us to challenge various assumptions and determine/clarify requirements, but most of all it was frustrating because the dice made a very, very irritating beeping noise. Multiply that by at least 12 dice and I was dreaming about them all that night!! Some of the main takeaways that were brilliantly demonstrated through the games were not to make assumptions, challenge requirements, and have fun testing! The tutorial lasted the whole day, but to be honest the day went very quickly! My introduction into the conference experience started very well indeed, and I would talk to both Matt and Pete several times during the 4 days. Days 1, 2 & 3 will be coming soon…

    Read the article

  • Microsoft BUILD 2013 Day 1–Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx This one is going to be a little long because the keynote was jam-packed so bare with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Balmer.  He made it very clear that Microsoft’s focus is on accelerating its time to market with products and product updates.  His quote was that “Rapid release” is the new norm.  He continued by showing off several new Lumias that have been buzzing around the internet for a while and announce that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Balmer is known for repeating words or phrase for affect.  This time it was “Rapid release, rapid release” and “Touch, touch, touch, touch, touch, …”.  This was fun, but even more fun was when he announce that all attendees would receive an Acer Iconia 8” tablet. SCORE! The next subject Balmer focused on is new apps.  The three new ones were Flipboard, Facebook and NFL Fantasy Football.  I liked the first two because these are ones that people coming from other platforms are missing.  The NFL app is great just because it targets a demographic that can be fanatical.  If these types of apps keep coming than the missing app argument goes away. While many Negative Nancy’s are describing Windows 8.1 as Windows 180 Steve Balmer chose to call it a “refined blend” as in a coffee that has been improved with a new mix.  This includes more multi-tasking options and leveraging Bing straight throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community. Steve Balmer was followed by Julie Larson-Green who spent her time on stage selling us on Windows 8 all over again from my point of view.  Something that I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints.  She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience.  I guess only time will tell. I did like the fact that it the UI implementation to bring up “All Apps” now mirrors that of Windows Phone.  The consistency is a big step forward that I hope to see continue.  The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the XBox One.  This seamless experience I believe is what is really needed for any future platform to be relevant. I was much more enthused by the presentation of Antoine Leblond who humbled us by letting us know that there are 5k new API.  How that can be or how anyone would ever use all of them is another question.  His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits.  One of the features of VS2013 that he demonstrated is the power consumption profiler.  With battery life being a key factor with consumer consumption devices this is a welcome addition. He didn’t limit his presentation to VS2013 features though.  He showed how the Store has been redesigned to enable better search and discoverability of apps and how Win 8.1 can perform multiple screen scales depending on the resolution of the device automatically.  The last feature he demoed was the real time video streaming API which he made sure we understood by attaching a Surface to a little robot.  Oh, but there was one more thing.  
Antoine and Julie announce that all attendees would also be getting Surface Pros.  BONUS! How much more could there be?  Gurdeep Singh Pall was about to pile on.  He introduced us to Bing as a platform (BaaP?).  He said if they (Microsoft) could do something with and API that is good 3rd party developers can do something that is dynamite and showed us some of the tools they had produced.  These included natural user interface improvements such as voice commands that looked to put Siri to shame.  Add to that 3D, OCR and translation capabilities and the future looks to be full of opportunities. Balmer then came out to show us one last thing.  Project Spark is a game design environment that will be available for Windows 8.1, XBox 360 and XBox One.  All I can say is that if my kids get their hands on this they are going to be able to learn some of what dad does in a much more enjoyable way. At the end of it all I was both exhausted and energized by what I saw.  What could they have possibly left for the day 2 keynote?  I hear it will feature Scott Hanselman.  If that is right we are in for a treat.  See you there. del.icio.us Tags: BUILD 2013,Windows 8.1,Winodws Phone,XAML,Keynote,Bing,Visual Studio 2013,Project Spark

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me! Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of pop email addresses and even an exchange address to my client's server... times have changed. What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have much more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept, but yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond and then the jumk email stop, but that hasn't happened yet anymore than finding high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their car... but that's another whole topic and I digress. So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money? Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter huh.... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble and here's what I found: I installed it, and had to do a couple things: 1) SPAMfighter stuffed the SPAMfighter folder into my client's exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter Clients settings for Outlook, I told it to put all spam there. 2) It didn't seem to be doing anything. There's a ribbon button that you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates 3) I sent email to support, and wrote a post on the forum (not to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opend... two thumbs up! 
I disabled my 'garbage collection' rule from Outlook, and told Outlook not to use the junk folder thinking it was interfering. 4) Day 2 seemed to go about like Day 1... but I hung in there. 5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!! I'm a new paying customer After watching SPAMfighter work this morning, I've purchased a 1-year license, and I now can sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes this is simply another way of using the delete key, but you know what? ... it feels good :) Here's a screenshot of the stats after just about 48 hours of being onboard: Note that all the ones blocked by me were during Day 1 and 2... I've blocked none today, and everything is blocked. Stay in the 'Light!

    Read the article

  • Level of detail algorithm not functioning correctly

    - by Darestium
    I have been working on this problem for months; I have been creating a Planet Generator of sorts, and after more than 6 months of work I am no closer to finishing it than I was 4 months ago. My problem: the terrain does not subdivide in the correct locations properly; it almost seems as if there is a ghost camera next to me, and the quads subdivide based on the position of this "ghost camera". Here is a video of the broken program: http://www.youtube.com/watch?v=NF_pHeMOju8 The best example of the problem occurs around 0:36.

    For detail limiting, I am going for a chunked LOD approach, which subdivides the terrain based on how far you are away from it. I use a "depth table" to determine how many subdivisions should take place.

        void PQuad::construct_depth_table(float distance)
        {
            tree[0] = -1;
            for (int i = 1; i < MAX_DEPTH; i++)
            {
                tree[i] = distance;
                distance /= 2.0f;
            }
        }

    The chunked LOD relies on the child/parent structure of quads; the depth is determined by a constant, e.g. if the constant is 6, there are six levels of detail. The quads which should be drawn go through a distance test from the player to the centre of the quad.

        void PQuad::get_recursive(glm::vec3 player_pos, std::vector<PQuad*>& out_children)
        {
            for (size_t i = 0; i < children.size(); i++)
            {
                children[i].get_recursive(player_pos, out_children);
            }
            if (this->should_draw(player_pos) || this->depth == 0)
            {
                out_children.emplace_back(this);
            }
        }

        bool PQuad::should_draw(glm::vec3 player_position)
        {
            float distance = distance3(player_position, centre);
            if (distance < tree[depth])
            {
                return true;
            }
            return false;
        }

    The root quad has four children which could be visualized like the following:

        [] []
        [] []

    Where each [] is a child. Each child has the same amount of children up until the detail limit; the quads which are 6 iterations deep are leaf nodes, and these nodes have no children. Each node has a corresponding Mesh, each Mesh structure has 16x16 Quad-shapes, and each Mesh's Quad-shapes halve in size each detail level deeper - creating more detail.

        void PQuad::construct_children()
        {
            // Calculate the position of the Quad based on the parent's location
            calculate_position();

            if (depth < (int)MAX_DEPTH)
            {
                children.reserve((int)NUM_OF_CHILDREN);
                for (int i = 0; i < (int)NUM_OF_CHILDREN; i++)
                {
                    children.emplace_back(PQuad(this->face_direction, this->radius));
                    PQuad *child = &children.back();
                    child->set_depth(depth + 1);
                    child->set_child_index(i);
                    child->set_parent(this);
                    child->construct_children();
                }
            }
            else
            {
                leaf = true;
            }
        }

    The following function creates the vertices for each quad; I feel that it may play a role in the problem - I just can't determine what is causing it.
        void PQuad::construct_vertices(std::vector<glm::vec3> *vertices, std::vector<Color3> *colors)
        {
            vertices->reserve(quad_width * quad_height);
            for (int y = 0; y < quad_height; y++)
            {
                for (int x = 0; x < quad_width; x++)
                {
                    switch (face_direction)
                    {
                        case YIncreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, quad_height - 1.0f, -(position.y + y * element_width)));
                            break;
                        case YDecreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, 0.0f, -(position.y + y * element_width)));
                            break;
                        case XIncreasing:
                            vertices->emplace_back(glm::vec3(quad_width - 1.0f, position.y + y * element_width, -(position.x + x * element_width)));
                            break;
                        case XDecreasing:
                            vertices->emplace_back(glm::vec3(0.0f, position.y + y * element_width, -(position.x + x * element_width)));
                            break;
                        case ZIncreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, 0.0f));
                            break;
                        case ZDecreasing:
                            vertices->emplace_back(glm::vec3(position.x + x * element_width, position.y + y * element_width, -(quad_width - 1.0f)));
                            break;
                    }

                    // Position the bottom, right, front vertex of the cube from being (0,0,0) to (-16, -16, 16)
                    (*vertices)[vertices->size() - 1] -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));

                    colors->emplace_back(Color3(255.0f, 255.0f, 255.0f, false));
                }
            }

            switch (face_direction)
            {
                case YIncreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, quad_height - 1.0f, -(position.y + quad_height / 2.0f));
                    break;
                case YDecreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, 0.0f, -(position.y + quad_height / 2.0f));
                    break;
                case XIncreasing:
                    this->centre = glm::vec3(quad_width - 1.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                    break;
                case XDecreasing:
                    this->centre = glm::vec3(0.0f, position.y + quad_height / 2.0f, -(position.x + quad_width / 2.0f));
                    break;
                case ZIncreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, 0.0f);
                    break;
                case ZDecreasing:
                    this->centre = glm::vec3(position.x + quad_width / 2.0f, position.y + quad_height / 2.0f, -(quad_height - 1.0f));
                    break;
            }

            this->centre -= glm::vec3(quad_width / 2.0f, quad_width / 2.0f, -(quad_width / 2.0f));
        }

    Any help in discovering what is causing this "subdividing in the wrong place" would be greatly appreciated.
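    For comparison while debugging, here is a minimal, self-contained sketch of the same distance-versus-depth-table decision (the names, sizes, and camera position are made up, not taken from the project above), where the chunk centre used for the subdivision test is derived from exactly the same origin and size values the mesh would be built from. With a "subdivides in the wrong place" symptom, printing these centres and confirming they match the world-space positions actually being rendered is a reasonable first check.

        #include <array>
        #include <cstdio>

        // Hypothetical, simplified chunked-LOD node: a square chunk on the XZ plane.
        struct Vec3 { float x, y, z; };

        static float dist2(const Vec3 &a, const Vec3 &b) {
            const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return dx * dx + dy * dy + dz * dz;
        }

        constexpr int MAX_DEPTH = 6;

        // Split thresholds per depth, halving each level (same idea as the depth table above).
        static std::array<float, MAX_DEPTH> make_depth_table(float base_distance) {
            std::array<float, MAX_DEPTH> table{};
            for (int i = 0; i < MAX_DEPTH; ++i) {
                table[i] = base_distance;
                base_distance *= 0.5f;
            }
            return table;
        }

        struct Chunk {
            Vec3 origin;   // world-space minimum corner of the chunk
            float size;    // world-space edge length
            int depth;

            // The centre used for the LOD test is derived from the *same* origin/size
            // the mesh is built from, so the test and the rendered geometry agree.
            Vec3 centre() const {
                return { origin.x + size * 0.5f, origin.y, origin.z + size * 0.5f };
            }

            bool should_split(const Vec3 &camera, const std::array<float, MAX_DEPTH> &table) const {
                if (depth >= MAX_DEPTH - 1) return false;   // leaf level, never split
                const float threshold = table[depth];
                return dist2(camera, centre()) < threshold * threshold;
            }
        };

        int main() {
            const std::array<float, MAX_DEPTH> table = make_depth_table(1024.0f);
            const Vec3 camera{100.0f, 50.0f, 100.0f};       // made-up camera position

            Chunk root{{0.0f, 0.0f, 0.0f}, 2048.0f, 0};
            std::printf("root centre = (%.1f, %.1f, %.1f), split = %d\n",
                        root.centre().x, root.centre().y, root.centre().z,
                        (int)root.should_split(camera, table));
            return 0;
        }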

    Read the article

  • Solaris 11.2: Functional Deprecation

    - by alanc
    In Solaris 11.1, I updated the system headers to enable use of several attributes on functions, including noreturn and printf format, to give compilers and static analyzers more information about how they are used to give better warnings when building code. In Solaris 11.2, I've gone back in and added one more attribute to a number of functions in the system headers: __attribute__((__deprecated__)). This is used to warn people building software that they’re using function calls we recommend no longer be used. While in many cases the Solaris Binary Compatibility Guarantee means we won't ever remove these functions from the system libraries, we still want to discourage their use. I made passes through both the POSIX and C standards, and some of the Solaris architecture review cases to come up with an initial list which the Solaris architecture review committee accepted to start with. This set is by no means a complete list of Obsolete function interfaces, but should be a reasonable start at functions that are well documented as deprecated and seem useful to warn developers away from. More functions may be flagged in the future as they get deprecated, or if further passes are made through our existing deprecated functions to flag more of them.

        <door.h>: door_cred(3C)
            Deprecated by: PSARC/2002/188
            Alternative:   door_ucred(3C)
            Documented in: door_cred(3C)

        <kvm.h>: kvm_read(3KVM), kvm_write(3KVM)
            Deprecated by: PSARC/1995/186
            Alternative:   Functions on kvm_kread(3KVM) man page
            Documented in: kvm_read(3KVM)

        <stdio.h>: gets(3C)
            Deprecated by: ISO C99 TC3 (Removed in ISO C11), POSIX:2008/XPG7/Unix08
            Alternative:   fgets(3C)
            Documented in: gets(3C) man page, and just about every gets(3C) reference online from the past 25 years, since the Morris worm proved bad things happen when it’s used.

        <unistd.h>: vfork(2)
            Deprecated by: PSARC/2004/760, POSIX:2001/XPG6/Unix03 (Removed in POSIX:2008/XPG7/Unix08)
            Alternative:   posix_spawn(3C)
            Documented in: vfork(2) man page.

        <utmp.h>: All functions from getutent(3C) man page
            Deprecated by: PSARC/1999/103
            Alternative:   utmpx functions from getutentx(3C) man page
            Documented in: getutent(3C) man page

        <varargs.h>: varargs.h version of va_list typedef
            Deprecated by: ANSI/ISO C89 standard
            Alternative:   <stdarg.h>
            Documented in: varargs(3EXT)

        <volmgt.h>: All functions
            Deprecated by: PSARC/2005/672
            Alternative:   hal(5) API
            Documented in: volmgt_check(3VOLMGT), etc.

        <sys/nvpair.h>: nvlist_add_boolean(3NVPAIR), nvlist_lookup_boolean(3NVPAIR)
            Deprecated by: PSARC/2003/587
            Alternative:   nvlist_add_boolean_value, nvlist_lookup_boolean_value
            Documented in: nvlist_add_boolean(3NVPAIR) & (9F), nvlist_lookup_boolean(3NVPAIR) & (9F).

        <sys/processor.h>: gethomelgroup(3C)
            Deprecated by: PSARC/2003/034
            Alternative:   lgrp_home(3LGRP)
            Documented in: gethomelgroup(3C)

        <sys/stat_impl.h>: _fxstat, _xstat, _lxstat, _xmknod
            Deprecated by: PSARC/2009/657
            Alternative:   stat(2)
            Documented in: old functions are undocumented remains of SVR3/COFF compatibility support

    If the above table is cut off when viewing in the blog, try viewing this standalone copy of the table.

    To See or Not To See

    To see these warnings, you will need to be building with either gcc (versions 3.4, 4.5, 4.7, & 4.8 are available in the 11.2 package repo), or with Oracle Solaris Studio 12.4 or later (which like Solaris 11.2, is currently in beta testing).
    For instance, take this oversimplified (and obviously buggy) implementation of the cat command:

        #include <stdio.h>

        int main(int argc, char **argv)
        {
            char buf[80];
            while (gets(buf) != NULL)
                puts(buf);
            return 0;
        }

    Compiling it with the Studio 12.4 beta compiler will produce warnings such as:

        % cc -V
        cc: Sun C 5.13 SunOS_i386 Beta 2014/03/11
        % cc gets_test.c
        "gets_test.c", line 6: warning: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221

    The exact warning given varies by compiler, and the compilers also have a variety of flags to either raise the warnings to errors, or silence them. Of course, the exact form of the output is Not An Interface that can be relied on for automated parsing, just shown for example.

    gets(3C) is actually a special case — as noted above, it is no longer part of the C Standard Library in the C11 standard, so when compiling in C11 mode (i.e. when __STDC_VERSION__ >= 201112L), the <stdio.h> header will not provide a prototype for it, causing the compiler to complain it is unknown:

        % gcc -std=c11 gets_test.c
        gets_test.c: In function ‘main’:
        gets_test.c:6:5: warning: implicit declaration of function ‘gets’ [-Wimplicit-function-declaration]
             while (gets(buf) != NULL)
             ^

    The gets(3C) function of course is still in libc, so if you ignore the error or provide your own prototype, you can still build code that calls it, you just have to acknowledge you’re taking on the risk of doing so yourself.

    Solaris Studio 12.4 Beta

        % cc gets_test.c
        "gets_test.c", line 6: warning: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221
        % cc -errwarn=E_DEPRECATED_ATT gets_test.c
        "gets_test.c", line 6: "gets" is deprecated, declared in : "/usr/include/iso/stdio_iso.h", line 221
        cc: acomp failed for gets_test.c

    This warning is silenced in the 12.4 beta by cc -erroff=E_DEPRECATED_ATT. No warning is currently issued by Studio 12.3 & earlier releases.

    gcc 3.4.3

        % /usr/sfw/bin/gcc gets_test.c
        gets_test.c: In function `main':
        gets_test.c:6: warning: `gets' is deprecated (declared at /usr/include/iso/stdio_iso.h:221)

    Warning is completely silenced with gcc -Wno-deprecated-declarations.

    gcc 4.7.3

        % /usr/gcc/4.7/bin/gcc gets_test.c
        gets_test.c: In function ‘main’:
        gets_test.c:6:5: warning: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Wdeprecated-declarations]
        % /usr/gcc/4.7/bin/gcc -Werror=deprecated-declarations gets_test.c
        gets_test.c: In function ‘main’:
        gets_test.c:6:5: error: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Werror=deprecated-declarations]
        cc1: some warnings being treated as errors

    Warning is completely silenced with gcc -Wno-deprecated-declarations.

    gcc 4.8.2

        % /usr/bin/gcc gets_test.c
        gets_test.c: In function ‘main’:
        gets_test.c:6:5: warning: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Wdeprecated-declarations]
             while (gets(buf) != NULL)
             ^
        % /usr/bin/gcc -Werror=deprecated-declarations gets_test.c
        gets_test.c: In function ‘main’:
        gets_test.c:6:5: error: ‘gets’ is deprecated (declared at /usr/include/iso/stdio_iso.h:221) [-Werror=deprecated-declarations]
             while (gets(buf) != NULL)
             ^
        cc1: some warnings being treated as errors

    Warning is completely silenced with gcc -Wno-deprecated-declarations.
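    The same attribute can be applied to your own libraries to steer callers away from legacy entry points. A minimal sketch (the functions here are made up for illustration, not Solaris interfaces; shown as C++ but the attribute works identically in C) that triggers the same -Wdeprecated-declarations warning under the gcc versions shown above:

        #include <cstddef>
        #include <cstdio>

        // Old, unbounded interface we want callers to move away from.
        // __attribute__((__deprecated__)) is the same GCC-style attribute
        // the Solaris 11.2 headers use.
        __attribute__((__deprecated__))
        static int legacy_copy(char *dst, const char *src)
        {
            return std::sprintf(dst, "%s", src);
        }

        // Replacement that takes the destination size.
        static int safe_copy(char *dst, std::size_t dstlen, const char *src)
        {
            return std::snprintf(dst, dstlen, "%s", src);
        }

        int main()
        {
            char buf[32];
            legacy_copy(buf, "hello");            // warning: 'legacy_copy' is deprecated
            safe_copy(buf, sizeof buf, "hello");  // no warning
            return 0;
        }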

    Read the article

  • Extra Life 2012 - The Final Plea ... Until the Next One

    - by Chris Gardner
    I thought I'd share the email stream that my friends and family get about the event.So, here we are again. We scream closer to the event, and the goal is not met.I was approached by the ghost of feral platypii past last night. Well, approached is putting it lightly. I was mugged by the ghost of platypii past last night. He reminded me, in no uncertain terms that I have only reached the midway point of my fundraising goal. He then reminded me, in even less uncertain terms, that we are one week away from the event. There were other reminders past that, but this is a family broadcast. *shudder*Now, let us be serious for a moment. The event organizers claim a personal story helps to tug heart strings, whatever those are...I've been to Children's Hospital of Birmingham. I had to take Spawn, the Latter, there to verify she was not going to die. Instead, she's just a ticking time bomb for the next generation, but I digress.While I was there, I saw things. I saw child after child after child waiting for their appointment. I saw the most sublime displays of children's art juxtaposed with hospital sterilization that I could ever possibly imagine. I saw and heard things that only occur in the nightmares of parents, and I was only in the waiting rooms.But I will never forget the 10-ish year old girl that came in for her regularly scheduled dialysis appointment ... as if it was just another Friday afternoon. She had her school books, a little snack, a book to read for pleasure, and a DVD, in case she finished her homework a little early. You know, everything you'd need for an afternoon hooked up to a huge medical machine that going to clean out all the toxins in your blood. As she entered the secured area, she warmly greeted all the doctors and nurses with the same familiarity that I would greet the staff of my favorite coffee shop as I stopped in for my morning cup of coffee.I don't know the status of that little girl. I don't know if she's healthy or, quite frankly, alive. I don't even know her name, as I only heard it in passing for the 37 seconds our paths crossed. However, I do remember being incredibly moved and touched by her upbeat attitude about the situations, and I hope that my efforts last two Octobers got her, in some way, a little comfort.And, if she is still with us, I hope we can get her a little more.=== PREVIOUS MESSAGE FOLLOWS ===Greetings (Again),If you are receiving this updated message, then you didn't feel generous the first time. Now, I tried to be nice the first time. I tried to send a simple, unobtrusive email message to get you into the spirit. Well, much like the bell ringers that I ignore in front of the Wal-Mart, you ignored me.I probably should have seen that coming...However, unlike those poor souls, I know how to contact you. And I can find out where you live. So, so, so, you better feel lucky that I'm too lazy to terrorize you people, but cause I could do it.Remember, it's not for me, it's for those poor kids... and the feral platypii.  Because, we can make more children, but platypii are hard to come by.=== ORIGINAL MESSAGE FOLLOWS ===It's that time of year again. The time when I beg you for money for charity. See, unlike those bell ringers outside Wal-Mart, I don't do it when you have ten bazillion holiday obligations...Once again, I will be enduring a 24-hour marathon of gaming to raise money for Children Hospital in Birmingham. All the money goes straight to them, and you get to tell Uncie Samuel that you're good for that money. 
I'd REALLY like to break $1000 this year, as I have come REALLY close to doing so for the past 2 years. This year, the event will take place on October 20th, beginning at 8 A.M. Once again, I will try to provide some web streams, etc., if you want to point and laugh (especially if I have to resort to playing Dance Central at 4 AM to stay awake for the last part.) Look at it this way: I'm going to badger you about this for the next month. You might as well donate some money so you can righteously tell me to shut the Smurf up. You can place your bid at the link below. Feel free to spread the word to anyone and everyone. I thank you. The children thank you. Several breeds of feral platypus thank you. Maybe, just maybe, doing so will help you feel the love felt by re-fried beans when lovingly hugged in a warm tortilla. Enjoy your burrito. http://www.extra-life.org/participant/cgardner

    Read the article

  • YouTube Scalability Lessons

    - by Bertrand Matthelié
    Very interesting blog post by Todd Hoff at highscalability.com presenting “7 Years of YouTube Scalability Lessons in 30 min” based on a presentation from Mike Solomon, one of the original engineers at YouTube: …. The key takeaway of the talk for me was doing a lot with really simple tools. While many teams are moving on to more complex ecosystems, YouTube really does keep it simple. They program primarily in Python, use MySQL as their database, they’ve stuck with Apache, and even new features for such a massive site start as a very simple Python program. That doesn’t mean YouTube doesn’t do cool stuff, they do, but what makes everything work together is more a philosophy or a way of doing things than technological hocus pocus. What made YouTube into one of the world’s largest websites? Read on and see... Stats: 4 billion views a day. 60 hours of video is uploaded every minute. 350+ million devices are YouTube enabled. Revenue doubled in 2010. The number of videos has gone up 9 orders of magnitude and the number of developers has only gone up two orders of magnitude. 1 million lines of Python code. Stack: Python - most of the lines of code for YouTube are still in Python. Every time you watch a YouTube video you are executing a bunch of Python code. Apache - when you think you need to get rid of it, you don’t. Apache is a real rockstar technology at YouTube because they keep it simple. Every request goes through Apache. Linux - the benefit of Linux is there’s always a way to get in and see how your system is behaving. No matter how bad your app is behaving, you can take a look at it with Linux tools like strace and tcpdump. MySQL - is used a lot. When you watch a video you are getting data from MySQL. Sometimes it’s used as a relational database or a blob store. It’s about tuning and making choices about how you organize your data. Vitess - a new project released by YouTube, written in Go, it’s a frontend to MySQL. It does a lot of optimization on the fly, it rewrites queries and acts as a proxy. Currently it serves every YouTube database request. It’s RPC based. Zookeeper - a distributed lock server. It’s used for configuration. Really interesting piece of technology. Hard to use correctly, so read the manual. Wiseguy - a CGI servlet container. Spitfire - a templating system.
It has an abstract syntax tree that lets them do transformations to make things go faster. Serialization formats - no matter which one you use, they are all expensive. Measure. Don’t use pickle. Not a good choice. Found protocol buffers slow. They wrote their own BSON implementation, which is 10-15 times faster than the one you can download. ...Continues. Read the blog. Watch the video.
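    That “measure, don’t assume” advice about serialization is easy to act on. The sketch below is a minimal, hypothetical Python illustration of the kind of measurement the post recommends; the payload shape and iteration counts are invented for this example and say nothing about YouTube’s actual workloads or their in-house BSON library.

      import json
      import pickle
      import timeit

      # Invented, illustrative payload; real measurements should use production-shaped data.
      payload = {"video_id": "abc123", "views": 4000000000, "tags": ["music", "cats"] * 50}

      def measure(name, dumps, loads):
          blob = dumps(payload)
          encode = timeit.timeit(lambda: dumps(payload), number=10000)
          decode = timeit.timeit(lambda: loads(blob), number=10000)
          print(f"{name}: encode {encode:.3f}s  decode {decode:.3f}s  size {len(blob)} bytes")

      measure("pickle", pickle.dumps, pickle.loads)
      measure("json", lambda obj: json.dumps(obj).encode("utf-8"), lambda b: json.loads(b))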

    Read the article

  • How to handle updated configuration when it's already been cloned for editing

    - by alexrussell
    Really sorry about the title that probably doesn't make much sense. Hopefully I can explain myself better here as it's something that's kinda bugged me for ages, and is now becoming a pressing concern as I write a bit of software with configuration. Most software comes with default configuration options stored in the app itself, and then there's a configuration file (let's say) that a user can edit. Once created/edited for the first time, subsequent updates to the application cannot (easily) modify this configuration file for fear of clobbering the user's own changes to the default configuration. So my question is, if my application adds a new configurable parameter, what's the best way to aid discoverability of the setting and allow the user (developer) to override it as nicely as possible given the following constraints: I actually don't have a canonical default config in the application per se, it's more of a 'cascading filesystem'-like affair - the config template is stored in default/config.json and when the user wishes to edit the configuration, it's copied to user/config.json. If a user config is found it is used - there is no automatic overriding of a subset of keys, the whole new file is used and that's that. If there's no user config the default config is used. When a user wishes to edit the config they run a command to 'generate' it for them (which simply copies the config.json file from the default to the user directory). There is no UI for the configuration options as it's not appropriate to the userbase (think of my software as a library or something, the users are developers, the config is done in the user/config.json file). Due to my software being library-like there's no simple way to run some tasks automatically on updating of the software (so any ideas along the lines of 'look at the current config, compare to the template config, add missing keys' aren't appropriate). The only solution I can think of right now is to say "there's a new config setting X" in the release notes, but this doesn't seem ideal to me. If you want any more information let me know. The above specifics are not actually 100% true to my situation, but they represent the problem equally well with lower complexity. If you do want specifics, however, I can explain the exact setup. Further clarification of the type of configuration I mean: think of the Atom code editor. There appears to be a default 'template' config file somewhere, but as soon as a configuration option is edited ~/.atom/config.cson is generated and the setting goes in there. From now on, if Atom is updated and gets a new configuration key, this file cannot be overwritten by Atom without a lot of effort to ensure that the addition/modification of the key does not clobber the user's settings. In Atom's case, because there is a GUI for editing settings, they can get away with just adding the new setting to the UI to aid 'discoverability' of it. I don't have that luxury. Clarification of my constraints and what I'm actually looking for: The software I'm writing is actually a package for a larger system. This larger system is what provides the configuration, and the way it works is kinda fixed - I just do a config('some.key') kinda call and it knows to look to see if the user has a config clone and if so use it, otherwise use the default config which is part of my package.
Now, while I could make my application edit the user's configuration files (there is a convention about where they're stored), it's generally not done, so I'd like to live with the constraints of the system I'm using if possible. And it's not just about discoverability either, one large concern is that the addition of a configuration key won't actually work as soon as the user has their own copy of the original template. Adding the key to the template won't make a difference as that file is never read. As such, I think this is actually quite a big flaw in the design of the configuration cascading system and thus needs to be taken up with my upstream. So, thinking about it, based on my constraints, I don't think there's going to be a good solution save for either editing the user's configuration or using a new config file every time there are updates to the default configuration. Even the release notes idea from above isn't doable as, if the user does not follow the advice, suddenly I have a config key with no value (user-defined or default). So the new question is this: what is the general way to solve the problem of having a default configuration in template config files and allowing a user to make user-specific version of these in order to override the defaults? A per-key cascade (rather than per-file cascade) where the user only specifies their overrides? In this case, what happens if a configuration value is an array - do we replace or append to the default (or, more realistically, how does the user specify whether they wish to replace or append to)? It seems like configuration is kinda hard, so how is it solved in the wild?
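    For what it's worth, the per-key cascade the question ends on is usually implemented as a recursive merge of the user file over the template. The sketch below is a minimal, hypothetical Python illustration (the default/config.json and user/config.json paths come from the question itself); the merge policy shown - nested objects merge, scalars and arrays in the user file replace the default wholesale - is one possible answer to the replace-or-append question, not the only one.

      import json
      from pathlib import Path

      def merge(default, override):
          # Per-key cascade: nested dicts merge recursively; scalars and arrays
          # from the user's file replace the default value wholesale.
          result = dict(default)
          for key, value in override.items():
              if isinstance(value, dict) and isinstance(result.get(key), dict):
                  result[key] = merge(result[key], value)
              else:
                  result[key] = value
          return result

      def load_config(default_path="default/config.json", user_path="user/config.json"):
          default = json.loads(Path(default_path).read_text())
          user_file = Path(user_path)
          user = json.loads(user_file.read_text()) if user_file.exists() else {}
          return merge(default, user)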

    Read the article

  • A Basic Thread

    - by Joe Mayo
    Most of the programs written are single-threaded, meaning that they run on the main execution thread. For various reasons such as performance, scalability, and/or responsiveness additional threads can be useful. .NET has extensive threading support, from the basic threads introduced in v1.0 to the Task Parallel Library (TPL) introduced in v4.0. To get started with threads, it's helpful to begin with the basics; starting a Thread. Why Do I Care? The scenario I'll use for needing to use a thread is writing to a file.  Sometimes, writing to a file takes a while and you don't want your user interface to lock up until the file write is done. In other words, you want the application to be responsive to the user. How Would I Go About It? The solution is to launch a new thread that performs the file write, allowing the main thread to return to the user right away.  Whenever the file writing thread completes, it will let the user know.  In the meantime, the user is free to interact with the program for other tasks. The following examples demonstrate how to do this. Show Me the Code? The code we'll use to work with threads is in the System.Threading namespace, so you'll need the following using directive at the top of the file: using System.Threading; When you run code on a thread, the code is specified via a method.  Here's the code that will execute on the thread: private static void WriteFile() { Thread.Sleep(1000); Console.WriteLine("File Written."); } The call to Thread.Sleep(1000) delays thread execution. The parameter is specified in milliseconds, and 1000 means that this will cause the program to sleep for approximately 1 second.  This method happens to be static, but that's just part of this example, which you'll see is launched from the static Main method.  A thread could be instance or static.  Notice that the method does not have parameters and does not have a return type. As you know, the way to refer to a method is via a delegate.  There is a delegate named ThreadStart in System.Threading that refers to a method without parameters or return type, shown below: ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile); I'll show you the whole program below, but the ThreadStart instance above goes in the Main method. The thread uses the ThreadStart instance, fileWriterHandlerDelegate, to specify the method to execute on the thread: Thread fileWriter = new Thread(fileWriterHandlerDelegate); As shown above, the argument type for the Thread constructor is the ThreadStart delegate type. The fileWriterHandlerDelegate argument is an instance of the ThreadStart delegate type. This creates an instance of a thread and what code will execute, but the new thread instance, fileWriter, isn't running yet. You have to explicitly start it, like this: fileWriter.Start(); Now, the code in the WriteFile method is executing on a separate thread. Meanwhile, the main thread that started the fileWriter thread continues on it's own.  You have two threads running at the same time. Okay, I'm Starting to Get Glassy Eyed. How Does it All Fit Together? The example below is the whole program, pulling all the previous bits together. It's followed by its output and an explanation. 
using System; using System.Threading; namespace BasicThread { class Program { static void Main() { ThreadStart fileWriterHandlerDelegate = new ThreadStart(WriteFile); Thread fileWriter = new Thread(fileWriterHandlerDelegate); Console.WriteLine("Starting FileWriter"); fileWriter.Start(); Console.WriteLine("Called FileWriter"); Console.ReadKey(); } private static void WriteFile() { Thread.Sleep(1000); Console.WriteLine("File Written"); } } } And here's the output: Starting FileWriter Called FileWriter File Written So, Why are the Printouts Backwards? The output above corresponds to Console.Writeline statements in the program, with the second and third seemingly reversed. In a single-threaded program, "File Written" would print before "Called FileWriter". However, this is a multi-threaded (2 or more threads) program.  In multi-threading, you can't make any assumptions about when a given thread will run.  In this case, I added the Sleep statement to the WriteFile method to greatly increase the chances that the message from the main thread will print first. Without the Thread.Sleep, you could run this on a system with multiple cores and/or multiple processors and potentially get different results each time. Interesting Tangent but What Should I Get Out of All This? Going back to the main point, launching the WriteFile method on a separate thread made the program more responsive.  The file writing logic ran for a while, but the main thread returned to the user, as demonstrated by the print out of "Called FileWriter".  When the file write finished, it let the user know via another print statement. This was a very efficient use of CPU resources that made for a more pleasant user experience. Joe

    Read the article

  • Seizing the Moment with Mobility

    - by Divya Malik
    Empowering people to work where they want to work is becoming more critical now with the consumerisation of technology. Employees are bringing their own devices to the workplace and expecting to be productive wherever they are. Sales people welcome the ability to run their critical business applications where they can be most effective which is typically on the road and when they are still with the customer. Oracle has invested many years of research in understanding customer's Mobile requirements. “The keys to building the best user experience were building in a lot of flexibility in ways to support sales, and being useful,” said Arin Bhowmick, Director, CRM, for the Applications UX team. “We did that by talking to and analyzing the needs of a lot of people in different roles.” The team studied real-life sales teams. “We wanted to study salespeople in context with their work,” Bhowmick said. “We studied all user types in the CRM world because we wanted to build a user interface and user experience that would cater to sales representatives, marketing managers, sales managers, and more. Not only did we do studies in our labs, but also we did studies in the field and in mobile environments because salespeople are always on the go.” Here is a recent post from Hernan Capdevila, Vice President, Oracle Fusion Apps which was featured on the Oracle Applications Blog.  Mobile devices are forcing a paradigm shift in the workplace – they’re changing the way businesses can do business and the type of cultures they can nurture. As our customers talk about their mobile needs, we hear them saying they want instant-on access to enterprise data so workers can be more effective at their jobs anywhere, anytime. They also are interested in being more cost effective from an IT point of view. The mobile revolution – with the idea of BYOD (bring your own device) – has added an interesting dynamic because previously IT was driving the employee device strategy and ecosystem. That's been turned on its head with the consumerization of IT. Now employees are figuring out how to use their personal devices for work purposes and IT has to figure out how to adapt. Blurring the Lines between Work and Personal Life My vision of where businesses will be five years from now is that our work lives and personal lives will be more interwoven together. In turn, enterprises will have to determine how to make employees’ work lives fit more into the fabric of their personal lives. And personal devices like smartphones are going to drive significant business value because they let us accomplish things very incrementally. I can be sitting on a train or in a taxi and be productive. At the end of any meeting, I can capture ideas and tasks or follow up with people in real time. Mobile devices enable this notion of seizing the moment – capitalizing on opportunities that might otherwise have slipped away because we're not connected. For the industry shapers out there, this is game changing. The lean and agile workforce is definitely the future. This notion of the board sitting down with the executive team to lay out strategic objectives for a three- to five-year plan, bringing in HR to determine how they're going to staff the strategic activities, kicking off the execution, and then revisiting the plan in three to five years to create another three- to five-year plan is yesterday's model. Businesses that continue to approach innovating in that way are in the dinosaur age. 
Today it's about incremental planning and incremental execution, which requires a lot of cohesion and synthesis within the workforce. There needs to be this interweaving notion within the workforce about how ideas cascade down, how people engage, how they stay connected, and how insights are shared. How to Survive and Thrive in Today’s Marketplace The notion of Facebook isn’t new. We lived it pre-Internet days with America Online and Prodigy – Facebook is just the renaissance of these services in a more viral and pervasive way. And given the trajectory of the consumerization of IT with people bringing their personal tooling to work, the enterprise has no option but to adapt. The sooner that businesses realize this from a top-down point of view the sooner that they will be able to really drive significant innovation and adapt to the marketplace. There are a small number of companies right now (I think it's closer to 20% rather than 80%, but the number is expanding) that are able to really innovate in this incremental marketplace. So from a competitive point of view, there's no choice but to be social and stay connected. By far the majority of users on Facebook and LinkedIn are mobile users – people on iPhones, smartphones, Android phones, and tablets. It's not the couch people, right? It's the on-the-go people – those people at the coffee shops. Usually when you're sitting at your desk on a big desktop computer, typically you have better things to do than to be on Facebook. This is a topic I'm extremely passionate about because I think mobile devices are game changing. Mobility delivers significant value to businesses – it also brings dramatic simplification from a functional point of view and transforms our work life experience. Hernan Capdevila Vice President, Oracle Applications Development

    Read the article

  • Namespaces are obsolete

    - by Bertrand Le Roy
    To those of us who have been around for a while, namespaces have been part of the landscape. One could even say that they have been defining the large-scale features of the landscape in question. However, something happened fairly recently that I think makes this venerable structure obsolete. Before I explain this development and why it’s a superior concept to namespaces, let me recapitulate what namespaces are and why they’ve been so good to us over the years… Namespaces are used for a few different things: Scope: a namespace delimits the portion of code where a name (for a class, sub-namespace, etc.) has the specified meaning. Namespaces are usually the highest-level scoping structures in a software package. Collision prevention: name collisions are a universal problem. Some systems, such as jQuery, wave it away, but the problem remains. Namespaces provide a reasonable approach to global uniqueness (and in some implementations such as XML, enforce it). In .NET, there are ways to relocate a namespace to avoid those rare collision cases. Hierarchy: programmers like neat little boxes, and especially boxes within boxes within boxes. For some reason. Regular human beings, on the other hand, tend to think linearly, which is why the Windows explorer for example has tried in a few different ways to flatten the file system hierarchy for the user. 1 is clearly useful because we need to protect our code from bleeding effects from the rest of the application (and vice versa). A language with only global constructs may be what some of us started programming on, but it’s not desirable in any way today. 2 may not always be worth the trouble (jQuery is doing fine with its global plug-in namespace), but we still need it in many cases. One should note, however, that globally unique names are not the only possible implementation. In fact, they are a rather extreme solution. What we really care about is collision prevention within our application. What happens outside is irrelevant. 3 is, more than anything, an aesthetic choice. A common convention has been to encode the whole pedigree of the code into the namespace. Come to think of it, we never think we need to import “Microsoft.SqlServer.Management.Smo.Agent” and that would be very hard to remember. What we want to do is bring nHibernate into our app. And this is precisely what you’ll do with modern package managers and module loaders. I want to take the specific example of RequireJS, which is commonly used with Node. Here is how you import a module with RequireJS: var http = require("http"); This is of course importing an HTTP stack module into the code. There is no noise here. Let’s break this down. Scope (1) is provided by the one scoping mechanism in JavaScript: the closure surrounding the module’s code. Whatever scoping mechanism is provided by the language would be fine here.
Collision prevention (2) is very elegantly handled. Whereas relocating is an afterthought, and an exceptional measure with namespaces, it is here on the frontline. You always relocate, using an extremely familiar pattern: variable assignment. We are very much used to managing our local variable names and any possible collision will get solved very easily by picking a different name. Wait a minute, I hear some of you say. This is only taking care of collisions on the client-side, on the left of that assignment. What if I have two libraries with the name “http”? Well, You can better qualify the path to the module, which is what the require parameter really is. As for hierarchical organization, you don’t really want that, do you? RequireJS’ module pattern does elegantly cover the bases that namespaces used to cover, but it also promotes additional good practices. First, it promotes usage of self-contained, single responsibility units of code through the closure-based, stricter scoping mechanism. Namespaces are somewhat more porous, as using/import statements can be used bi-directionally, which leads us to my second point… Sane dependency graphs are easier to achieve and sustain with such a structure. With namespaces, it is easy to construct dependency cycles (that’s bad, mmkay?). With this pattern, the equivalent would be to build mega-components, which are an easier problem to spot than a decay into inter-dependent namespaces, for which you need specialized tools. I really like this pattern very much, and I would like to see more environments implement it. One could argue that dependency injection has some commonalities with this for example. What do you think? This is the half-baked result of some morning shower reflections, and I’d love to read your thoughts about it. What am I missing?

    Read the article

  • boot issues - long delay, then "gave up waiting for root device"

    - by chazomaticus
    I've had this issue on and off for about two years now. I noticed it on a new (custom built) machine running 10.04 when that first came out, but then it went away until a few months ago. I've gone through a number of hard drive changes but I can't say specifically what if anything I changed hardware-wise to make it stop or start happening. I had assumed upgrading to a modern Ubuntu version would fix the issue, so I installed 12.04 beta on a spare partition last night, but it's still happening. Here's the issue. After grub loads and I select a kernel to boot, the screen goes blank save for a blinking cursor. It sits in this state for many long minutes before it finally gives up and gives me an initramfs shell with the message gave up waiting for root device (and lists the /dev/disk/by-uuid/... path it was waiting for) but no other specific diagnostic information. Now, here's the tricky part. For one, the problem is intermittent - sometimes it progresses from the blinking cursor to the Ubuntu splash boot screen in a few seconds, and once it gets that far it always continues booting fine. The really bizarre thing is that I can "force" it to "find" the root device by repeatedly pressing the space bar and hitting the machine's power button. If I tap those enough, eventually I will notice the hard drive light coming on, at which point it will always continue the boot process after a few seconds. Interestingly, if I wait slightly too long before pressing the power button (30s?), as soon as I press it I get the gave up waiting message and the initramfs shell. I've tried setting up /etc/fstab (and the grub menu.lst or whatever it's called nowadays) to use device names (e.g. /dev/sda1) instead of UUIDs, but I get the same effect just with the device name, not UUID, in the error message. I should also mention that when I boot to Windows 7, there is no issue. It boots slowly all the time just by virtue of being Windows, but it never hangs indefinitely. This would seem to indicate it's a problem in Ubuntu, not the hardware. It's pretty annoying to have to babysit the computer every time it boots. Any ideas? I'm at a loss. Not even sure how to diagnose the issue. Thanks! EDIT: Here's some dmesg output from 10.04. The 15 second gap is where it was doing nothing. I pressed the power button and space bar a few times, and the stuff at 16 seconds happened. Not sure what any of it means. 
[ 1.320250] scsi18 : ahci [ 1.320294] scsi19 : ahci [ 1.320320] ata19: SATA max UDMA/133 abar m8192@0xfd4fe000 port 0xfd4fe100 ir q 18 [ 1.320323] ata20: SATA max UDMA/133 abar m8192@0xfd4fe000 port 0xfd4fe180 ir q 18 [ 1.403886] usb 2-4: new high speed USB device using ehci_hcd and address 4 [ 1.562558] usb 2-4: configuration #1 chosen from 1 choice [ 16.477824] ata16: SATA link down (SStatus 0 SControl 300) [ 16.477843] ata19: SATA link down (SStatus 0 SControl 300) [ 16.477857] ata3: SATA link down (SStatus 0 SControl 300) [ 16.477895] ata15: SATA link down (SStatus 0 SControl 300) [ 16.477906] ata20: SATA link down (SStatus 0 SControl 300) [ 16.477977] ata17: SATA link down (SStatus 0 SControl 300) [ 16.478003] ata12: SATA link down (SStatus 0 SControl 300) [ 16.478046] ata13: SATA link down (SStatus 0 SControl 300) [ 16.478063] ata14: SATA link down (SStatus 0 SControl 300) [ 16.478108] ata11: SATA link down (SStatus 0 SControl 300) [ 16.478123] ata18: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 16.478127] ata6: SATA link down (SStatus 0 SControl 300) [ 16.478157] ata5: SATA link down (SStatus 0 SControl 300) [ 16.478193] ata18.00: ATAPI: MARVELL VIRTUALL, 1.09, max UDMA/66 After that, it took its sweet time, and I had to keep hitting space bar to coax it along. Here's some more dmesg output from a little later in the boot process: [ 17.982291] input: BTC USB Multimedia Keyboard as /devices/pci0000:00/0000:00 :13.0/usb5/5-2/5-2:1.0/input/input4 [ 17.982335] generic-usb 0003:046E:5506.0002: input,hidraw1: USB HID v1.10 Key board [BTC USB Multimedia Keyboard] on usb-0000:00:13.0-2/input0 [ 18.005211] input: BTC USB Multimedia Keyboard as /devices/pci0000:00/0000:00 :13.0/usb5/5-2/5-2:1.1/input/input5 [ 18.005274] generic-usb 0003:046E:5506.0003: input,hiddev96,hidraw2: USB HID v1.10 Device [BTC USB Multimedia Keyboard] on usb-0000:00:13.0-2/input1 [ 22.484906] EXT4-fs (sda6): INFO: recovery required on readonly filesystem [ 22.484910] EXT4-fs (sda6): write access will be enabled during recovery [ 22.548542] EXT4-fs (sda6): recovery complete [ 22.549074] EXT4-fs (sda6): mounted filesystem with ordered data mode [ 32.516772] Adding 20482832k swap on /dev/sda5. Priority:-1 extents:1 across:20482832k [ 32.742540] udev: starting version 151 [ 33.002004] Bluetooth: Atheros AR30xx firmware driver ver 1.0 [ 33.008135] parport_pc 00:09: reported by Plug and Play ACPI [ 33.008186] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE] [ 33.012076] lp: driver loaded but no devices found [ 33.037271] ppdev: user-space parallel port driver [ 33.090256] lp0: using parport0 (interrupt-driven). Any clues in there?

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To re-iterate: Does it seem sane to break 1NF in this way? I am not looking for debugging code so much as where people see problems that this might lead to. The Problem: In double-entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex. The Write Model: The journal entries are append-only. There is a possibility we will add a delete model but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated but updates and deletes do not for this task. My Proposed Solution: My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.
CREATE TABLE journal_line ( entry_id bigserial primary key, account_id int not null references account(id), journal_entry_id bigint not null, -- adding references later amount numeric not null ); I would then add "table methods" to extract debits and credits for reporting purposes: CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$ SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END; $$; CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric LANGUAGE sql IMMUTABLE AS $$ SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END; $$; Then the journal entry table (simplified for this example): CREATE TABLE journal_entry ( entry_id bigserial primary key, -- no natural keys :-( journal_id int not null references journal(id), date_posted date not null, reference text not null, description text not null, journal_lines journal_line[] not null ); Then a table method and check constraints: CREATE OR REPLACE FUNCTION running_total(journal_entry) returns numeric language sql immutable as $$ SELECT sum(amount) FROM unnest($1.journal_lines); $$; ALTER TABLE journal_entry ADD CONSTRAINT journal_entry_balanced CHECK ((journal_entry.running_total) = 0); ALTER TABLE journal_line ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id); And finally we'd have a breakout trigger: CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER LANGUAGE PLPGSQL AS $$ BEGIN IF TG_OP = 'INSERT' THEN INSERT INTO journal_line (journal_entry_id, account_id, amount) SELECT NEW.entry_id, account_id, amount FROM unnest(NEW.journal_lines); RETURN NEW; ELSE RAISE EXCEPTION 'Operation Not Allowed'; END IF; END; $$; And finally: CREATE TRIGGER je_breakout_trigger AFTER INSERT OR UPDATE OR DELETE ON journal_entry FOR EACH ROW EXECUTE PROCEDURE je_breakout(); Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane? Standard Solutions? In getting to this point I have to say I have looked at four different current ERP solutions to this problem: Represent every line item as a debit and a credit against different accounts. Use of foreign keys against the line item table to enforce an eventual running total of 0. Use of constraint triggers in PostgreSQL. Forcing all validation here solely through the app logic. My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer transparent and so it strikes me as being difficult to work with in the future. The second strikes me as being very complex and requires a series of constraints and foreign keys against self to make work, and therefore it strikes me as complex, hard to sort out at least in my mind, and thus hard to work with. The fourth could be done as we force all access through stored procedures anyway and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem it isn't the case that everyone agrees on the solution....

    Read the article

  • Keep a programming language backwards compatible vs. fixing its flaws

    - by Radu Murzea
    First, some context (stuff that most of you know anyway): Every popular programming language has a clear evolution, most of the time marked by its version: you have Java 5, 6, 7 etc., PHP 5.1, 5.2, 5.3 etc. Releasing a new version makes new APIs available, fixes bugs, adds new features, new frameworks etc. So all in all: it's good. But what about the language's (or platform's) problems? If and when there's something wrong in a language, developers either avoid it (if they can) or they learn to live with it. Now, the developers of those languages get a lot of feedback from the programmers that use them. So it kind of makes sense that, as time (and version numbers) goes by, the problems in those languages will slowly but surely go away. Well, not really. Why? Backwards compatibility, that's why. But why is this so? Read below for a more concrete situation. The best way I can explain my question is to use PHP as an example: PHP is loved by thousands of people and hated by just as many thousands. All languages have flaws, but apparently PHP is special. Check out this blog post. It has a very long list of so-called flaws in PHP. Now, I'm not a PHP developer (not yet), but I read through all of it and I'm sure that a big chunk of that list are indeed real issues. (Not all of it, since it's potentially subjective.) Now, if I was one of the guys who actively develops PHP, I would surely want to fix those problems, one by one. However, if I do that, then code that relies on a particular behaviour of the language will break if it runs on the new version. Summing it up in 2 words: backwards compatibility. What I don't understand is: why should I keep PHP backwards compatible? If I release PHP version 8 with all those problems fixed, can't I just put a big warning on it saying: "Don't run old code on this version!"? There is a thing called deprecation. We've had it for years and it works. In the context of PHP: look at how these days people actively discourage the use of the mysql_* functions (and instead recommend mysqli_* and PDO). Deprecation works. We can use it. We should use it. If it works for functions, why shouldn't it work for entire languages? Let's say I (the developer of PHP) do this: Launch a new version of PHP (let's say 8) with all of those flaws fixed. New projects will start using that version, since it's much better, clearer, more secure etc. However, in order not to abandon older versions of PHP, I keep releasing updates to them, fixing security issues, bugs etc. This makes sense for reasons that I'm not listing here. It's common practice: look for example at how Oracle kept updating version 5.1.x of MySQL, even though it mostly focused on version 5.5.x. After about 3 or 4 years, I stop updating old versions of PHP and leave them to die. This is fine, since in those 3 or 4 years, most projects will have switched to PHP 8 anyway. My question is: Do all these steps make sense? Would it be so hard to do? If it can be done, then why isn't it done? Yes, the downside is that you break backwards compatibility. But isn't that a price worth paying? As an upside, in 3 or 4 years you'll have a language that has 90% of its problems fixed.... a language much more pleasant to work with. Its name will ensure its popularity. EDIT: OK, so I didn't express myself correctly when I said that in 3 or 4 years people will move to the hypothetical PHP 8. What I meant was: in 3 or 4 years, people will use PHP 8 if they start a new project.
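    Since the whole argument hinges on deprecation working at the function level, here is a small, hypothetical sketch of that mechanism, written in Python purely for illustration (the deprecated decorator and the connect/connect_v2 names are invented). The point is simply that a runtime warning can steer callers toward the replacement while the old entry point keeps working for a release or two.

      import functools
      import warnings

      def deprecated(replacement):
          """Mark a function as deprecated and point callers at its replacement."""
          def decorator(func):
              @functools.wraps(func)
              def wrapper(*args, **kwargs):
                  warnings.warn(
                      f"{func.__name__}() is deprecated; use {replacement} instead",
                      DeprecationWarning,
                      stacklevel=2,
                  )
                  return func(*args, **kwargs)
              return wrapper
          return decorator

      @deprecated("connect_v2()")
      def connect(host):
          # Old behaviour kept intact so existing code still runs.
          return f"connected to {host}"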

    Read the article

  • BYOD-The Tablet Difference

    - by Samantha.Y. Ma
    By Allison Kutz, Lindsay Richardson, and Jennifer Rossbach, Sales Consultants. Less than three years ago, Apple introduced a new concept to the world: The Tablet. It’s hard to believe that in only 32 months, the iPad induced an entirely new way to do business. Because of their mobility and ease of use, tablets have grown in popularity to keep up with the increasing “on the go” lifestyle, and their popularity isn’t expected to decrease any time soon. In fact, global tablet sales are expected to increase drastically within the next five years, from 56 million tablets to 375 million by 2016. Tablets have been utilized for every function imaginable in today’s world. With over 730,000 active applications available for the iPad, these tablets are educational devices, portable book collections, gateways into social media, entertainment for children when Mom and Dad need a minute on their own, and so much more. It’s no wonder that 74% of those who own a tablet use it daily, 60% use it several times a day, and an average of 13.9 hours per week are spent tapping away. Tablets have become a critical part of a user’s personal life; but why stop there? Businesses today are taking major strides in implementing these devices, with the hopes of benefiting from efficiency and productivity gains. Limo and taxi drivers use tablets as payment devices instead of traditional cash transactions. Retail outlets use tablets to find the exact merchandise customers are looking for. Professors use tablets to teach their classes, and business professionals demonstrate solutions and review reports from tablets. Since an overwhelming majority of tablet users have started to use their personal iPads, PlayBooks, Galaxys, etc. in the workforce, organizations have had to make a change. In many cases, companies are willing to make that change. In fact, 79% of companies are making new investments in mobility this year. Gartner reported that 90% of organizations are expected to support corporate applications on personal devices by 2014. It’s not just companies that are changing. Business professionals have become accustomed to tablets making their personal lives easier, and want that same effect in the workplace. Professionals no longer want to waste time manually entering data in their computer, or worse yet in a notebook, especially when the data has to be later transcribed to an online system. The response: the Bring Your Own Device phenomenon. According to Gartner, BYOD is “an alternative strategy allowing employees, business partners and other users to utilize a personally selected and purchased client device to execute enterprise applications and access data.” Employees whose companies embrace this trend are more efficient because they get to use devices they are already accustomed to. Tablets change the game when it comes to how sales professionals perform their jobs. Sales reps can easily store and access customer information and analytics using tablet applications, such as Oracle Fusion Tap.
This method is much more enticing for sales reps than spending time logging interactions on their (what seem to be outdated) computers. Forrester & IDC reported that on average sales reps spend 65% of their time on activities other than selling, so having a tablet application to use on the go is extremely powerful. In February, Information Week released a list of “9 Powerful Business Uses for Tablet Computers,” ranging from “enhancing the customer experience” to “improving data accuracy” to “eco-friendly motivations”. Tablets complement the lifestyle of professionals who strive to be effective and efficient, both in the office and on the road. Three things businesses need to do to embrace BYOD: make customer-facing websites tablet-friendly for consistent user experiences; develop tablet applications to continue to enhance the customer experience; and embrace and use the technology that comes with tablets. Almost 55 million people in the U.S. own tablets because they are convenient, easy, and powerful. These are qualities that companies strive to achieve with any piece of technology. The inherent power of the devices coupled with the growing number of business applications ensures that tablets will transform the way that companies and employees perform.

    Read the article

  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch soon. More: Part I: Evolution, and death to WCF Part II: Hot data objects Part III: The architecture using the "Web stack of love" If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection, and loosely couple the pieces parts of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing. Dependency injection is pretty straight forward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but their interfaces." Probably the first place a developer exercises this in when having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this: public class FooService {    public FooService(IFooRepository fooRepo)    {       _fooRepo = fooRepo;    }    private readonly IFooRepository _fooRepo;    public Foo GetMeFoo()    {       return _fooRepo.FooFromDatabase();    } } When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you." Why is this good for you? It's good because your FooService doesn't know or care about how you get some foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC. What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when I have someone sign up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation, and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations. The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation: Unbind<PopForums.Email.INewAccountMailer>(); Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>(); Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead. This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, mail delivery failures). There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members. 
It uses the forum as the core piece to managing users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.

    Read the article

  • Using Resources the Right Way

    - by BuckWoody
    It’s an interesting time in computing technology. At one point there was a dearth of information available for solving a given problem, or educating ourselves on broader topics so that we can solve problems in the future. With dozens, perhaps hundreds or thousands of web sites and content available (for free, in many cases) from vendors, peers, even colleges and universities, it seems like there is actually too much information. Who has the time to absorb all this information and training? Even if you had the inclination, where to start? In fact, it seems so overwhelming that I often hear people saying that they can’t find the training they need, or that vendor X or Y “doesn’t help their users”. On questioning these folks, however, I often find that they – and sometimes I - haven’t put in the effort to learn what resources we have. That’s where blogs, like this one, can help. If you follow a blog, either by checking it often or perhaps subscribing to the Really Simple Syndication (RSS) feed, you’ll be able to spread out the search or create a mental filter for the information you need. But it’s not enough just to read a blog or a web page. The creators need real feedback – what doesn’t work, and what does. Yes, you’re allowed to tell a vendor or writer “This helped me because…” so that you reinforce the positives. To be sure, bring up what doesn’t work as well – that’s fine. But be specific, and be constructive. You’d be surprised at how much it matters. I know for a fact at Microsoft we listen – there is a real live person that reads your comments. I’m sure this is true of other vendors, and I also know that most blog authors – yours truly most especially – want to know what you think. In this blog entry I’d like to call your attention to three resources you have at your disposal, and how you can use them to help. I’ll try to bring up things like this from time to time that I find useful, and cover them in more depth like this. Think of this as a synopsis of a longer set of resources that you can use to filter whether you want to research further, bookmark, or forward on to a circle of friends where you think it might help them. Data Driven Design Concepts http://msdn.microsoft.com/en-us/library/windowsazure/jj156154 I’ll start with a great site that walks you through the process of designing a solution from a data-first perspective. As you know, I believe all computing is merely re-arranging data. If you follow that logic as well, you’ll realize that whenever you create a solution, you should start at the data-end of the application. This resource helps you do that. Even if you don’t use the specific technologies the instructions use, the concepts hold for almost any other technology that deals with data. This should be a definite bookmark for a developer, DBA, or Data Architect. When I mentioned my admiration for this resource here at Microsoft, the team that created it contacted me and asked if I’d share an e-mail address with my readers so that you can comment on it. You’re guaranteed to be heard – you can suggest changes, talk about how useful – or not – it is, and so on. Here’s that address:  [email protected]   End-to-End Example of a complete Hybrid Application – with Live Demo https://azurestocktrader.cloudapp.net/Default.aspx I learn by example. I also like having ready-made, live, functional demos that show the completed solution at work.
If you’ve ever wanted to learn how a complex, complete, hybrid application that bridges on-premises systems with cloud-based databases, code, functions and more is put together, this is it. It’s a stock-trading simulator, and you can get everything from the design to the code itself, or you can just play with the application. It’s running on Windows Azure, the actual production servers we use for everything else. Using a Cloud-Based Service https://azureconfigweb.cloudapp.net/Default.aspx Along with that stock-trading application, you have a full demonstration and usable code sample of a web-based service available. If you’re a developer, this is a style of code you need to understand for everything from iPhone development to a full Service-Oriented Architecture (SOA) environment. So check out these resources. I’ll post more from time to time as I run across them. Hopefully they’ll be as useful to you as they are to me. Oh, and if you have a comment on any of the resources, let them know. And if you have any comments about these or any of my entries, feel free to post away. To quote a famous TV show: “Hello Seattle – I’m listening…”

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggest use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests. Stubs Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems: In contract-first scenarios, the external system interface will have been defined.  But the interface may not have been setup or even developed yet for the BizTalk developers to work with. By the time you open the target location to see the data BizTalk has sent, it may have been swept away. If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed or it may be scheduled to be processed later. Learning how to use the source\target systems and investigations into where things go wrong in these systems will slow down the BizTalk development effort. By the time the data is visible in a UI it may have undergone further transformations. In larger development teams working together, do you all use the same source and target instances. How do you know which data was created by whose tests? How do you know which event log error message are whose?  Another developer may have “cleaned up” your data. It is harder to write BizUnit tests that clean up the data\logs after each test run. What if your B2B partners' source or target system cannot support the sort of testing you want to do. They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams. There may be licencing costs of setting up an instances of the external system. The stubs I like to use are generic stubs that can accept\return any message type.  Usually I need to create one per protocol. They should be driven by BizUnit steps to: validates the data received; and select a response messages (or error response). Once built, they can be re-used for many integration tests and from project to project. I’m not saying that developers should never test against a real instance.  Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works.  Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk. Tests Automated “Stubbed Integration Tests” are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. It will ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine grained unit testing of individual BizTalk components is still encouraged.  But BizUnit provides much the easiest way to test some components types (e.g. Orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits: source: http://biztalkbddsample.codeplex.com – Video 1. 
    Tests

    Automated "stubbed integration tests" are usually built using the BizUnit framework. They facilitate testing of the entire integration process from source stub to target stub, and they ensure that all of the BizTalk components are configured together correctly to meet the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson (source: http://biztalkbddsample.codeplex.com – Video 1) delivers the following benefits:

    - Requirements can be easily defined using Given/When/Then (see the illustrative sketch after this section).
    - Requirements sit close to the code, so they are easier to manage as features and scenarios, and they are defined in domain language.
    - The feature files can be used as part of the documentation; that documentation is accurate to the build of the code and can be published with a release.
    - The scenarios document the behaviour effectively without being excessive, and they are maintained with the code.
    - There is an abstraction between the intention and the implementation of tests, making them easier to understand.
    - The requirements drive the testing, and these same tests can also be used to drive load testing as described here.

    If you don't do this...

    If you don't follow the "stubbed integration tests" approach above, the developer will need to trigger the tests manually. This carries the following risks:

    - Developers are unlikely to check all the scenarios, and all the expected conditions, every time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, it carries the following risks:

    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on those environments.
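    Purely as an illustration of how Given/When/Then scenarios can drive a stubbed integration test, the SpecFlow-style bindings below reuse the hypothetical GenericHttpStub from the earlier sketch. The step text, file paths and helper calls are all invented, and a real test would normally use BizUnit steps to submit the message and wait for the outcome rather than the simplified file copy shown here.

    using System.IO;
    using NUnit.Framework;
    using TechTalk.SpecFlow;

    // The corresponding feature file (readable by the business) might look like:
    //   Scenario: A valid order is acknowledged by CRM
    //     Given the CRM stub will acknowledge any order it receives
    //     When an order message is submitted to the receive location
    //     Then the CRM stub should have received the order
    [Binding]
    public class OrderScenarioSteps
    {
        // Reuses the hypothetical stub from the earlier sketch; the URL is made up.
        private readonly GenericHttpStub _crmStub = new GenericHttpStub("http://localhost:8090/crm/");

        [Given("the CRM stub will acknowledge any order it receives")]
        public void GivenTheCrmStubWillAcknowledgeOrders()
        {
            _crmStub.ExpectedRootElement = "Order";
            _crmStub.ResponseBody = "<OrderAck/>";
            _crmStub.Start();
        }

        [When("an order message is submitted to the receive location")]
        public void WhenAnOrderMessageIsSubmitted()
        {
            // Stand-in for a BizUnit step that drops a sample message onto the
            // BizTalk receive location (file, MSMQ, HTTP, ...). Paths are hypothetical.
            File.Copy(@"TestData\ValidOrder.xml", @"C:\Integration\In\ValidOrder.xml", true);
        }

        [Then("the CRM stub should have received the order")]
        public void ThenTheCrmStubShouldHaveReceivedTheOrder()
        {
            // A real test would poll or wait (as BizUnit steps do) rather than
            // assert immediately, because BizTalk processes the message asynchronously.
            Assert.That(_crmStub.LastRequestBody, Does.Contain("<Order"));
        }
    }

    The scenario text doubles as documentation, and because the stub is generic the same bindings can be reconfigured to return an error response when a failure scenario needs to be exercised.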

    Read the article

  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what could be a better time to start than the week after OpenWorld 2012? 50K+ attendees, demonstrations, speaker sessions and a whole lot of buzz on Oracle Cloud... it was raining clouds at this year's OpenWorld. I am not here to write about Oracle's cloud strategy in general, but about Enterprise Manager's cloud management capabilities. This year's OpenWorld was the first since we announced 12c Cloud Control, and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners; I will post them as they get published.

    I met a number of platform administrators: Oracle DBAs, Middleware Admins, SOA Admins... The cloud has affected them all, at least to the point where it provoked more than just curiosity. Most IT infrastructures are already heavily virtualized (on VMware and on others, including Oracle VM), and some would claim they are already on "cloud" (at least their sysadmins told them so). But none of them were confident of the benefits, because their pain points continued to grow. Isn't cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with about as much certainty as Heisenberg would allow. What happened to the age-old IT disciplines around administration, compliance and configuration management?

    VMs are great for what they are. I personally think they have opened the doors to new approaches in which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision full-fledged applications as VM Assemblies. At this year's OpenWorld, customers talked about how they provisioned RAC and Siebel assemblies, which, as the techies out there know, are not trivial (hearing that Siebel provisioning time is down from weeks to hours was gratifying indeed). However, I do have an issue with a "one-size-fits-all" approach to cloud. In a week's span, I met several personas:

    - Project owners requiring an EC2-like VM instance for their projects
    - Admins needing the same for SPARC/Solaris
    - DBAs requiring dedicated databases for new projects
    - APEX developers needing just a ready-to-consume schema as a service
    - Java developers looking for a runtime platform
    - QA engineers needing a fast clone of their production environment

    If you drill down further, you end up peeling back more layers of detail. For example, the requirements for load testing and functional testing are very different. For load testing, the test environment should ideally be the same as production: you shouldn't run production on Exadata and load test on a VM, because they will just not be good representations of one another. For functional testing, it possibly does not matter. DBAs seem to be the worst affected of the lot: it seems they have been asked to choose between agile provisioning and faster runtime performance. And in some cases it is really a Hobson's choice, because their infrastructure provider made no distinction between the OLTP application and the virtual desktop! Sad indeed.

    When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one could think that we are trying to be the "renaissance man"! Well, I might have digested that had it not been for the various personas I described above.
    Getting the use cases right is very important for an application such as cloud management. We iterate over them again and again and re-validate them in CABs (Customer Advisory Boards). We consider all the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL statement and run away with all the resources?), quota and security. We in Engineering keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base!

    In the coming posts I will drill down more into each of the services. In the meantime, here are some collateral and demos to get started with EM 12c: http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html

    Sudip Datta

    The views expressed here are my own and do not necessarily reflect the views of Oracle.

    Read the article
