Search Results


  • The Red Gate and .NET Reflector Debacle

    - by Rick Strahl
    About a month ago Red Gate – the company that owns the .NET Reflector tool most .NET devs use at one point or another – decided to change their business model for Reflector and take the product from free to a fully paid-for license model. As a bit of history: .NET Reflector was originally created by Lutz Roeder as a free community tool to inspect .NET assemblies. Using Reflector you can examine the types in an assembly, drill into type signatures and quickly disassemble code to see how a particular method works. In case you've been living under a rock and you've never looked at Reflector, here's what it looks like drilled into an assembly from disk with some disassembled source code showing. Note that you get tons of information about each element in the tree, and almost all related types and members are clickable both in the list and source view, so it's extremely easy to navigate and follow the code flow even in this static, assembly-only view. For many years Lutz kept the tool up to date and gradually added more features, improving an already amazing tool and making it better. Then about two and a half years ago Red Gate bought the tool from Lutz. A lot of ruckus and noise ensued in the community back then about what would happen with the tool and… for the most part very little did. Other than the incessant update notices with a prominent Red Gate promo on them, life with Reflector went on. The product didn't die and it didn't go commercial or move to a charge model. When .NET 4.0 came out it still continued to work, mostly because the .NET feature set doesn't drastically change how types behave.
    Then a month back Red Gate started making noise about a new version, Version 7, which would be commercial. No more free version – and a shit storm broke out in the community. Now normally I'm not one to be critical of companies trying to make money from a product, much less for a product that's as incredibly useful as Reflector. There isn't a day in .NET development that goes by for me where I don't fire up Reflector, whether it's for examining the innards of the .NET Framework, checking out third party code, or verifying some of my own code and resources. Even more so recently I've been doing a lot of Interop work with a non-.NET application that needs to access .NET components, and Reflector has been immensely valuable to me (and my clients) in figuring out the exact type signatures required to call .NET components in assemblies. In short, Reflector is an invaluable tool to me.
    Ok, so what's the problem? Why all the fuss? Certainly the $39 Red Gate is trying to charge isn't going to kill any developer. If there's any tool in .NET that's worth $39 it's Reflector, right? Right, but that's not the problem here. The problem is how Red Gate went about moving the product to a commercial model, which borders on the downright bizarre. It's almost as if somebody in management wrote a slogan: "How can we piss off the .NET community in the most painful way we can?" And at that, it seems, Red Gate has utterly succeeded. People are rabid, and for once I think that this outrage isn't exactly misplaced. Take a look at the message thread that Red Gate links to from the download page. Not only is Version 7 going to be a paid commercial tool, but the older versions of Reflector won't be available any longer. Not only that, but older versions that are already in use will also continually try to update themselves to the new paid version – which, when installed, will then expire unless registered properly.
    There have also been reports of Version 6 installs shutting themselves down and failing to work if the update is refused (I haven't seen that myself so I'm not sure if it's true). In other words Red Gate is trying to make damn sure they're getting your money if you attempt to use Reflector. There's a lot of temptation there. Think about the millions of .NET developers out there and all of them possibly upgrading – that's a nice chunk of change that Red Gate's sitting on. Even with all the community backlash these guys are probably making some bank right now just because people need to move on with life. Red Gate also put up a Feedback link on the download page – which not surprisingly is chock full of hate mail condemning the move. Oddly there's not a single response to any of those messages from the Red Gate folks except when it concerns license questions for the full version. It puzzles me what purpose that link serves other than as yet another complete example of failure to understand how to handle customer relations. There's no doubt that all of this has caused some serious outrage in the community. The sad part though is that this could have been handled so much less arrogantly and without pissing off the entire community and causing so much ill-will. People are pissed off and I have no doubt that this negative publicity will show up in the sales numbers for their other products. I certainly hope so. Stupidity ought to be painful!
    Why do companies do boneheaded stuff like this? Red Gate's original decision to buy Reflector was hotly debated, but at the time most of what would happen was speculation. I thought it was a smart move for any company that is in need of spreading its marketing message and corporate image as a vendor in the .NET space. Where else do you get to flash your corporate logo to hordes of .NET developers on a regular basis? Exploiting that marketing with the goodwill of providing a free tool breeds positive feedback that hopefully has a good effect on the company's visibility and the products it sells. Instead Red Gate seems to have taken exactly the opposite tack of corporate bullying to try to make a quick buck – and in the process ruined any community goodwill that might have come from providing a service to the community for free while still getting valuable marketing. What's so puzzling about this boneheaded escapade is that the company doesn't need to resort to underhanded tactics like what they are trying with Reflector 7. The tools the company makes are very good. I personally use SQL Compare, SQL Data Compare and ANTS Profiler on a regular basis and all of these tools are essential in my toolbox. They certainly work much better than the tools that are in the box with Visual Studio. Chances are that if Reflector 7 added useful features I would have been more than happy to shell out my $39 to upgrade when the time was right.
    It's expensive to give away stuff for free. At the same time, this episode shows some of the big problems that come with 'free' tools. A lot of organizations are realizing that giving stuff away for free is actually quite expensive and the payback is often very intangible, if there is any at all. Those that rely on donations or other voluntary compensation find that the amount contributed is so minuscule as to not matter at all.
    Yet at the same time I bet most of those clamoring the loudest on that Red Gate Reflector feedback page about Reflector no longer being free have probably NEVER made a donation to any open source project or free tool, ever. The expectation of free these days is just too great – which is a shame, I think. There's a lot to be said for paid software and having somebody you can hold responsible because you gave them some money. There's an incentive –> payback –> responsibility model that seems to be missing from free software (not all of it, but a lot of it). While there certainly are plenty of bad apples in paid software as well, money tends to be a good motivator for people to continue working on and improving products. Reasons for giving away stuff are many, but often it's a naïve desire to share things when things are simple. At first it might be no problem to volunteer time and effort, but as products mature the fun goes out of it, and as the reality of product maintenance kicks in, developers want to get something back for the time and effort they're putting into non-glamorous work. It's then that products die or languish, and this is painful for all to watch. For Red Gate however, I think there was always a pretty good payback from the Reflector acquisition in terms of marketing: visibility and possible positioning of their products, although they seem to have mostly ignored that option. On the other hand, they started this off pretty badly even two and a half years back when they acquired Reflector from Lutz, with the same arrogant attitude that is evident in the latest episode. You really gotta wonder what folks are thinking in management – the sad part is, judging from advance emails that were circulating, they were fully aware of the shit storm they were inciting with this, and I suspect they are banking on the sheer numbers of .NET developers to still make them a tidy chunk of change from upgrades…
    Alternatives are coming. For me personally the single license isn't a problem, but I actually have a tool that I sell (an interop Web Service proxy generation tool) to customers, and one of the things I recommend using alongside it has been Reflector, to view assembly information and to find which Interop classes to instantiate from the non-.NET environment. It's been nice to use Reflector for this with its small footprint and zero-configuration installation. But now with V7 becoming a paid tool that option is not going to be available anymore. Luckily it looks like the .NET community is jumping in and trying to fill the void. Amidst the Red Gate outrage a new open source tool called ILSpy has sprung up, providing at least some of the core functionality of Reflector. It looks promising going forward and I suspect there will be a lot more support and interest in this project now that Reflector has gone over to the 'dark side'… © Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • TDE Tablespace Encryption 11.2.0.1 Certified with EBS 11i

    - by Steven Chan
    Oracle Advanced Security is an optional licenced Oracle 11g Database add-on.  Oracle Advanced Security Transparent Data Encryption (TDE) offers two different features:  column encryption and tablespace encryption.  TDE Tablespace Encryption 11.2.0.1 is now certified with Oracle E-Business Suite Release 11i. What is Transparent Data Encryption (TDE) ? Oracle Advanced Security Transparent Data Encryption (TDE) allows you to protect data at rest. TDE helps address privacy and PCI requirements by encrypting personally identifiable information (PII) such as Social Security numbers and credit card numbers. TDE is completely transparent to existing applications with no triggers, views or other application changes required. Data is transparently encrypted when written to disk and transparently decrypted after an application user has successfully authenticated and passed all authorization checks. Authorization checks include verifying the user has the necessary select and update privileges on the application table and checking Database Vault, Label Security and Virtual Private Database enforcement policies.

    Read the article

  • Does it make sense to use ORM in Android development?

    - by Heinzi
    Does it make sense to use an ORM in Android development or is the framework optimized for a tighter coupling between the UI and the DB layer? Background: I've just started with Android development, and my first instinct (coming from a .net background) was to look for a small object-relational mapper and other tools that help reduce boilerplate code (e.g. POJOs + OrmLite + Lombok). However, while developing my first toy application I stumbled upon a UI class that explicitly requires a database cursor: AlphabetIndexer. That made me wonder if maybe the Android library is not suited for a strict decoupling of the UI and DB layers, and whether I will miss out on a lot of useful, time-saving features if I try to use POJOs everywhere (instead of direct database access). Clarification: I'm quite aware of the advantages of using an ORM in general; I'm specifically interested in how well the Android class library plays along with it.

    Read the article

  • Free vouchers for implementation exams (SOA, E2.0, etc.)

    - by pfolgado
    Would you like to receive free vouchers for the Implementation exams? It's easy! Register with one of the EMEA Partner Communities. Most of these communities offer their members free vouchers for the exams covering the products addressed by those communities. For example, members of the SOA Partner Community can obtain free vouchers for the SOA and BPM implementation exams. For more information on the Oracle Partner communities, see the topics and contacts below:
    Applications & Systems Management: Javier Puerta
    Business Intelligence & Enterprise Performance Management: Mike Hallett
    Communications: Paul Thompson
    CRM On Demand: Paul Thompson
    Enterprise 2.0 (previously "Content Management"): Hans Blaas
    Exadata: Javier Puerta
    Healthcare: Paul Thompson
    Identity Management & Security: Wolfgang Ehrenthaler
    Manufacturing, Retail, Distribution and Life Science (MRD/LS): Paul Thompson
    Public Sector: Paul Thompson
    SOA / Integration: Jürgen Kress

    Read the article

  • Maintaining packages with code - Adding a property expression programmatically

    Every now and then I've come across scenarios where I need to update a lot of packages all in the same way. The usual scenario revolves around a group of packages all having been built off the same package template, and something needs to be updated to keep up with new requirements, a new logging standard for example. You'd probably start by updating your template package, but then you need to address all your existing packages. Often this can run into the hundreds of packages and clearly that's not a job anyone wants to do by hand. I normally solve the problem by writing a simple console application that looks for files and patches any package it finds, and it is an example of this that I thought I'd tidy up a bit and publish here. This sample will look at the package and find any top level Execute SQL Tasks, and change the SQL Statement property to use an expression. It is very simplistic, working on top level tasks only, so nothing inside a Sequence Container or Loop will be checked, but obviously the code could be extended for this if required. The code that actually sets the expression is shown below; the rest is just wrapper code to find the package and to find the task.

      /// <summary>
      /// The CreationName of the Tasks to target, e.g. Execute SQL Task
      /// </summary>
      private const string TargetTaskCreationName = "Microsoft.SqlServer.Dts.Tasks.ExecuteSQLTask.ExecuteSQLTask, Microsoft.SqlServer.SQLTask, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

      /// <summary>
      /// The name of the task property to target.
      /// </summary>
      private const string TargetPropertyName = "SqlStatementSource";

      /// <summary>
      /// The property expression to set.
      /// </summary>
      private const string ExpressionToSet = "@[User::SQLQueryVariable]";

      ....

      // Check if the task matches our target task type
      if (taskHost.CreationName == TargetTaskCreationName)
      {
          // Check for the target property
          if (taskHost.Properties.Contains(TargetPropertyName))
          {
              // Get the property, check for an expression and set expression if not found
              DtsProperty property = taskHost.Properties[TargetPropertyName];
              if (string.IsNullOrEmpty(property.GetExpression(taskHost)))
              {
                  property.SetExpression(taskHost, ExpressionToSet);
                  changeCount++;
              }
          }
      }

    This is a console application, so to specify which packages you want to target you have three options:
      Find all packages in the current folder, the default behaviour if no arguments are specified: TaskExpressionPatcher.exe
      Find all packages in a specified folder, pass the folder as the argument: TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\
      Find a specific package, pass the file path as the argument: TaskExpressionPatcher.exe C:\Projects\Alpha\Packages\Package.dtsx
    The code was written against SQL Server 2005, but just change the reference to Microsoft.SQLServer.ManagedDTS to the SQL Server 2008 version and it will work fine. If you get the error Microsoft.SqlServer.Dts.Runtime.DtsRuntimeException: The package failed to load due to error 0xC0011008…, then check that the package is from the correct version of SSIS compared to the referenced assemblies, 2005 vs 2008 in other words. Download Sample Project TaskExpressionPatcher.zip (6 KB)
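    The excerpt above omits the wrapper code, so here is a minimal sketch of what it might look like, assuming the standard Microsoft.SqlServer.Dts.Runtime API. The class name, argument handling and console output are illustrative only and are not taken from the author's sample project.

      // Hypothetical wrapper for the patching logic shown above: load each .dtsx file,
      // walk its top-level executables, and save the package if anything changed.
      using System;
      using System.IO;
      using Microsoft.SqlServer.Dts.Runtime;

      static class PatcherSketch
      {
          static void Main(string[] args)
          {
              // No argument: current folder. One argument: a folder or a single .dtsx file.
              string target = args.Length > 0 ? args[0] : Directory.GetCurrentDirectory();
              string[] packageFiles = File.Exists(target)
                  ? new[] { target }
                  : Directory.GetFiles(target, "*.dtsx");

              Application application = new Application();
              foreach (string packageFile in packageFiles)
              {
                  Package package = application.LoadPackage(packageFile, null);
                  int changeCount = 0;

                  // Top-level executables only; containers are not descended into,
                  // matching the simplification described in the post.
                  foreach (Executable executable in package.Executables)
                  {
                      TaskHost taskHost = executable as TaskHost;
                      if (taskHost == null)
                          continue;

                      // The CreationName / property / expression check shown above goes
                      // here, incrementing changeCount whenever an expression is set.
                  }

                  if (changeCount > 0)
                  {
                      application.SaveToXml(packageFile, package, null);
                      Console.WriteLine("{0}: patched {1} task(s)", packageFile, changeCount);
                  }
              }
          }
      }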

    Read the article

  • Online Advertising And Marketing Your Services?

    - by Zenph
    I have been working on freelance sites for a good 4 or 5 years, bending over backwards to build a decent portfolio and generate great ratings. I take huge pride in my work (web applications). I'm completely lost because when I think what would happen if I suddenly lost my freelance account it isn't a pretty picture. I have literally no idea where else I could advertise my services apart from google paid advertising. Any suggestions? I'd of course be more than willing to pay for marketing and such. I've been searching google for ages and can't find much advice on where to advertise to secure good clients for web development work. I say good clients because I mean actual business owners, not somebody else who is outsourcing to me (where do they find clients?). I'd appreciate any help.

    Read the article

  • SQL Server 2012 edition comparison details are published

    - by DavidWimbush
    Interesting stuff, particularly if you're doing BI. BISM tabular and Power View will not be in Standard Edition, only in the new - presumably more expensive - Business Intelligence Edition. That kind of makes sense as you need a fairly pricey edition of SharePoint to really get all the benefits, but it's a shame there won't be some kind of limited version in Standard Edition. And Always On will be in Standard Edition but limited to 2 nodes. I really expected Always On to be Enterprise-only so this is a great decision. It allows those of us working at a more modest scale to benefit and raises the fault tolerance of SQL Server as a product to a new level.Read all about it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-editions.aspx

    Read the article

  • Bash prompt doesn't print until I interact with console again

    - by durron597
    I don't even know where to begin to diagnose this one. Usually, when a command finishes, the prompt prints itself for the next command. However, that is not happening. Hard to explain with words, I'll just use an example: User@Machine:~$ cp /mnt/mountname/directory/textfile.txt . After waiting several seconds (far too long for this operation on a small file) I press Enter, and see: User@Machine:~$ cp /mnt/mountname/directory/textfile.txt . User@Machine:~$ User@Machine:~$ So clearly the operation had finished, but the prompt didn't display... until I pressed enter, and then BOTH prompts instantly displayed. This error does not happen with commands like cd.

    Read the article

  • Add AD Domain user to sudoers from the command line

    - by Wyatt Barnett
    I'm setting up an Ubuntu 11.04 server VM for use as a database server. It would make everyone's lives easier if we could have folks login using windows credentials and perhaps even make the machine work with the current AD-driven security we've got elsewhere. The first leg of this was really easy to accomplish -- apt-get install likewise-open and I was pretty much in business. The problem I'm having is getting our admins into the sudoers groups -- I can't seem to get anything to take. I've tried: a) usermod -aG sudoers [username] b) adding the user names in several formats (DOMAIN\user, user@domain) to the sudoers file. None of which seemed to take, I still get told "DOMAIN\user is not in the sudoers file. This incident will be reported." So, how do I add non-local users to the sudoers?

    Read the article

  • Kuppinger Cole Paper on Entitlements Server

    - by Naresh Persaud
    Kuppinger Cole recently released a paper discussing external authorization describing how organizations can "future proof" their enterprise security by deploying Oracle Entitlements Server.  By taking a declarative security approach, security policy can be flexible and distributed across multiple applications consistently. You can get a copy of the report here. In fact Oracle Entitlements Server is being used in many places to secure data and sensitive business transactions. The paper covers the major  use cases for Entitlements Server as well as Kuppinger Cole's assessment of the market. Here are some additional resources that reinforce the cases discussed in the paper. Today applications for cloud and mobile applications can utilize RESTful interfaces. Click on this link to learn how. OES can also be used to secure data in Oracle Databases.   To learn more check out the new Oracle U  OES 11g course.

    Read the article

  • SEO consequences for merging country sites in a .com

    - by Pekka
    I am in the process of refactoring a number of rental portals I've built for a company with locations in Austria, Germany, Switzerland, and the Netherlands. Instead of the current setting of each country site running under its own domain name: www.companyname.de www.companyname.ch www.companyname.at I would love to merge them all in this way: www.companyname.com/de www.companyname.com/ch www.companyname.com/at with the country TLDs doing a 301 redirect to the respective .com address. However, I have been repeatedly told not to do this due to likely problems with SEO - the business is very SEO dependent, and being a rental chain, needs to be strong in local results. So the question is: Is there an unavoidable hit in Search Engine Optimization when redirecting to a central .com domain? What measures can be taken to soften the blow? What comes to my mind is explicitly specifying a lang attribute in the html tag. Are there any other ways to specifically point out geographical location for sub-directories?

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
    The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead.
    The Long Road To Stubs: This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced. This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes.
    As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot.
    In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of their publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust.
    We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object.
    We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any objects or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like:
      # DATA(i386) __iob 0x3c0
      # DATA(amd64,sparcv9) __iob 0xa00
      # DATA(sparc) __iob 0x140
      iob;
    A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script is used after both objects have been built to compare the real and stub objects, using data from elfdump, and validate that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level.
    Ultimately though, the result was unsatisfactory as a basis for a real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years.
    Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax. In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax. That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax.
    I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second:
      % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \
          -ztext -zdefs -Bdirect ...
      real        0.019708910
      user        0.010101680
      sys         0.008528431
    In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects however was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it.
    Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory known as the proto, and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away.
    A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the existing install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one.
    And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefiles, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timing between the stub and non-stub cases was just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects, followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects and 4631488 lib/Makefile is too patient: .WAITs should be reduced. This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then.
    Conclusions and Looking Forward: Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Intermittent silent failures when receiving email

    - by s t
    I’ve had a company host my website and email for the last six months or so and I’m having intermittent and silent failures where emails sent to me are not received. The sender never receives a “delivery notification failure” message. This has happened on multiple domains (@gmail.com, @microsoft.com) I’ve experienced it happening first hand when I sent a mail to myself from another account but I was unable to reproduce the error. It’s very rare (one in every 300 emails or so) The mails are not routed to my junk folder :) Obviously I’m worried about the effect this has on my business – but what can I do? I don’t believe I have enough information to diagnose the problem (neither does my hosting company when I presented them with the same information) – should I switch to another host?

    Read the article

  • ‘Unleash the Power of Oracle WebLogic 12c: Architect, Deploy, Monitor and Tune JEE6’: Free Hands On Technical Workshop

    - by JuergenKress
    Come to our Workshop and get bootstrapped in the use of Oracle WebLogic 12c for high performance systems. The workshop, organised by Oracle Gold Partners - C2B2 Consulting -  and run by the Oracle Application Grid Certified Specialist Steve Milldge, will start with a simple WebLogic 12c system which will scale up to a distributed, reliable system designed to give zero downtime and support extreme throughput. When? Wednesday,25th of July Where? Oracle Corporation UK Ltd. One South Place, London EC2M 2RB Visit www.c2b2.co.uk/weblogic and join us for this unique technical event to learn, network and play with some cool technology! WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: c2b2,ias to WebLogic,WebLogic basic,ias upgrade,C2B2,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • 2D Physics Engine to Handle Shapes Composed of Multiple Densities XNA

    - by Stupac
    The game I'm working on involves shapes that might be composed of multiple materials in a variety of ways. Let's just take for example a wooden rod with a sizable tip of iron, or say a block composed of a couple of triangles of stone and aluminum and a small nugget of gold. The shapes and compositions will change from time to time, so I was wondering what engine I should use and how I might implement this feature? I've looked at Farseer 3, but I'm still trying to decipher the library by reading the source and the samples and wasn't sure if I was barking up the wrong tree. Thanks!
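    In Box2D-style engines such as Farseer 3, one common way to model a shape with mixed materials is to attach several fixtures with different densities to a single body; the engine then derives the combined mass, center of mass and inertia for you. The sketch below is only an illustration of that idea: it assumes Farseer Physics 3.3's factory helpers (BodyFactory.CreateBody, FixtureFactory.AttachRectangle), and the names, signatures and density values may differ in other 3.x releases.

      using FarseerPhysics.Dynamics;
      using FarseerPhysics.Factories;
      using Microsoft.Xna.Framework;

      public static class CompositeBodySketch
      {
          // Builds a "wooden rod with an iron tip" as one dynamic body made of two fixtures.
          public static Body CreateRodWithIronTip(World world, Vector2 position)
          {
              Body body = BodyFactory.CreateBody(world, position);
              body.BodyType = BodyType.Dynamic;

              // 2D densities (mass per unit area); the values here are illustrative only.
              const float woodDensity = 0.7f;
              const float ironDensity = 7.8f;

              // Long thin wooden shaft, 2.0 x 0.1 units, centered on the body origin.
              FixtureFactory.AttachRectangle(2.0f, 0.1f, woodDensity, Vector2.Zero, body);

              // Small, much denser iron tip offset to one end of the shaft.
              FixtureFactory.AttachRectangle(0.2f, 0.1f, ironDensity, new Vector2(1.1f, 0f), body);

              return body;
          }
      }

    Because fixtures can be created and destroyed at runtime, changing a shape's composition later is mostly a matter of replacing the affected fixtures rather than rebuilding the whole body.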

    Read the article

  • Oracle’s new release of Primavera P6 Enterprise Portfolio Management

    It is estimated that projects totaling more than $6 trillion in value have been managed with Primavera products. Companies turn to Oracle's Primavera project portfolio management solutions to help them make better portfolio management decisions, evaluate the risks and rewards associated with projects, and determine whether there are sufficient resources with the right skills to accomplish the work. Tune into this conversation with Yasser Mahmud, Director of Product Strategy, for the Oracle Primavera Global Business Unit, to learn how P6 revolutionized project management, the new features in the release of Oracle Primavera P6 version 7 and how this newest release helps project-intensive businesses manage their entire project portfolio lifecycle, including projects of all sizes.

    Read the article

  • Iron Speed Designer Review

    While Visual Studio allows developers to get productive fast by providing great design tools for a UI, it still lacks the ability to do smart layouts, data connections and queries. It is in this area that a RAD suite of applications can tremendously boost productivity by abstracting away some of these issues and saving developer time to focus on business intelligence instead of data extraction and presentation. When it comes to RAD application suites for managed web applications, there is none better than Iron Speed Designer. The ease with which you can create a data-centric web application and produce different reports of your data within minutes is unparalleled. This review delves into what Iron Speed Designer has to offer as well as some of its limitations. Iron Speed works with .NET 2.0, 3.0, 3.5 and even the latest version, .NET 4.0.

    Read the article

  • What's So Smart About Oracle Exadata Smart Flash Cache?

    - by kimberly.billings
    Want to know what's so "smart" about Oracle Exadata Smart Flash Cache? This three minute video explains how Oracle Exadata Smart Flash Cache helps solve the random I/O bottleneck challenge and delivers extreme performance for consolidated database applications. Exadata Smart Flash Cache is a feature of the Sun Oracle Database Machine. With it, you get ten times faster I/O response time and use ten times fewer disks for business applications from Oracle and third-party providers. Read the whitepaper for more information.

    Read the article

  • Join us at BIWA Summit 2013!

    - by mhornick
    Registration is now open for BIWA Summit 2013.  This event, focused on Business Intelligence, Data Warehousing and Analytics, is hosted by the BIWA SIG of the IOUG on January 9 and 10 at the Hotel Sofitel, near Oracle headquarters in Redwood City, California. Be sure to check out our featured speakers, including Oracle executives Balaji Yelamanchili, Vaishnavi Sashikanth, and Tom Kyte, and Ari Kaplan, sports analyst, as well as the many other internationally recognized speakers.  Hands-on labs will give you the opportunity to try out much of the Oracle software for yourself (including Oracle R Enterprise)--be sure to bring a laptop capable of running Windows Remote Desktop.  There will be over 35 sessions on a wide range of BIWA-related topics.  See the BIWA Summit 2013 web site for details and be sure to register soon, while early bird rates still apply.

    Read the article

  • links for 2010-04-07

    - by Bob Rhubart
    James McGovern: Enterprise Architecture and Social CRM "With a few exceptions, the vast majority of enterprise architects I know spend an awful lot of time focused on internal issues whether it is rationalization, the cloud, storage governance, data center consolidation, creation of reference architectures, portfolio management and other considerations that aren't even visible to customers. One should ask whether IT can be truly successful if we are busy listening to the business but otherwise are blissfully ignorant towards the customers they serve." -- James McGovern (tags: enterprisearchitecture crm socialcomputing) WRF Benchmark: X6275 Beats Power6 - BestPerf "Oracle's Sun Blade X6275 cluster is 28% faster than the IBM POWER6 cluster on Weather Research and Forecasting (WRF) continental United States (CONUS) benchmark datasets. The Sun Blade X6275 cluster used a Quad Data Rate (QDR) InfiniBand connection along with Intel compilers and MPI." (tags: oracle sun x6275 benchmarks)

    Read the article

  • What must be done to allow a development team to minimize difficulties as new team members are added?

    - by Travis
    I work at a small Web Dev firm, and have been handling all the PHP/MySQL/etc. for a while. I'm looking at improving our practices to allow for easier collaboration as we grow. Some things I have in mind are: Implementing a versioning system (source control) Coding standards for the team (unless mandated by a certain framework, etc.) Enforcing a common directory structure for our Desktops (for backup purposes, etc.) Web-based task/project/time/file/password/contact management and collaboration app(we've tried a bunch; I may just create one) What do more experienced developers view as necessary first steps in this area? Do you recommend any books? One thing to consider is that the bulk of our daily tasks involve maintenance and adding minor functionality rather than new projects, and the team size will be between 3 and 5. I just found a related question about teams that will be expanding from a solo developer.

    Read the article

  • SQL SERVER – Video – Performance Improvement in Columnstore Index

    - by pinaldave
    I earlier wrote an article about SQL SERVER – Fundamentals of Columnstore Index and it was very well received in the community. However, one of the suggestions I keep receiving about that article is that many readers wanted to see the columnstore index in action but were not able to do so. Some of the readers had not installed SQL Server 2012, and some did not have a machine powerful enough to recreate the big table involved in the demo. For that reason, I have created a small video. I have written two more articles on the columnstore index. Please read them as a follow-up to the video: SQL SERVER – How to Ignore Columnstore Index Usage in Query SQL SERVER – Updating Data in A Columnstore Index Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Advice on SCRUM for the solitary developer [closed]

    - by ProfK
    Possible Duplicate: Agile for the Solo Developer
    I am looking for advice on the SCRUM process for a solitary developer. Most SCRUM resources I see focus on its use in a team environment, hence my question here. I'd like some guidance on structuring and managing my projects for SCRUM, with me as a solitary developer and business owner, but still occasionally including my clients for input and feedback. Areas I'm not clear on include resolving my backlog into 'sprintable' project areas and stories, defining user stories properly so that they can be digested by developer-level users, and defining feasible sprints for a single developer. Essentially, I'm looking for advice on moving away from scrum as I used it in a team/office environment, with colleagues and a project manager, and away from chaos/cowboy-coding on my own, toward taking on the PM role myself and adopting scrum for solo work. Any advice is welcome.

    Read the article

  • Java 7 update 6 installation fails on Windows 7 when Chrome is default browser

    - by ali1234
    I am configuring a brand new Lenovo U410 system with Windows 7 Home Premium for a user. I received the system direct from the shop. As part of the configuration I installed Java using the online installer, and this worked correctly. Later, due to a mistake I made, I needed to restore the system to factory default. The factory reset FORMATS C:\ and puts back (supposedly) the exact factory configuration. However, after doing this, I was no longer able to install Java successfully using the same method I used before. Now, whenever I attempt to use the online Java installer, the following happens. First of all, a window always appears: "Welcome to Java", "Downloading Java Installer...". After a short time this window disappears, and then one of three things happens:

    1. The very first time I do this after a factory reset, I get a Windows error report, which contains this information:
       Application Name: JavaSetup7u5.exe
       Application Version: 7.0.50.6
       Application Timestamp: 4feacd84
       Fault Module Name: JavaIC.dll
       Fault Module Version: 9.9.9.9
       Fault Module Timestamp: 4f2343d6
       Exception Offset: 000052cb
       Exception Code: c0000417
       Exception Data: 00000000
       OS Version: 6.1.7600.2.0.0.768.3
       Locale ID: 1033
       Additional Information 1: 773c
       Additional Information 2: 773cd78cf06816f8246f359fa270f3bb
       Additional Information 3: f51a
       Additional Information 4: f51aaea7d22f36fa9e3a626b5a5cd1c3
    2. Subsequent runs produce this error message: "Error: Java(TM) installer - Downloaded file C:\Users\\AppData\Local\Temp\fx-runtime.exe is corrupt."
    3. Nothing happens at all.

    I believe the "corrupt file" error is a red herring. Running the installer again causes a different error because the files were downloaded and the installer crashed before it could clean up. This isn't the actual problem: when this happens the installer deletes the downloaded files, and when you run it for the third time, it downloads everything again and hits the JavaIC.dll crash. I suspect the downloader is appending to the existing files or something, causing the corruption.

    I have tried all of the above as Administrator and as a normal user. I have tried resetting the system to factory defaults several times. I have tried downloading with Chrome and Internet Explorer 9. I have tried uninstalling all anti-virus software and disabling the Windows firewall entirely. The only thing which makes a difference is running the installer in Windows XP compatibility mode, which allows the installation to complete. I know I can work around this error by using the offline installer, so please don't post that as an answer; I am looking for an explanation of the root cause. Additionally, if I use the offline installer, the updater does not work. The updater also does not work if I install in XP mode. The updater fails because it works by just downloading the newest online setup and running it. Also remember that the installers are digitally signed. The signatures verify correctly, so there is no way in hell that this is caused by corrupted downloads.

    Some theories I have:
    - The Java setup files on java.com actually changed between the first successful install and my later attempts. This seems unlikely, as none of the version numbers have changed, but I have seen a couple of reports of this error which showed up in the past 24 hours, and this looks like the most likely explanation right now: http://www.oracle.com/us/corporate/press/1735645 - Oracle released 7 update 6 two days ago. Careful inspection of the installers reveals that they are in fact attempting to download .6, not .5 as the download page claims. (Not actually correct: only the update tool tries to install 7u6; the online installer still tries 7u5. However, 7u6 being released two days ago is too much of a coincidence to ignore.) Update: the 7u6 online installer is available from Oracle technetwork. It crashes in exactly the same way.
    - The factory reset software uses GMT-8 and I am on GMT-1. As a result, after a factory reset, any software which cares to check would think that the system was restored 7 hours in the future, due to Windows' awful policy of storing local time in the system clock. This could be confusing a certificate check or similar. Update: I discovered that this does cause Windows Update to fail, but the workaround, setting the clock back before starting the factory reset, does not enable Java to install correctly.
    - The factory reset image isn't really the same as what is installed in the main partition when you buy the system. Naughty Lenovo.
    - The installer appears to crash while installing or displaying something to do with the Ask.com toolbar. That seems to be what JavaIC.dll does.
    - Microsoft Tuesday was the 14th, so some update in that batch could be causing this. However, I'm factory resetting the machine every time, so unless the patches get slipstreamed into the recovery image, or there is some mechanism by which they get silently installed even if updates are disabled, I don't see how this can be the cause.

    Major breakthrough: the default browser on Lenovo systems is Google Chrome. I noticed that the JavaIC.dll "sponsor check" actually checks your default browser in order to decide which sponsor ad to display. Normally that would get you the Ask toolbar on IE9, but that toolbar doesn't work on Chrome, so the installer tries to display a different ad, and the different ad is what causes the crash. Changing the default browser to IE9 allows the installer to run correctly. So this looks like a genuine bug in the sponsor ad code in the installer, caused by the combination of Google Chrome being the default browser and not being in the US. (The installer also checks your location using an IP geolocation service and displays different ads based on that.)

    Read the article

  • Evil DRY

    - by StefanSteinegger
    DRY (Don't Repeat Yourself) is a basic software design and coding principle. But there is just no silver bullet. While DRY should increase maintainability by avoiding common design mistakes, it can lead to huge maintenance problems when misunderstood. The root of the problem is most probably that many developers believe DRY means that any piece of code written more than once should be made reusable. But the principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." The important word here is "knowledge". Nobody ever said "every piece of code". I will try to give some examples of misusing the DRY principle.

    Code Repetitions by Coincidence

    There is code that is repeated by pure coincidence. It is not the same code because it is based on the same piece of knowledge; it is just the same by coincidence. It's hard to give an example of such a case. Just think of some lines of code about which the developer thinks, "I already wrote something similar." He then takes the original code, puts it into a public method (or, even worse, into a base class where none had been before), adds some weird arguments and some if or switch statements to support all the special cases, and calls this "increasing maintainability based on the DRY principle". The resulting "reusable method" is usually something the developer can't even give a meaningful name, because its contents aren't anything specific; it is just a bunch of code. For the same reason, nobody will really understand this piece of code. Typically this method only makes sense to call after some other method has been called. All the symptoms of really bad design are evident. The fact is, writing this kind of "reusable method" is worse than copy-pasting! Believe me. What will happen when you change this weird piece of code? You can't say, because you can't understand what the code is actually doing. So better not touch it anymore. Maintainability just died. Of course this problem exists with any badly designed code. But because the developer tried to make this method as reusable as possible, large parts of the system become dependent on it. Completely independent parts get tightly coupled by this common piece of code. Changing the single common place will have effects anywhere in the system, a typical symptom of too-tight coupling. Without trying to dogmatically (and wrongly) apply the DRY principle, you would just have had a system with a weak design. Now you get a system which just can't be maintained anymore.

    So what can you do against it? When making code reusable, always identify the generally reusable parts of it. Find the reason why the code is repeated; find the common "piece of knowledge". If you have to search too far, it's probably not really there. Explain it to a colleague; if you can't explain it, or the explanation is too complicated, it's probably not worth reusing. If you identify the piece of knowledge, don't forget to carefully find the place where it should be implemented. Reusing code is never worth giving up a clean design. Methods always need to do something specific; if you can't give a method a simple and explanatory name, you probably did something weird. If you can't find the common piece of knowledge, try to make the code simpler. For instance, if you have some complicated string or collection operations within this code, write some general-purpose operations into a helper class. If your code gets simple enough, it's not so bad if it can't be reused.
    If you are not able to find anything simple and reasonable, copy-paste it. Put a comment into the code to reference the other copies. You may find a solution later.

    Requirements Repetitions by Coincidence

    Let's assume that you need to implement complex tax calculations for many countries. It's possible that some countries have very similar tax rules. These rules are still completely independent from each other, since every country can change its own. (Assume that this similarity is actually by coincidence and not by political membership; there might be basic rules applying to all European countries, etc.) Let's assume that there are similarities between an Asian country and an African country. Moving the common part to a central place will cause problems. What happens if one of the countries changes its rules? Or, more likely, what happens if users of one country complain about an error in the calculation? If there is shared code, it is very risky to change it, even for a bugfix. It is rare to find requirements that are repeated by coincidence, and when they are, there is not much you can do against the repetition of the code. What you really should do is make coding of the rules as simple as possible. This independent knowledge, the "Tax Rules in Timbuktu" or wherever, should be as pure as possible, without much overhead and stuff that does not belong to it, so that you can write every independent requirement short and clean.

    DRYing try-catch and using Blocks

    This is a technical issue. Blocks like try-catch or using (e.g. in C#) are very hard to DRY. Imagine complex exception handling with several catch blocks. When the contents of the try block as well as the contents of the individual catch blocks are trivial, but the whole structure is repeated in many places in the code, there is almost no reasonable way to DRY it.

        try
        {
            // trivial code here
            using (Thingy thing = new Thingy())
            {
                // trivial, but always different line of code
            }
        }
        catch (FooException foo)
        {
            // trivial foo handling
        }
        catch (BarException bar)
        {
            // trivial bar handling
        }
        catch
        {
            // trivial common handling
        }
        finally
        {
            // trivial finally block
        }

    The key here is that every block is trivial, so there is nothing to simply move into a separate method. The only part that differs from case to case is the line of code in the body of the using block (or any other block). The situation is especially interesting if the many occurrences of this structure are completely independent: they appear in classes with no common base class, they don't aggregate each other, and so on. Let's assume that this is a common pattern in service methods within the whole system. Examples of Evil DRYing in this situation:
    - Put an if or switch statement into the method to choose the line of code to execute. There are several reasons why this is not a good idea: the coupling of the formerly independent implementations is the tightest of all the options, and both the readability of the code and the use of a parameter to control the logic suffer.
    - Put everything into a method which takes a delegate as an argument to call. The caller just passes his "specific line of code" to this method (a short sketch of this option appears at the end of this post). The code will be very unreadable, and the same maintainability problems apply as for any "Code Repetition by Coincidence" situation.
    - Enforce a base class on all the classes where this pattern appears and use the template method pattern. This has the same readability and maintainability problems as above, but is additionally complex and tightly coupled because of the base class.
    I would call this "Inheritance by Coincidence", and it will not lead to great software design. What can you do against it?
    - Ideally, the individual line of code is a call to a class or interface which could be made individual by inheritance. If that were the case, it wouldn't be a problem at all; I assume it is not such a trivial case.
    - Consider refactoring the error concept to make error handling easier.
    - The last, but not worst, option is to keep the replications. Some patterns of code simply must be maintained consistently; there is nothing we can do against it, and there is no reason to make the code unreadable.

    Conclusion

    The DRY principle is an important and basic principle every software developer should master. The key is to identify the "pieces of knowledge". There is code which can't be reused easily for technical reasons, and it takes quite a bit of flexibility and creativity to keep such code simple and maintainable. This is not a problem with the principle; it is the problem of blindly applying a principle without understanding the problem it should solve. The result is usually much worse than ignoring the principle.
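
    As a rough illustration of the delegate option mentioned above, here is a minimal C# sketch. The names (ServiceScaffold, Thingy, FooException, BarException, DoSomethingSpecific) are invented for this example and are not from the original article; treat it as a sketch of the trade-off, not a recommended implementation. The trivial try/using/catch/finally scaffolding is written exactly once, and the one line that differs per call site is passed in as a delegate. That removes the textual repetition, but it also hides the error handling from the call site, which is exactly the readability cost described above.

        using System;

        public class FooException : Exception { }
        public class BarException : Exception { }

        // Stand-in for whatever disposable resource the real code uses.
        public class Thingy : IDisposable
        {
            public void DoSomethingSpecific() { /* the per-call-site work */ }
            public void Dispose() { /* trivial cleanup */ }
        }

        public static class ServiceScaffold
        {
            // The whole trivial try/using/catch/finally structure lives here once;
            // callers supply only the single line that actually differs.
            public static void Run(Action<Thingy> body)
            {
                try
                {
                    // trivial code here
                    using (Thingy thing = new Thingy())
                    {
                        body(thing); // the "always different line of code"
                    }
                }
                catch (FooException)
                {
                    // trivial foo handling
                }
                catch (BarException)
                {
                    // trivial bar handling
                }
                catch
                {
                    // trivial common handling
                }
                finally
                {
                    // trivial finally block
                }
            }
        }

        public static class Example
        {
            public static void Main()
            {
                // The call site becomes a one-liner, but the error handling is no
                // longer visible here, which is the readability cost discussed above.
                ServiceScaffold.Run(thing => thing.DoSomethingSpecific());
            }
        }

    Whether this trade-off is acceptable comes back to the main point of the article: it is only worth it if the scaffolding really is a single piece of knowledge, not a repetition by coincidence.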

    Read the article
