Search Results

Search found 22301 results on 893 pages for 'software sources'.


  • anxiety and programming [closed]

    - by user83379
    I went to the doctor for anxiety and was prescribed a small dose of lexapro to help with anxiety and sleeping better. I am cautious about taking it since this is the first time it's got bad enough for me to talk to someone and I'm concerned it may negatively impact my career as a software developer. I'm also afraid that once I start it may be difficult to come off. Does anyone on here have experience with this? Is it likely that taking lexapro would negatively affect my problem solving skills, passion for programming or job performance? Thanks for any suggestions.

    Read the article

  • Compiz stopped working

    - by Aikanáro
    I'm on a Sony VAIO laptop (VGN-NW330F) with Ubuntu 11.10. One day I used my computer normally and shut it down; the next day, when I turned it on again, everything was different. Compiz effects are not working at all: it doesn't matter which plugin (from CCSM) I enable or disable, nothing happens, nothing changes. I tried the following commands:
    unity --reset
    unity --reset-icons
    sudo rm -rf .config .gnome .gnome2 .gnome2_private .compiz .icons .fonts .nautilus .themes .qt .local (from my home directory)
    gconftool-2 --recursive-unset /apps/compiz-1
    I also tried reinstalling compiz (I removed it from the Software Center and installed it again). I have only been using Ubuntu for a couple of weeks, so I'm not sure whether there is a desktop called Unity 3D, but I think it is missing. I want the default graphical interface of Ubuntu 11.10 back.

    Read the article

  • How to install a Logitech c310 webcam?

    - by Teja
    I bought a new Logitech C310 webcam, and I don't know how to install a webcam on Ubuntu. I am trying to install its software through the terminal window:
    wine /media/LWS_2_0/Setup.exe
    but it shows:
    fixme:ole:TLB_ReadTypeLib Header type magic 0x00905a4d not supported.
    err:ole:TLB_ReadTypeLib Loading of typelib L"Z:\\media\\LWS_2_1\\MSetup.exe" failed with error 0
    and it opens a window that says: "We have detected you have connected your webcam to a USB 1.1 port. For the best performance and full feature set, we suggest using a USB 2.0 port." I tried all the USB ports available in my system; it shows the same error. Please tell me how to install this webcam on Ubuntu 11.10. Thanks for reading.

    Read the article

  • Oracle OpenWorld 2012 Hands-on Lab: “Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration”

    - by Lionel Dubreuil
    Sharpen your Oracle skill sets and master Oracle technology in Oracle OpenWorld Hands-on Labs. In self-paced, practical learning sessions covering everything from business applications to middleware, database, storage, and enterprise management solutions, you'll discover new ways to derive maximum benefits from your Oracle hardware and software solutions. Oracle experts will be available in person to answer questions and guide you through each lab. Hands-on Labs fill up early, and seats are limited, so don't be late. HOL10233 - Using Oracle AIA Foundation Pack for Oracle E-Business Suite Integration is scheduled for:
    Date: Monday, Oct 1
    Time: 10:45 AM - 11:45 AM
    Location: Marriott Marquis - Nob Hill CD
    In this Hands-on Lab, learn how to integrate Oracle E-Business Suite with third-party applications in your ecosystem with Oracle Application Integration Architecture (AIA) Foundation Pack. This hands-on lab focuses on SOA-based integration with Oracle E-Business Suite, using Oracle E-Business Suite Integrated SOA Gateway or the Oracle Applications Adapter to orchestrate a process across disparate applications in your ecosystem. Objectives for this session are to:
    Learn how to design and build an integration, from functional definition to development
    Learn how to automatically build services with robust error handling logic
    Learn how to discover and reuse services available in Oracle E-Business Suite

    Read the article

  • LSP vs. OCP / Liskov Substitution vs. Open/Closed

    - by Kolyunya
    I am trying to understand the SOLID principles of OOP, and I've come to the conclusion that LSP and OCP have some similarities (if not to say more). The open/closed principle states that "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification". LSP, in simple words, states that any instance of Foo can be replaced with any instance of Bar which is derived from Foo, and the program will work the same way. I'm not a pro OOP programmer, but it seems to me that LSP is only possible if Bar, derived from Foo, does not change anything in it but only extends it. That means that in a particular program, LSP holds only when OCP holds, and OCP holds only if LSP holds. That would mean they are equivalent. Correct me if I'm wrong. I really want to understand these ideas. Many thanks for an answer.
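    For what it's worth, the classic Rectangle/Square sketch below (in Java; the class names are illustrative, not from the question) shows a subclass that only "extends" its base yet still breaks substitutability, which suggests the two principles are related but not equivalent:

        // Rectangle invariant: width and height vary independently.
        class Rectangle {
            protected int width, height;
            void setWidth(int w)  { width = w; }
            void setHeight(int h) { height = h; }
            int area() { return width * height; }
        }

        // Square adds nothing new; it only "extends" Rectangle,
        // yet it silently changes the inherited contract.
        class Square extends Rectangle {
            @Override void setWidth(int w)  { width = w; height = w; }
            @Override void setHeight(int h) { width = h; height = h; }
        }

        public class LspDemo {
            static void resize(Rectangle r) {
                r.setWidth(4);
                r.setHeight(5);
                System.out.println("expected area 20, got " + r.area());
            }
            public static void main(String[] args) {
                resize(new Rectangle()); // prints 20
                resize(new Square());    // prints 25: not substitutable
            }
        }

    Here the caller's code is closed for modification and Square is pure extension, yet substitution fails, so OCP being satisfied does not by itself guarantee LSP.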

    Read the article

  • Windows 7 fails to install

    - by Brian Ortiz
    I'm upgrading from Vista SP1 (which was itself upgraded from XP over a year ago) to Windows 7 RTM (64-bit Ultimate to 64-bit Ultimate). After 4 hours or so, the install fails with the message "This version of Windows could not be installed. Your previous version of Windows has been restored, and you can continue to use it." After this error I'm back at my Vista desktop; there was no error that I could see during the install, just a message indicating that it was reverting everything. I tracked down the error log (from C:\$WINDOWS.~BT\Sources\Panther) and uploaded it onto Pastebin. Here is an excerpt:
    2009-08-09 02:54:57, Error Number of Enumerated Devices = 21[gle=0x00000103]
    2009-08-09 02:54:58, Error Failed to find driver file path. Error=00000002x
    2009-08-09 02:54:58, Error Failed to find driver file path. Error=00000002x
    2009-08-09 02:54:58, Error Failed to find driver file path. Error=00000002x[gle=0x80092004]
    2009-08-09 02:54:58, Error Failed to find driver file path. Error=00000002x[gle=0x80092004]
    It was suggested that I upgrade to SP2 before upgrading to Windows 7, but this made no difference. I have since uninstalled SP2 because it was creating some problems with a piece of hardware. I know a fresh install is best, but I'm hoping to avoid that because I'd need a new hard drive. Per Reuben's instruction, I found the install's dump and uploaded it here. (266 KB)

    Read the article

  • Is there a better approach to speech synthesis than text-to-speech for more natural output? [closed]

    - by Anne Nonimus
    We've all heard the output of text-to-speech systems, and for anything but very short phrases, it sounds very machine-like. The ultimate goal of speech synthesis systems is to pass a Turing test of hearing. Clearly, the state of the art in text-to-speech has much to improve. However, speech synthesis isn't restricted to just text-to-speech systems, and I'm wondering if other approaches have been tried with better success. In other words, has there been any work done (libraries, software, research papers, etc.) on natural speech synthesis other than text-to-speech systems?

    Read the article

  • Understanding hand written lexers

    - by Cole Johnson
    I am going to make a compiler for C (C99; I own the standard's PDF), written in C (go figure), and reading up on how compilers work on Wikipedia has told me a lot. However, reading up on lexers has confused me. The Wikipedia page states that the GNU Compiler Collection (gcc) uses hand-written lexers. I have tried googling what a hand-written lexer is and have come up with nothing except "making a flowchart that describes how it should function"; however, isn't that how all software development should be done? So my question is: what is a hand-written lexer?
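    In case it helps: a hand-written lexer is simply one coded by hand, character by character, rather than generated from a token specification by a tool such as lex or flex. A minimal sketch (in Java here, purely for illustration) that tokenizes identifiers, integers, and '+' might look like this:

        import java.util.function.IntPredicate;

        public class TinyLexer {
            enum Kind { IDENT, NUMBER, PLUS, EOF }
            record Token(Kind kind, String text) {}

            private final String src;
            private int pos = 0;
            TinyLexer(String src) { this.src = src; }

            // Hand-written lexing: look at the next character and branch.
            Token next() {
                while (pos < src.length() && Character.isWhitespace(src.charAt(pos))) pos++;
                if (pos >= src.length()) return new Token(Kind.EOF, "");
                char c = src.charAt(pos);
                if (Character.isDigit(c)) return run(Kind.NUMBER, Character::isDigit);
                if (Character.isLetter(c) || c == '_')
                    return run(Kind.IDENT, ch -> Character.isLetterOrDigit(ch) || ch == '_');
                if (c == '+') { pos++; return new Token(Kind.PLUS, "+"); }
                throw new IllegalArgumentException("unexpected character: " + c);
            }

            // Consume a maximal run of characters accepted by 'accept'.
            private Token run(Kind kind, IntPredicate accept) {
                int start = pos;
                while (pos < src.length() && accept.test(src.charAt(pos))) pos++;
                return new Token(kind, src.substring(start, pos));
            }

            public static void main(String[] args) {
                TinyLexer lx = new TinyLexer("count + 42");
                for (Token t = lx.next(); t.kind() != Kind.EOF; t = lx.next())
                    System.out.println(t.kind() + " \"" + t.text() + "\"");
            }
        }

    A generated lexer does the same job, but its state machine is produced by a tool from regular expressions instead of being spelled out by the programmer; gcc's lexer is the hand-spelled kind.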

    Read the article

  • How can I cap cloud costs?

    - by Joe Simpson
    I am looking into launching a cloud-based product for consumers where, with a prepaid account, they can start a server with a simple click, load up the software, and access it remotely. The technical side of that I can manage, but I am worried about the costs escalating ridiculously high, for both me and my customers. Is there a way I can:
    Limit how much each server can cost me before it will be deactivated
    See how much a server is currently costing me (so I can deduct it from their account)
    It needs to be extremely reliable, as I don't want to risk a giant bill under any circumstances.

    Read the article

  • Multi-lingual error messages and error numbers

    - by Jon Hopkins
    So we're looking at the possibility of porting our software to support multiple languages, and one of the areas we're going to have to deal with is error messages and other notifications. These obviously have to be reported to the users in their own language. Our team (largely) only speaks English, and even if we were all multi-lingual, we're looking at selling to a wide range of countries and could never expect to have a reasonable number of people speaking all languages (we're a small company). The obvious way to get round the language issue, for errors or other messages we may get asked about, is error numbers, which would be consistent across languages. While these are going to exist in the backend (if only as the key on the error message), I'd really rather not throw them at users if we don't have to, but I don't have any other solution. Anyone have any useful suggestions for alternatives?
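    One pattern worth considering (a minimal sketch in Java below; the bundle name, key scheme, and message text are hypothetical) is to use the error number as the stable lookup key into a per-language message catalog, so the number lives on in logs and support conversations while users mostly see translated text:

        import java.util.Locale;
        import java.util.ResourceBundle;

        public class Errors {
            // Assumes errors_en.properties, errors_es.properties, etc. on the
            // classpath, each with lines like: E1042=The file could not be saved
            static String message(int code, Locale locale) {
                ResourceBundle bundle = ResourceBundle.getBundle("errors", locale);
                // Lead with the translated text; tuck the stable code on the
                // end for support staff rather than putting it up front.
                return bundle.getString("E" + code) + " (E" + code + ")";
            }

            public static void main(String[] args) {
                System.out.println(message(1042, Locale.ENGLISH));
                System.out.println(message(1042, new Locale("es")));
            }
        }

    Whether to display the code at all is the asker's open question; keeping it as a small suffix is a common compromise between clean UX and diagnosable support tickets.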

    Read the article

  • Oracle Exalogic Elastic Cloud - Planned Webcasts

    - by chuck.speaks
    I'm putting together a collection of recorded webcasts around Oracle Exalogic Elastic Cloud (Exalogic). The plan is to do a systems overview and then multiple deep dives into the hardware and software components that make up the engineered system. Those of you who are members of our partner community (Oracle Partner Network), drop me a note if you are interested in a full-blown in-class delivery via PTS resources. There is no schedule for these workshops, but if there is enough interest, I would venture to guess they would roll out soon. Those of you with applications certified on Oracle WebLogic Server that would like to scale to Exalogic: see me or watch this space. Chuck Speaks chuck <dot> speaks at oracle <dot> com

    Read the article

  • RoundhousE now supports Oracle, SQL2000

    RoundhousE, the database migration software that is based on SQL scripts, has added support for Oracle and SQL Server 2000. There have also been numerous other little improvements, including better logging and a script-run errors table. The script errors table captures what went wrong when (or if) your scripts are not quite up to par or there is some other issue. A special thanks goes out to http://twitter.com/PascalMestdach and http://twitter.com/jochenjonc. They worked hard on this, and all I did was provide guidance.

    Read the article

  • How to Configure Microsoft Security Essentials

    Microsoft Security Essentials is the software giant's free solution for home users as well as small businesses. As long as you have a genuine copy of Windows running on your PC, you can enjoy all it has to offer. The program is characterized by easy installation and a user interface that is intuitive and rather simple to navigate. With so many viruses, spyware, and other malicious items floating all around the Web, keeping your PC secure should be of utmost importance. After all, you want to protect your investment and your sanity at the same time. Having a solid program such as Microsof...

    Read the article

  • convert image to spritesheet of tiles for isometric map?

    - by Paul
    Is there a way to convert an isometric image (like the first image) to a spritesheet (like the second image), in order to place each image on the isometric map with code? The map looks like the first image, but some buildings are bigger than just one tile, so I need several squares (let's say the first image is a building, made of multiple tiles with different colors), and each square is placed with an offset of 64x32. The building is created in Blender, and I save the image with the isometric perspective. But then I have to split each square out of this image in order to have the spritesheet. Maybe there is a smarter way, or a Java program that would make the conversion for me?
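    If a plain rectangular-grid split is enough (true isometric diamonds would additionally need masking), the cutting can be scripted in a few lines of Java with only the standard library; the file names and the 64x32 tile size below are placeholders taken from the question:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class TileSlicer {
            public static void main(String[] args) throws Exception {
                BufferedImage sheet = ImageIO.read(new File("building.png"));
                int tileW = 64, tileH = 32;
                int cols = sheet.getWidth() / tileW;
                int rows = sheet.getHeight() / tileH;
                for (int row = 0; row < rows; row++) {
                    for (int col = 0; col < cols; col++) {
                        // Cut one tile-sized region out of the rendered image
                        // and write it to its own file.
                        BufferedImage tile = sheet.getSubimage(col * tileW, row * tileH, tileW, tileH);
                        ImageIO.write(tile, "png", new File("tile_" + col + "_" + row + ".png"));
                    }
                }
            }
        }

    Dedicated spritesheet tools generally go the other way (packing loose tiles into a sheet), so a small script like this is often the simplest route for the split direction.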

    Read the article

  • problems updating system with apt-get

    - by Javier
    After doing apt-get update, when I try apt-get upgrade I get the following error message. This is a copy of my terminal (translated from Spanish):
    root@LinuxJGP:/home/javiergp# apt-get upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages will be upgraded:
    apport apport-symptoms fonts-liberation gnome-icon-theme gnome-orca language-pack-en language-pack-en-base language-pack-es language-pack-es-base language-pack-gnome-en language-pack-gnome-en-base language-pack-gnome-es language-pack-gnome-es-base light-themes linux-firmware oneconf resolvconf sessioninstaller software-center ssl-cert tzdata ubuntu-docs ubuntu-keyring ubuntu-sso-client ubuntuone-control-panel ubuntuone-installer unity-lens-video unity-scope-video-remote xdiagnose
    29 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    E: The package index files are corrupted. No "Filename:" field for package ubuntu-keyring.
    How can I resolve this problem?

    Read the article

  • Cannot format a FAT filesystem, getting error "Both FATs appear to be corrupt. Giving up."

    - by Nilesh
    I am trying to format a FAT (or FAT32) file system on my Ubuntu system, but I am not able to format the device; each time I get the error "Both FATs appear to be corrupt. Giving up." I have tried options like:
    sudo dosfsck -t -a -w /dev/sdc1
    sudo dosfsck -w -r -l -a -v -t /dev/sdc1
    but each time the same message comes up. Can anyone guide me on how to recover the filesystem? I don't mind losing the data on this drive, as it is an external pen drive. Also, can you please suggest a method other than booting from a CD with software like GParted or something like that.

    Read the article

  • How can a student programmer improve his teamwork skill?

    - by xiao
    I am a student right now. Recently, I have been working on a project as the leader, together with three other students. Due to our lack of experience, the project is progressing slowly and our members are frustrated; they do not feel a sense of accomplishment in the project. I am pressured and frustrated too, but as the team leader I think I need to push them, and I do not know how. Should I help them solve coding problems, or just offer encouragement? If I pay too much attention to this, it slows down my own progress. This is not a technical question, but it is very common in software development. I hope veteran programmers can give me some suggestions. Thanks!

    Read the article

  • Google I/O 2010 - Building your own Google Wave provider

    Google I/O 2010 - Open source Google Wave: Building your own wave provider Wave 101 Dan Peterson, Jochen Bekmann, JD Zamfirescu Pereira, David LaPalomento (Novell) Learn how to build your own wave service. Google is open sourcing the lion's share of the code that went into creating Google Wave to help bootstrap a network of federated providers. This talk will discuss the state of the reference implementation: the software architecture, how you can plug it into your own use cases -- and how you can contribute to the code and definition of the underlying specification. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers Views: 8 0 ratings Time: 59:03 More in Science & Technology

    Read the article

  • Cyclic Dependencies.

    - by PhilCK
    Are cyclic dependencies a common thing in games dev? I ask as I keep getting into situations where I'm using them, and I have been told more than once that they should be avoided. I am wondering if this is just what people say as a general rule of thumb in the software development business, and whether the nature of game programming produces such dependencies.
    // Foo
    #include <Bar.hpp>
    class Foo {
        Bar& m_bar;
    };
    and
    // Bar
    class Foo;
    class Bar {
        Foo* m_foo;
    };
    I do this a lot in Ruby, but dynamic languages are more forgiving in this instance, whereas static ones, not so much.
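    For the record, one standard way to break such a cycle (a sketch in Java with illustrative names, since the idea is language-neutral) is to invert one of the two dependencies through a small interface, so neither concrete class names the other:

        // Bar depends on this abstraction instead of on Foo directly.
        interface BarListener {
            void onBarChanged();
        }

        class Bar {
            private BarListener listener; // knows the interface, not Foo
            void setListener(BarListener l) { listener = l; }
            void change() { if (listener != null) listener.onBarChanged(); }
        }

        class Foo implements BarListener {
            private final Bar bar;
            Foo(Bar bar) { this.bar = bar; bar.setListener(this); }
            @Override public void onBarChanged() { System.out.println("Foo notified"); }
        }

        public class CycleDemo {
            public static void main(String[] args) {
                Bar bar = new Bar();
                new Foo(bar);  // Foo knows Bar...
                bar.change();  // ...but Bar only knows BarListener
            }
        }

    In the C++ version above, the forward declaration makes the cycle compile, but the design-level cycle remains; the interface is what actually removes it.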

    Read the article

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss have substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability.

    Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn't so much that alternative solutions are bad, it's just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let's explore further.

    Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts – people, processes, software and systems – that can create a level of uncertainty during an outage that critical applications can't afford. This is why backups play a secondary role for your most critical databases by complementing real-time solutions that can provide both data protection and availability.

    Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise – whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt – the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy.

    Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective. Bottom line, storage remote-mirroring lacks both the smarts and isolation level necessary to provide true data protection.
    Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes – it is a completely independent Oracle Database. Active Data Guard's inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to use the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical checksum, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern.

    An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database – not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby – boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible to do using storage remote-mirroring.

    So don't be fooled by appearances. Storage remote-mirroring and Active Data Guard replication may look similar on the surface – but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see:
    Active Data Guard Technical White Paper
    Active Data Guard vs Storage Remote-Mirroring
    Active Data Guard Home Page on the Oracle Technology Network

    Read the article

  • How important is it to sacrifice your free time to accomplish goals? [closed]

    - by Darf Zon
    I was reading a book about XP programming and agile teams. While I was reading, I came across this scenario. I've never worked with a development team (just in school), so I would like to know what you think of these situations:
    Your boss has asked you to deliver software in a time frame that can only be met if the project team works overtime without pay. All team members have young children. Discuss whether you should accept this request from your boss or should persuade the team to give their time to the organization rather than to their families. What could be significant factors in the decision?
    As a programmer, you are offered a promotion to project manager, but your feeling is that you can make a more effective contribution in a technical role than in an administrative one. Write down when you should accept the promotion.
    Sometimes I sacrifice my free time to accomplish things at work, so it's very important to me to hear your opinion, based on your experience.

    Read the article

  • Which tool to use for "home banking"?

    - by Huygens
    I would like to manage my bank accounts in a secure manner on Ubuntu. I saw several applications in the Software Centre, but I don't know which one to choose. I don't need fancy features like stock options. I just have regular accounts which I want to follow, I don't want complicated stuff. As bank data are quite sensitive, I would highly prefer an application that does encryption of the data. Though, if you have a really cool app but it does not have this feature, as long as it offers to store the data in one dedicated place, I could do with encrypting that place. So what tool do you use that could fit my needs?

    Read the article

  • OpenVPN, install a TAP adapter

    - by GolezTrol
    When I try to connect to my work VPN using OpenVPN, the connection fails with the message: All TAP-Win32 adapters on this system are currently in use. Many sources suggest looking in Control Panel\Network and Internet\Network Connections and enabling the TAP adapter, but when I look there, there is none. Now I've run addtap.bat, which is provided with OpenVPN, but I still don't get to see any TAP adapter, and logging in to the VPN still fails. The output of addtap.bat is:
    C:\Windows\system32>"C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" install "C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf" tap0801
    Device node created. Install is complete when drivers are updated...
    Updating drivers for tap0801 from C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf.
    Drivers updated successfully.
    I ran both the OpenVPN setup and addtap.bat with Run As Administrator. I've run deltapall.bat to remove any (maybe hidden) adapters; it said it removed three of them, after which I ran addtap.bat again to try to create another one. I also run OpenVPN itself as administrator. What's wrong? Running Windows 7 Home Premium on an HP Pavilion dv7-4050ed. It has worked before, but I recently had to reinstall my laptop, for which I used the restore disks I created when I just got it. Everything else seems to work fine.
    == UPDATE ==
    The TAP adapter is found in Device Manager, but apparently it is disabled because it is incompatible with 64-bit Windows 7. I've uninstalled OpenVPN GUI, downloaded a version that should be 64-bit compatible, and installed that. Still no cigar. Then I found a tip to install OpenVPN (version 9) after installing OpenVPN GUI, because that installs OpenVPN version 8. Now I have a v9 TAP driver in Device Manager, but it still doesn't work: it shows up in Device Manager with an exclamation mark, and not at all in my network devices.

    Read the article

  • Webcast: Introducing the New Oracle VM Blade Cluster Reference Configuration

    - by Ferhat Hatay
    The Fastest Way to Virtualize Your Datacenter. Join our webcast "Best Practices for Speeding Virtual Infrastructure Deployment with Oracle VM"
    Tues., January 25, 2011, 9 a.m. PT / 12 p.m. ET
    Presented by: Marc Shelley, Senior Manager, Oracle Blades Product Management; Tom Lisjac, Senior Member, Oracle Technical Staff
    Register now for our live webcast!
    The Oracle VM blade cluster reference configuration addresses the key challenges associated with deploying a virtualization infrastructure. It eliminates or significantly reduces, by up to 98%, the assembly and integration of the following components: servers, storage, network connections, virtualization software, and operating systems. Attend this fact-filled, how-to webcast with Oracle experts to learn the best practices for deploying the reference configuration for Oracle VM Server for x86 and Sun Blade and Sun Fire x86 rack-mount servers. Virtualization is easier than ever with this new configuration. Register now for our live webcast!
    For more information, see:
    Oracle white paper: Accelerating deployment of virtualized infrastructures with the Oracle VM blade cluster reference configuration
    Oracle technical white paper: Best Practices and Guidelines for Deploying the Oracle VM Blade Cluster Reference Configuration

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article. There's more to say than fits into a commentary.

    Mocks and TDD

    I don't see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks. Because you don't want to constantly run all your tests against the real resource.

    True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion.

    But if you think of mocks as "advanced" or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals" as written in this "requirement document" is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear.

    If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which not only should hold for software functional units but also for tasks or activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions.

    Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking roman numerals from a wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody.

    Before you code you better have a plan what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too.

    Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members.

    The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic. Each "digit" can be taken at face value. No multiplication with a base required.

    But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman "digits" not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some "digits" get subtracted.

    Here's the list of roman "digits" with their values:
    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a two phase process:
    Find all the factors
    Translate the factors found
    Compile the roman representation
    Translation is just a look-up.
    Finding, though, needs some calculation:
    Find the highest remaining factor fitting in the value
    Remember and subtract it from the value
    Repeat with remaining value and remaining factors

    Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are several aspects to test:
    Test the translation
    Test the compilation
    Test finding the factors

    Testing the translation primarily means to check if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps:
    First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M).
    Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly.
    Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]).
    Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly.

    I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one.

    I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes. That's a white box test. But as you'll see it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: it's as simple as possible. Right how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important.

    Now for the next low hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first. I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate or some next one.

    The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

    Now for more than a single factor. Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion of which I had thought briefly during the problem solving phase. And by the way: also the acceptance tests went green. Mission accomplished. At least functionality-wise.

    Now I have to tidy up things a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I rather call it "cleanup".

    First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors().

    And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single "digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class. And this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case.

    That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That's the leaves of the functional decomposition tree.
    Where things became fuzzy, since the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high-level tests, while first solving a simpler problem.

    Using scaffolding tests (to be thrown away at the end) brought two advantages:
    Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them.
    I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what there is. Then devise a solution. Then code the solution, manifest the solution in code.

    Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end, which is often skipped for different reasons.

    PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
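    Since the test and production code in this post exist only as screenshots, here is a rough reconstruction of the final design as described (in Java rather than the post's C#, and with find/translate/compile collapsed into one greedy pass for brevity), just to make the algorithm concrete:

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class ArabicRomanConversions {
            // Roman "digits" and their values, highest first; IV, IX etc.
            // are treated as digits in their own right, as argued above.
            private static final Map<Integer, String> FACTORS = new LinkedHashMap<>();
            static {
                FACTORS.put(1000, "M"); FACTORS.put(900, "CM");
                FACTORS.put(500, "D");  FACTORS.put(400, "CD");
                FACTORS.put(100, "C");  FACTORS.put(90, "XC");
                FACTORS.put(50, "L");   FACTORS.put(40, "XL");
                FACTORS.put(10, "X");   FACTORS.put(9, "IX");
                FACTORS.put(5, "V");    FACTORS.put(4, "IV");
                FACTORS.put(1, "I");
            }

            public static String toRoman(int arabic) {
                StringBuilder roman = new StringBuilder();
                for (Map.Entry<Integer, String> factor : FACTORS.entrySet()) {
                    // Take the highest remaining factor as long as it fits,
                    // translating and compiling as we go.
                    while (arabic >= factor.getKey()) {
                        roman.append(factor.getValue());
                        arabic -= factor.getKey();
                    }
                }
                return roman.toString();
            }

            public static void main(String[] args) {
                System.out.println(toRoman(1954)); // MCMLIV
                System.out.println(toRoman(2014)); // MMXIV
            }
        }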

    Read the article
