Search Results

Search found 13719 results on 549 pages for 'design evolution'.


  • Is hidden content (display: none;) -indexed- by search engines? [closed]

    - by user568458
    Possible Duplicate: How bad is it to use display: none in CSS?

    We've established on this site before (in this question) that, since there are so many legitimate uses for hiding content with display: none; when creating interactive features, sites aren't automatically penalised for content that is hidden this way (so long as it doesn't look algorithmically spammy). Google's Webmaster guidelines also make clear that a good practice, when using content that is initially and legitimately hidden for interactivity purposes, is to also include the same content in a <noscript> tag, and Google recommend that if you design and code for users, including users with screen readers or JavaScript disabled, then nine times out of ten good, relevant search rankings will follow (though their specific advice seems more written for cases where JavaScript writes new content to the page):

        JavaScript: Place the same content from the JavaScript in a <noscript> tag. If you use this method, ensure the contents are exactly the same as what's contained in the JavaScript, and that this content is shown to visitors who do not have JavaScript enabled in their browser.

    So, best practice seems pretty clear. What I can't find out, however, is the simple factual matter of whether hidden content is indexed by search engines (but with potential penalties if it looks 'spammy'), or whether it is ignored, or whether it is indexed but with a lower weighting (like <noscript> content is, apparently). (For bonus points it would be great to know whether this varies or is consistent between display: none;, visibility: hidden;, etc., but that isn't crucial.)

    This is different to the other questions on display: none; and SEO - those are about good and bad practice, and their answers are discussions of good and bad practice. I'm interested simply in the factual yes-or-no question of whether search engines index, or ignore, content that is in display: none; - something those other questions' answers aren't totally clear on. One other question has an answer, "Yes", supported by a link to an article that doesn't really clear things up: it establishes that search engines can spot that text is hidden, it discusses (again) whether hidden text causes sites to be marked as spam, and it ultimately concludes that in mid-2011 Google's policy on hidden text was still evolving, and that they hadn't at that time started automatically penalising display: none; or marking it as spam. It's clear that display: none; isn't always spam and isn't always treated as spam (many Google sites use it...), but this doesn't clear up how, or if, it is indexed.

    What I will do is follow the guidelines and make sure that all the content that is initially hidden, which regular users can explore using JavaScript-driven interactivity, is also structured in a way that noscript/screen-reader users can use. So I'm not interested in best practice, opinions, etc., because best practice seems really clear: accessibility best practice boosts SEO. But I'd like to know what exactly will happen: whether any display: none; content I have alongside <noscript> or otherwise accessibility-optimised content will be ignored, or indexed again, or picked up to compare against the <noscript> content but not indexed... etc.
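    To illustrate, this is the kind of markup I mean (a minimal invented example; the element names and content are hypothetical):

        <!-- Content revealed by JavaScript-driven interactivity; hidden initially -->
        <div id="product-details" style="display: none;">
          Full product details go here...
        </div>

        <!-- The same content repeated for visitors without JavaScript -->
        <noscript>
          <div class="product-details">
            Full product details go here...
          </div>
        </noscript>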

    Read the article

  • Understanding the Value of SOA

    - by Mala Narasimharajan
    Written By: Debra Lilley, ACE Director, Fusion Applications

    Again I want to talk from my area of expertise, Fusion Applications, and discuss their design fundamentals. If you look at the table below and start at the bottom: Oracle have defined all of the business objects, e.g. accounts, people, customers, invoices, etc., used by Fusion Applications. Each of these objects contains all of the information required and can be expanded if necessary.

    Oracle have also created, for each of these business objects, every action that is needed by the applications, e.g. all the actions to create a new customer: checking to see if it exists, credit checking with D&B (Dun & Bradstreet, http://www.dnb.co.uk/), creating the record, notifying those required, etc. Each of these actions is a stand-alone web service. Again, you can create new actions or subscribe to an externally provided web service, e.g. the D&B check. The diagram also shows that all of the development of Fusion Applications comes from Oracle's Fusion Middleware offerings.

    The Intelligent Business Process is then the order in which you run these actions; this is Service Orientated Architecture, SOA. Not only is SOA used to orchestrate actions within Fusion Applications, it is also used in the integration of Fusion Applications with the rest of the Oracle stable of applications such as EBS, PeopleSoft, JDE and Siebel. Those other applications are written with proprietary development tools, so how do they work with SOA? It's a very simple answer: with the introduction of the Oracle SOA platform, each process within these applications was made available to be called as a web service. I won't go into technically how that is done, but what's known as a wrapper was added to allow each of them to act in this way.

    Finally, at the top of the diagram are the questions that each Fusion Application process must answer, and this is the 'special' sauce that makes them so good: the User Experience. But that is a topic for another day, or you can read about it in my blog http://debrasoracle.blogspot.co.uk/2014/04/going-on-record-about-fusion-apps-cloud.html or Oracle's own UX blog https://blogs.oracle.com/usableapps/

    The concept behind AppAdvantage is not new; the idea that Oracle technology can add value to your Oracle applications investments is pretty fundamental. Nishit Rao, who is on the AppAdvantage team, provided me and other ACE Directors with demo kits so that we could demonstrate SOA running with the applications. The example I learnt to build was the EBS inventory open interface. The simple concept is that request records can be added to a table and an import run that creates these as transactions in inventory. What SOA allows you to do is add to the table from any source and then run this process automatically, whereas traditionally you had to run the process at regular intervals because you didn't know whether the table was empty or not. This may just sound like a different way of doing the same thing, but if the process is critical for your business then the interval was very small and the process potentially run many times unnecessarily. Using SOA it only happens when necessary, without any delay.

    So in my post today I've talked about how SOA is used with Fusion Applications and in the linking with more traditional applications, but that is only the tip of the iceberg of potential: your applications are just part of your IT systems, and SOA can orchestrate your data across all of them; the beauty of open standards.

    Debra Lilley, Fusion Champion, UKOUG Board Member, Fusion User Experience Advocate and ACE Director. Lilley has 18 years' experience with Oracle Applications, with E-Business Suite since 9.4.1, moving to Business Intelligence Team Lead and Oracle Alliance Director. She has spoken at over 100 conferences worldwide and posts at debrasoraclethoughts.

    Read the article

  • A programmer who doesn't get to program - where to turn? [closed]

    - by Just an Anon
    I'm in my mid 20's and have been working as a full-time programmer/developer for the last ~6 years, with several years of part-time freelancing before that, and three straight years of freelancing in the middle of this short career. I work mostly with PHP and the Drupal framework. By and large, I focus on programming custom pieces of functionality; these, of course, vary greatly from project to project. I've got years of solid experience with OOP (have done some Java and C# years ago, too), including intensive experience with front-end development, and even some design work. I've led small teams (2-4 people) of developers. And of course, given the large amount of freelancing, I've got decent project- and client-management skills.

    My problem is staying motivated at any place of employment. In the time mentioned I've worked (full-time) at six local companies. The longest I've stayed at any company was just over a year. I find that I'll get hired and be very excited and motivated for the first few months, but the work quickly gets "stale." By that I mean that the interesting components (i.e. the programming) get done, and the rest of the work turns into boring cleanup (move a button, add text, change colours, add a field). I don't get challenged, and I don't feel like I'm learning anything new. This happens repeatedly, time and time again, and I always end up leaving for either a new opportunity or to freelance. I'm wondering if perhaps I've painted myself into a corner with this rather niche work market (although one with very high demand and good compensation) and need to explore other career choices. Another possibility is that I may be choosing the wrong places of employment, mostly small agencies, and need to look into working for a larger, more established firm.

    I find programming, writing code, and architecting solutions very rewarding. When I'm working on an interesting problem I lose all sense of time, and 14-16 hours can fly by like minutes. I get the same exciting feeling when I'm doing high-level planning of a complex system, breaking up the work and figuring out how everything will tie in together. I absolutely hate doing small, "stupid" changes that pose no challenge, yet they seem to make up more and more of my work.

    I want to find a workplace where I will get to work on such tasks, be challenged, and improve in all areas of product development. This may be a programming job, management, architecture of desktop apps, or maybe managing a taco stand on a beach in Mexico - I don't know, and I need some advice and real-world feedback. What are some job areas worth exploring? The requirements are fairly simple:

    - working with computers
    - interacting with others
    - challenging
    - decent pay (I'm making just short of 90k/year with a month of vacation and some benefits, and would like to stay in this range, but am willing to take a temporary cut in pay for a more interesting position)

    Any advice would be much appreciated!

    Read the article

  • Reality behind wireless security - the weakness of encrypting

    - by Cawas
    I welcome better key-wording here, both on tags and title, and I'll add more links as soon as possible.

    For some years I've been trying to conceive a wireless environment that I'd set up anywhere and advise for everyone, from big enterprises to small home networks of one machine. I've always had the feeling that using any of the so-called "wireless security" methods is actually bad design. I'm talking mostly about encrypting and pass-phrasing (which are actually two different concepts), since I won't even consider hiding the SSID or MAC filtering.

    I understand it's a natural way of thinking. With cabled networking nobody can access the network unless they have access to the physical cable, so you're "secure" in the physical way. In a way, encrypting is for wireless what walling (building walls) is for cables, and giving pass-phrases is adding a door with a key. But cabling without encryption is also insecure. Someone just needs to plug in and get your data! And while I can see the use for encrypting data, I don't think it's a security measure in wireless networks. As I said elsewhere, I believe we should encrypt only sensitive data regardless of wires, and passwords should always be added to the users, not to the wifi. For securing files, truly, the best solution is backup. Sure, all that doesn't happen that often, but I won't consider the many situations where people just don't care. I think there are enough situations where people actually do care about using passwords on their OS users, so let's go with that in mind.

    To be able to break the walls or the door, someone will need proper equipment such as a hammer or a master key of some kind. The same is true for breaking the wireless walls in the analogy. But I'd say true data security is at another place. I keep promoting the Fonera concept as an instance. It opens up a free wifi port, if you choose so, and anyone can connect to the internet through it without having any access to your LAN. It also uses QoS, which will never let your bandwidth drop from that public usage. That's security, and it's open. And who doesn't want to be able to use the internet freely anywhere you can find wifi spots? I have 3G myself, but that's beside the point here. If I have wifi at home I want to let people use it freely for internet, so as not to be a hypocrite, and even guests can easily access my files, just with read access, so I don't need to keep setting up encryption and pass-phrases that are not wholly compatible.

    I'll probably be bashed for promoting the non-usage of WPA2 with AES or whatever, but I wanted to know from more experienced (super) users out there: what do you think? Is there really a need for encryption to have true wireless security?

    Read the article

  • Windows Question: RunOnce/Second Boot Issues

    - by Greg
    I am attempting to create a Windows XP SP3 image that will run my application on second boot. Here is the intended workflow:

    1) Run an Image Prep Utility (which I wrote) on Windows to add my RunOnce entries and clean a few things up.
    2) Reboot to Ghost and make the image file.
    3) Package it into my ISO and distribute.
    4) The system is imaged by the user.
    5) On first boot, I have about 5 things that run, one of which is a driver updater (which I wrote) for my own specific devices.
    6) One of the entries inside of HKCU\...\RunOnce is a reg file, which adds another key to HKLM\...\RunOnce. This is how second boot is acquired (see the sketch below).
    7) As a result of the driver updater, the user is prompted to reboot.
    8) My application is then launched from HKLM\...\RunOnce on second boot.

    This workflow works perfectly, except on a select few legacy systems that contain devices that cause the Add Hardware wizard to pop up. When the Add Hardware wizard pops up is when I begin to see problems. It's important to note that if I manually inspect the registry after the Add Hardware wizard pops up, it appears as I would expect, with all the first-boot scripts having run, and it's sitting in the state I would correctly expect it to be in for a second-boot scenario. The problem comes when I click Next on the Add Hardware wizard: it seems to re-run the single entry I've added and re-executes the RunOnce scripts (only one script now, as it's already executed and cleared out the initial entries). This causes my application to open as if it were a second boot, but only when Next is clicked on the Add Hardware wizard. If I click Cancel and reboot, then it also works as expected.

    I don't care as much about other solutions, because I could design a system that doesn't fully rely on Microsoft's registry. I simply can't find any information as to WHY this is happening. I believe this is some type of Microsoft issue that's presenting itself as a result of an overstretched image that's expected to support too many legacy platforms, but any help that can be provided would be appreciated. Thanks,
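    For clarity, the chaining in step 6 is done with a .reg file along these lines (the value name and application path here are simplified and hypothetical; the RunOnce key paths are the standard ones):

        Windows Registry Editor Version 5.00

        ; Imported during first boot (e.g. via "regedit /s secondboot.reg"),
        ; this queues the application for the next logon, i.e. second boot.
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce]
        "LaunchMyApp"="C:\\MyApp\\MyApp.exe"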

    Read the article

  • SSD Fresh Does Not Start

    - by Jim Fell
    I recently installed a new 60GB SSD as my primary hard drive and re-installed Windows 7 Professional 64-bit. I then installed SSD Fresh from Abelssoft to optimize Windows to run on the SSD. It seemed to install okay, but when I try to run the utility, its splash screen appears briefly before it quietly closes. No errors are displayed; the utility just fails to launch. I have run SSD Fresh on another SSD-equipped Windows 7 Pro x64 computer in the past without any problems. Does anyone know what might be preventing the program from running?

    I tried shutting down the Spybot Resident and disabling the firewall and virus scanner, with no luck. I also tried running the tool as administrator; I even tried reinstalling it, running the installer as administrator. No luck. Every time I try to launch the program, the Event Viewer logs this same set of errors:

        Error 4/2/2012 11:35:44 PM Application Error 1000 (100)
        Error 4/2/2012 11:35:43 PM .NET Runtime 1026 None
        Error 4/2/2012 11:35:39 PM SideBySide 59 None
        (this SideBySide 59 entry is repeated many more times in the log)

    For those who are interested, here is my system configuration:

        ASRock M3A770DE AM3 AMD 770 ATX AMD Motherboard
        AMD Athlon II X3 455 Rana 3.3GHz Socket AM3 95W Triple-Core Desktop Processor ADX455WFGMBOX
        G.SKILL Value Series 8GB (2 x 4GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) Desktop Memory Model F3-10600CL9D-8GBNT
        Mushkin Enhanced Chronos Deluxe MKNSSDCR60GB-DX 2.5" 60GB SATA III Synchronous MLC Internal Solid State Drive (SSD) (Primary/Boot HD)
        Western Digital Caviar Blue RFHWD1600AAJS 160GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive - Bare Drive (Secondary HD)
        Sony Optiarc CD/DVD Burner Black SATA Model AD-7261S-0B LightScribe Support
        RAIDMAX RX-850AE 850W ATX12V v2.3 / EPS12V SLI Certified CrossFire Ready 80 PLUS GOLD Certified Modular Active PFC Power Supply
        ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card
        Asus ML228H 21.5" Full HD LED BackLight LED Monitor Slim Design (x3)
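    In case it helps, my understanding is that SideBySide event 59 generally means a failed side-by-side assembly resolution (often a missing Visual C++ runtime), so my next step is to trace it with the built-in sxstrace tool. A hedged sketch, assuming an elevated command prompt:

        rem Start tracing, then reproduce the failed launch of SSD Fresh
        sxstrace Trace -logfile:SxsTrace.etl
        rem Stop the trace, then convert the log to readable text
        sxstrace Parse -logfile:SxsTrace.etl -outfile:SxsTrace.txt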

    Read the article

  • How to get same cookie to control two different folders on same site.

    - by Incandescent
    I am using the cookie JavaScript below to run a background color changer on my site. I want to also use it for the background color of my forum, which is in a separate folder (http://lightbulbchoice.com/forum). I currently have it working on both the site and the forum, but you have to set each separately, i.e., each is setting its own cookie. How do I get the forum to locate the main site cookie and not set its own?

        // Cookie Functions - Second Helping (21-Jan-96)
        // Written by: Bill Dortch, hIdaho Design
        // The following functions were released to the public domain by him.
        function getCookieVal (offset) {
          var endstr = document.cookie.indexOf (";", offset);
          if (endstr == -1)
            endstr = document.cookie.length;
          return unescape(document.cookie.substring(offset, endstr));
        }

        function GetCookie (name) {
          var arg = name + "=";
          var alen = arg.length;
          var clen = document.cookie.length;
          var i = 0;
          while (i < clen) {
            var j = i + alen;
            if (document.cookie.substring(i, j) == arg)
              return getCookieVal (j);
            i = document.cookie.indexOf(" ", i) + 1;
            if (i == 0) break;
          }
          return null;
        }

        function SetCookie (name, value) {
          var argv = SetCookie.arguments;
          var argc = SetCookie.arguments.length;
          var expires = (argc > 2) ? argv[2] : null;
          var path = (argc > 3) ? argv[3] : null;
          var domain = (argc > 4) ? argv[4] : null;
          var secure = (argc > 5) ? argv[5] : false;
          document.cookie = name + "=" + escape (value) +
            ((expires == null) ? "" : ("; expires=" + expires.toGMTString())) +
            ((path == null) ? "" : ("; path=" + path)) +
            ((domain == null) ? "" : ("; domain=" + domain)) +
            ((secure == true) ? "; secure" : "");
        }
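    From reading SetCookie above, it looks like its optional path argument is what matters: a cookie set with path "/" is sent for every path on the site, so the root pages and /forum would share one cookie. A hedged usage sketch (the cookie name and value are invented for the example):

        // Set a single site-wide cookie (note the "/" path) instead of one per folder
        SetCookie("bgColor", "beige", null, "/");

        // Both the main site and /forum now read the same value
        var color = GetCookie("bgColor");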

    Read the article

  • Best RAID setup for multimedia fileserver?

    - by Mr. Schwabe
    I'm building a fileserver for my small office. We do film and multimedia design, with only 3 clients connected. The server is primarily for local access to graphic assets and video files. I'm looking for advice on the hardware and software required, particularly for the RAID. I have the following objectives:

    A) Merged capacity. I'd like all other systems to access the data as a single mapped network drive with an initial capacity of 10 TB. So perhaps 5x 2TB drives (plus mirror drives for redundancy).

    B) An easy way to increase capacity. Thinking long term, I'd like to 'easily' add more drives to the array for a potential two- or three-fold increase in capacity. So theoretically it could get up to a 30 TB RAID array consisting of maybe 15x 2TB drives of capacity (plus mirror drives for redundancy).

    C) Maximum fault tolerance. I want at least 1 mirror drive per capacity drive (in layman's terms). So if I start with 10 TB / 5x 2TB of capacity, I suppose I would need another 5x 2TB drives to be mirrors, so 10 drives total. But I'd also like the potential for even more redundancy, with up to 2 additional mirrors per 'capacity drive' (and to be able to add them to the array any time with ease).

    D) An easy way to monitor drive health. I'd like an intuitive interface for managing the RAID and monitoring drive health.

    The other systems accessing this network drive will be running Windows, but also the odd Ubuntu and MacOS system as well. Are these objectives attainable? What type of RAID setup do you recommend? What hardware will be required? Also, what OS do you think this system should be running? Does it really matter? I'm no network admin - just a long-time Windoze user without much Linux experience. That said, I'm not opposed to a Linux solution if it's easy enough and more practical than a Windows OS for this server. Or maybe something such as Openfiler.

    The budget should hit the sweet spot for value and performance (hence my preference for 2TB drives). The biggest focus is storage; aside from that, the system just needs to keep the drives running optimally with perhaps 2 or 3 clients accessing / writing files at any given time. The hardware quote would start with something like 10x 2TB WD Caviar Blacks; about $1900 for the storage + $x for the remaining parts. http://ncix.com/products/index.php?sku=42775&vpn=WD2001FASS&manufacture=Western%20Digital%20WD

    Your advice is appreciated, thanks!

    Read the article

  • Hot-swap drive got new name, can I change it on-the-fly?

    - by T.J. Crowder
    One of the HDDs in my server's RAID config failed, so I took it out of the array and had the data center hot-swap it. They've done that, but now the new drive is /dev/sdc rather than /dev/sda. I suspect (correct me if I'm wrong) that if I reboot the server it will be /dev/sda again, so I'm hesitant to add it back to the array as /dev/sdc because I don't want to lay a trap for myself to fall into on the next reboot. I'd just as soon not reboot the server if I don't need to (if I do need to, well, too bad for me). Is there a way I can change the device name from /dev/sdc to /dev/sda without rebooting?

    This is on Ubuntu 10.04 LTS. It's an md array ("Linux Software RAID"), where currently one of the devices (there are a couple of them) looks like this ("degraded" because I've removed the old /dev/sda from it):

        # mdadm --detail /dev/md0
        /dev/md0:
                Version : 00.90.03
          Creation Time : Sun Oct 11 21:07:54 2009
             Raid Level : raid1
             Array Size : 97536 (95.27 MiB 99.88 MB)
          Used Dev Size : 97536 (95.27 MiB 99.88 MB)
           Raid Devices : 2
          Total Devices : 1
        Preferred Minor : 0
            Persistence : Superblock is persistent

            Update Time : Thu Jun 30 09:31:16 2011
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   UUID : 496be7a5:ab9177ed:7792c71e:7dc17aa4
                 Events : 0.112

            Number   Major   Minor   RaidDevice   State
               0       8       17        0        active sync   /dev/sdb1
               1       0       0         1        removed

    Thanks,

    Update: Reading through the kernel md documentation, I suspect that if the name changes on reboot, it won't matter. (Good design, that.) Here's why:

        Boot time autodetection of RAID arrays
        When md is compiled into the kernel (not as module), partitions of
        type 0xfd are scanned and automatically assembled into RAID arrays.
        This autodetection may be suppressed with the kernel parameter
        "raid=noautodetect". As of kernel 2.6.9, only drives with a type 0
        superblock can be autodetected and run at boot time.
        The kernel parameter "raid=partitionable" (or "raid=part") means
        that all auto-detected arrays are assembled as partitionable.

    I do have md compiled into the kernel, so I'm rebuilding the array now and will do the reboot to see what happens. Even if it works, the above doesn't answer the question I actually asked, so unless someone comes along and answers that question in the meantime (I'd be interested, even if it's not necessary for what I'm doing this very moment), I'll just delete the question to keep the noise down.
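    For completeness, this is the rebuild I'm running, assuming the replacement disk has been partitioned to match /dev/sdb and its first partition is /dev/sdc1 (both assumptions for the sketch):

        # Add the replacement partition back into the degraded array
        mdadm /dev/md0 --add /dev/sdc1

        # Watch the resync progress
        watch cat /proc/mdstat

    Since md identifies array members by the UUID in their superblocks rather than by device name, the array should reassemble correctly even if the kernel assigns the disk a different name on the next boot.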

    Read the article

  • Moving default web site to another drive

    - by Chadworthington
    I set the default location from c:\inetpub\wwwroot to d:\inetpub\wwwroot, but when I access my .NET 4.0 site I get this error:

        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.

        Parser Error Message: Unrecognized attribute 'targetFramework'. Note that attribute names are case-sensitive.

        Source Error:
        Line 105:       Set explicit="true" to force declaration of all variables.
        Line 106:   -->
        Line 107:   <compilation debug="true" strict="true" explicit="true" targetFramework="4.0">
        Line 108:     <assemblies>
        Line 109:       <add assembly="System.Web.Extensions.Design, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>

    When I try to manage the Basic Settings on the site and click the "Test Settings" button, I see that I have a problem under "authorization":

        The server is configured to use pass-through authentication with a built-in account to access the specified physical path. However, IIS Manager cannot verify whether the built-in account has access. Make sure that the application pool identity has Read access to the physical path. If this server is joined to a domain, and the application pool identity is NetworkService or LocalSystem, verify that <domain>\<computer_name>$ has Read access to the physical path. Then test these settings again.

    1) Do I need to grant rights to IIS to the new folder? Which user? I thought it was something like IIS_USER or similar, but I cannot determine the correct name of the user.

    2) Also, do I need to set the default version of the framework somewhere, at the Default Site level or at the virtual folder level? How is this done in IIS6? I am used to IIS5 or whatever came with XP Pro.

    3) My original site had a subfolder under wwwroot called "aspnet_client." How was this created? I manually copied it to the corresponding new location. My app was using separate ASP-specific databases for storing session state and role info, if that is relevant.

    Thanks
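    For reference, here is what I'm considering trying, based on the two messages above; a hedged sketch only, assuming the site runs in DefaultAppPool, the new path is d:\inetpub\wwwroot, and this is IIS 7 or later (appcmd ships with IIS 7+). The 'targetFramework' parser error typically appears when the application pool is still bound to the .NET 2.0 runtime, and the pass-through warning asks for Read access for the pool identity:

        rem Bind the application pool to the .NET 4.0 runtime
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /managedRuntimeVersion:v4.0

        rem Grant the IIS worker-process group read access to the new physical path
        icacls "d:\inetpub\wwwroot" /grant "BUILTIN\IIS_IUSRS:(OI)(CI)(RX)" /T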

    Read the article

  • Windows 7 immediately disconnects a USB drive

    - by Daniel Saner
    I am having a problem with Windows 7 x64 consistently disconnecting one specific USB mass storage drive immediately after it is connected. The drive in question is a Cowon C2 digital music player, which works in standard mass storage controller mode (i.e. no device-specific drivers needed/available). When I connect the player, Windows plays the "USB connect" sound and the device appears (under its correct name) in the device manager, but it never appears as a drive. The player itself displays "USB Connected" for a split second before reporting that it has been disconnected again. Since the player, by design, reboots after it has been disconnected, Windows plays the "USB disconnect" sound before restarting the whole cycle once the player has powered back on.

    I am connecting the player through an Intel X79 chipset motherboard (Gigabyte GA-X79-UD3) to Windows 7 Pro 64-bit. The player used to work fine the first few times I connected it, showing up as an external drive; it only recently stopped working. It is not a problem with the player, since it works fine when connected to another computer, even one running the exact same operating system. It is also not a problem with the USB controller, since the issue is the same on both the Intel USB 2.0 and the Fresco Logic FL1009 USB 3.0 controller ports. I have also not had the problem with any other drive so far.

    Among the things I have tried so far:

    - Disabling USB legacy mode in BIOS
    - Disabling energy-saving power-down for all USB controllers in Windows' device manager
    - Removing and reinstalling Windows' USB mass storage driver
    - Removing and reinstalling the Intel and Fresco Logic USB controller drivers
    - Restoring the player to factory defaults

    None of these made a difference. Again, the player used to work fine on the exact same system just days ago; I didn't install any new hardware or drivers on it since then. I would be very grateful for any hints on what else to try.

    Edit: Here is another new hint; I found out that when I connect the drive before booting Windows, it is available in Windows Explorer as it should be, and does not automatically disconnect. If I remove and reconnect it, though, the infinite connect/disconnect loop starts anew.

    Read the article

  • Google Rules for Retail

    - by David Dorf
    In the book What Would Google Do?, Jeff Jarvis outlines ten "Google Rules" that define how Google acts.  These rules help define how Web 2.0 businesses operate today and into the future.  While there's a chapter in the book on applying these rules to the retail industry, it wasn't very in-depth.  So I've decided to apply the rules to retail more directly, along with some notable examples of success.  The list below shows Jeff's Google Rule, some Industry Examples, and the New Retailer Rules that I created.

    - New Relationship: Your worst customer is your friend; your best customer is your partner. Industry examples: Newegg.com lets manufacturers respond to customer comments that are critical of the product, and their EggXpert site lets customers help other customers. New retailer rule: Listen to what your customers are saying about you.  Convert the critics to fans and the fans to influencers.

    - New Architecture: Join a network; be a platform. Industry examples: Tesco and BestBuy released APIs for their product catalogs so third parties could create new applications. New retailer rule: Become a destination for information.

    - New Publicness: Life is public, so is business. Industry examples: The Zappos and WholeFoods founders are prolific tweeters/bloggers, sharing their opinions and connecting to customers.  It's not always pretty, but it's genuine. New retailer rule: Be transparent.  Share both your successes and failures with your customers.

    - New Society: Elegant organization. Industry examples: Wet Seal helps their customers assemble outfits and show them off to each other.  Barnes & Noble has a community site that includes a bookclub. New retailer rule: Communities of your customers already exist, so help them organize better.

    - New Economy: Mass market is dead; long live the mass of niches. Industry examples: lululemon found a niche for yoga-inspired athletic wear.  Threadless uses crowd-sourcing to design short runs of T-shirts. New retailer rule: Serve small markets with niche products.

    - New Business Reality: Decide what business you're in. Industry example: When Lowes realized catering to women brought the men along, their sales increased. New retailer rule: Customers want experiences to go with the products they buy.

    - New Attitude: Trust the people and listen. Industry example: In 2008 Starbucks launched MyStarbucksIdea to solicit ideas from their customers. New retailer rule: Use social networks as additional data points for making better merchandising decisions.

    - New Ethic: Be honest and transparent; don't be evil. Industry examples: Target is giving away reusable shopping bags for Earth Day.  Kohl's has outfitted 67 stores with solar arrays. New retailer rule: Being green earns customers' respect and lowers costs too.

    - New Speed: Life is live. Industry examples: H&M and Zara keep up with fashion trends. New retailer rule: Be prepared to pounce on your customers' fickle interests.

    - New Imperatives: Encourage, enable and protect innovation. Industry examples: 1-800-Flowers was the first to do sales in Facebook and an early adopter of mobile commerce.  The Sears Personal Shopper mobile app finds products based on a photo. New retailer rule: Give your staff permission to fail so innovation won't be stifled.

    Jeff will be a keynote speaker at Crosstalk, our upcoming annual user conference, so I'm looking forward to hearing more of his perspective on retail and the new economy.

    Read the article

  • Where is my app.config for SSIS?

    Sometimes when working with SSIS you need to add or change settings in the .NET application configuration file, which can be a bit confusing when you are building a SSIS package, not an application. First of all let's review a couple of examples where you may need to do this:

    - You are referencing an assembly in a Script Task that uses Enterprise Library (aka EntLib), so you need to add the relevant configuration sections and settings, perhaps for the logging application block.
    - You are using Enterprise Library in a custom task or component, and again you need to add the relevant configuration sections and settings.
    - You are using a web service with Microsoft Web Services Enhancements (WSE) 3.0 and hosting the proxy in SSIS, in an assembly used by your package, and need to add the configuration sections and settings.
    - You need to change behaviours of the .NET framework which can be influenced by a configuration file, such as the System.Net.Mail default SMTP settings. Perhaps you wish to configure System.Net and the httpWebRequest header for parsing unsafe headers (useUnsafeHeaderParsing), which will change the way the HTTP Connection Manager behaves (a sample is sketched at the end of this article).
    - You are consuming a WCF service and wish to specify the endpoint in configuration.

    There are no doubt plenty more examples, but each of these requires us to identify the correct configuration file and make the relevant changes. There are actually several configuration files, each used by a different execution host depending on how you are working with the SSIS package.

    The folders we need to look in will vary depending on the version of SQL Server as well as the processor architecture, but most are in what we can call the Binn folder. The SQL Server 2005 Binn folder is at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\, compared to C:\Program Files\Microsoft SQL Server\100\DTS\Binn\ for SQL Server 2008. If you are on a 64-bit machine then you will see C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\ for the 32-bit executables and C:\Program Files\Microsoft SQL Server\90\DTS\Binn\ for 64-bit, so be sure to check all relevant locations. Of course SQL Server 2008 may have a C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\ on a 64-bit machine too.

    To recap, the version of SQL Server determines whether you look in the 90 or 100 sub-folder under SQL Server in Program Files (C:\Program Files\Microsoft SQL Server\nn\). If you are running a 64-bit operating system then you will have two instances of Program Files, C:\Program Files (x86)\ for 32-bit and C:\Program Files\ for 64-bit. You may wish to check both depending on what you are doing, but this is covered more under each section below.

    There are a total of five specific configuration files that you may need to change, each one detailed below.

    DTExec.exe.config
    DTExec.exe is the standalone command line tool used for executing SSIS packages, and therefore it is an execution host with an app.config file, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTExec.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders.

    DtsDebugHost.exe.config
    DtsDebugHost.exe is the execution host used by Business Intelligence Development Studio (BIDS) / Visual Studio when executing a package from the designer in debug mode, which is the default behaviour, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DtsDebugHost.exe.config. The file can be found in both the 32-bit and 64-bit Binn folders. This may surprise some people, as Visual Studio is only 32-bit, but thankfully the debugger supports both. This can be set in the project properties; see the Run64BitRuntime property (true or false) in the Debugging pane of the Project Properties.

    dtshost.exe.config
    dtshost.exe is the execution host used by what I think of as the built-in features of SQL Server, such as SQL Server Agent, e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtshost.exe.config. This file can be found in both the 32-bit and 64-bit Binn folders.

    devenv.exe.config
    Something slightly different is devenv.exe, which is Visual Studio. This configuration file may also need changing if you need a feature at design-time, such as in a Task Editor or Connection Manager editor.
    Visual Studio 2005 for SQL Server 2005 - C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe.config
    Visual Studio 2008 for SQL Server 2008 - C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config
    Visual Studio is only available for 32-bit, so on a 64-bit machine you will have to look in C:\Program Files (x86)\ only.

    DTExecUI.exe.config
    The DTExecUI tool can also have a configuration file, and these can be found under the Tools folders for SQL Server as shown below.
    C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe
    C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe
    A configuration file may not exist, but if you can find the matching executable you know you are in the right place, so you can go ahead and add a new file yourself.

    In summary, we have covered the assembly configuration files for all of the standard methods of building and running a SSIS package, but obviously if you are working programmatically you will need to make the relevant modifications to your program's app.config as well.
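    To make the useUnsafeHeaderParsing example above concrete, here is a hedged sketch of the section you would add to, say, DtsDebugHost.exe.config (the setting itself is standard .NET System.Net configuration; whether you need it depends on the servers your packages talk to):

        <configuration>
          <system.net>
            <settings>
              <!-- Relax HTTP header validation for servers that send non-compliant headers -->
              <httpWebRequest useUnsafeHeaderParsing="true" />
            </settings>
          </system.net>
        </configuration>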

    Read the article

  • Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider

    - by Gopinath
    If you are going to buy an iPhone 4S on a two-year contract in the USA, Europe or Australia, you may not find it expensive. But if you are planning to buy it in other parts of the world, you will definitely feel the heat of the ridiculous iPhone 4S price. In India the iPhone 4S costs approximately $1000, which is 30% more than the price tag of an unlocked iPhone sold in the USA. Personally I love iPhones, as there is no match for the user experience provided by Apple, as well as the wide range of really meaningful applications available for the iPhone. But it breaks my heart to spend $1000 for a phone, and I'm forced to look at the alternatives available in the market. Here are the four iPhone 4S alternatives available in almost all the countries where you can buy an iPhone 4S.

    Google Galaxy Nexus
    The Galaxy Nexus is Google's own Android smartphone, manufactured by Samsung and sold under the Google Nexus brand. The Galaxy Nexus is the purest Android phone available in the market, without any bloatware or custom user interfaces like the other Androids available. The Galaxy Nexus is also the first Android phone to ship with the latest version of the Android OS, Ice Cream Sandwich. This phone is the benchmark for the rest of the Android phones that are going to enter the market soon. In the words of Google, this smartphone is "Galaxy Nexus: Simple. Beautiful. Beyond Smart.". The BGR review summarizes the phone as:

        This is almost comical at this point, but the Samsung Galaxy Nexus is my favourite Android device in the world. Easily replacing the HTC Rezound, the Motorola DROID RAZR, and Samsung Galaxy S II, the Galaxy Nexus champions in a brand new version of Android that pushes itself further than almost any other mobile OS in the industry.

    Samsung Galaxy S II
    The one company that is able to sell more smartphones than Apple is Samsung. Samsung recently displaced Apple from the top smartphone seller spot and occupied it with loads of pride. Samsung's Galaxy S II fits as one of the best alternatives to Apple's iPhone 4S with its beautiful design and remarkable performance. Engadget summarizes its Samsung Galaxy S II review as:

        It's the best Android smartphone yet, but more importantly, it might well be the best smartphone, period. Of course, a 4.3-inch screen size won't suit everyone, no matter how stupendously thin the device that carries it may be, and we also can't say for sure that the Galaxy S II would justify a long-term iOS user forsaking his investment into one ecosystem and making the leap to another. Nonetheless, if you're asking us what smartphone to buy today, unconstrained by such externalities, the Galaxy S II would be the clear choice. Sometimes it's just as simple as that.

    Nokia Lumia 800
    Here comes an unexpected Windows Phone into the boxing ring. Maybe they are not as great as the Androids available in the market today, but they are picking up very quickly. In particular, the Nokia Lumia 800 seems to be the first Windows Phone 7 device aimed at competing seriously with the Androids and iPhones in the market. There are reports that the Nokia Lumia 800 is outselling all Androids in the UK, and a few high-profile tech blogs are calling it the king of Windows Phone. Considering this phone while evaluating alternatives to the iPhone 4S will not disappoint you. We assure.

    Droid RAZR
    Remember the Motorola Droid that swept the entire Android market share a couple of years ago? The first two versions of the Motorola Droid were the best in the market, and they outperformed almost every other Android phone in those days. With the invasion of the Samsung Androids, Motorola lost its charm. With the recent release of the Droid RAZR, Motorola seems headed in the right direction to reclaim that prestige. The Droid RAZR is the thinnest smartphone available in the market, and its beauty is not just skin deep. Here is a review of the phone from the Engadget blog:

        the RAZR's beauty is not only skin deep. The LTE radio, 1.2GHz dual-core processor and 1GB of RAM make sure this sleek number is ready to run with the big boys. It kept pace with, and in some cases clearly outclassed its high-end competition. Despite its deficiencies in the display department and underwhelming battery life, the RAZR looks to be a perfectly viable alternative when considering the similarly-pricey Rezound and Galaxy Nexus

    Further Reading
    So we have seen the four alternatives to the iPhone 4S available in the market, and I personally would love to buy a Samsung smartphone if I don't have the money for an iPhone 4S. If you are interested in deep-diving into the alternatives, here are a few links to help you do more research:

    - Apple iPhone 4S vs. Samsung Galaxy Nexus vs. Motorola Droid RAZR: How Their Specs Compare, by Huffington Post
    - Nokia Lumia 800 vs. iPhone 4S vs. Nexus Galaxy: Spec Smackdown, by PC World
    - Browser Speed Test: Nokia Lumia 800 vs. iPhone 4S vs. Samsung Galaxy S II, by Gizmodo
    - iPhone 4S vs Samsung Galaxy S II, by Pocket-lint
    - Apple iPhone 4S vs. Samsung Galaxy S II, by Techie Buzz

    This article titled, Looking For iPhone 4S Alternatives? Here Are 3 Smartphones You Should Consider, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • May 20th Links: ASP.NET MVC, ASP.NET, .NET 4, VS 2010, Silverlight

    - by ScottGu
    Here is the latest in my link-listing series.  Also check out my VS 2010 and .NET 4 series and ASP.NET MVC 2 series for other on-going blog series I'm working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    ASP.NET MVC

    - How to Localize an ASP.NET MVC Application: Michael Ceranski has a good blog post that describes how to localize ASP.NET MVC 2 applications.
    - ASP.NET MVC with jTemplates Part 1 and Part 2: Steve Gentile has a nice two-part set of blog posts that demonstrate how to use the jTemplate and DataTable jQuery libraries to implement client-side data binding with ASP.NET MVC.
    - CascadingDropDown jQuery Plugin for ASP.NET MVC: Raj Kaimal has a nice blog post that demonstrates how to implement a dynamically constructed cascading dropdownlist on the client using jQuery and ASP.NET MVC.
    - How to Configure VS 2010 Code Coverage for ASP.NET MVC Unit Tests: Visual Studio enables you to calculate the "code coverage" of your unit tests.  This measures the percentage of code within your application that is exercised by your tests, and can give you a sense of how much test coverage you have.  Gunnar Peipman demonstrates how to configure this for ASP.NET MVC projects.
    - Shrinkr URL Shortening Service Sample: A nice open source application and code sample built by Kazi Manzur that demonstrates how to implement a URL Shortening Service (like bit.ly) using ASP.NET MVC 2 and EF4.  More details here.
    - Creating RSS Feeds in ASP.NET MVC: Damien Guard has a nice post that describes a cool new "FeedResult" class he created that makes it easy to publish and expose RSS feeds from within ASP.NET MVC sites.
    - NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2: Nice two-part blog series by Shiju Varghese on how to use MongoDB (a document database) with ASP.NET MVC.  If you are interested in document databases also make sure to check out the Raven DB project from Ayende.
    - Using the FCKEditor with ASP.NET MVC: Quick blog post that describes how to use FCKEditor, an open source HTML Text Editor, with ASP.NET MVC.

    ASP.NET

    - Replace Html.Encode Calls with the New HTML Encoding Syntax: Phil Haack has a good blog post that describes a useful way to quickly update your ASP.NET pages and ASP.NET MVC views to use the new <%: %> encoding syntax in ASP.NET 4.  I blogged about the new <%: %> syntax; it provides an easy and concise way to HTML encode content.
    - Integrating Twitter into an ASP.NET Website using OAuth: Scott Mitchell has a nice article that describes how to take advantage of Twitter within an ASP.NET Website using the OAuth protocol, which is a simple, secure protocol for granting API access.
    - Creating an ASP.NET report using VS 2010 Part 1, Part 2, and Part 3: Raj Kaimal has a nice three-part set of blog posts that detail how to use SQL Server Reporting Services, ASP.NET 4 and VS 2010 to create a dynamic reporting solution.
    - Three Hidden Extensibility Gems in ASP.NET 4: Phil Haack blogs about three obscure but useful extensibility points enabled with ASP.NET 4.

    .NET 4

    - Entity Framework 4 Video Series: Julie Lerman has a nice, free, 7-part video series on MSDN that walks through how to use the new EF4 capabilities with VS 2010 and .NET 4.  I'll be covering EF4 in a blog series that I'm going to start shortly as well.
    - Getting Lazy with System.Lazy: System.Lazy and System.Lazy<T> are new features in .NET 4 that provide a way to create objects that may need to perform time-consuming operations, and defer the execution of the operation until it is needed.  Derik Whittaker has a nice write-up that describes how to use it.
    - LINQ to Twitter: Nifty open source library on Codeplex that enables you to use LINQ syntax to query Twitter.

    Visual Studio 2010

    - Using Intellitrace in VS 2010: Chris Koenig has a nice 10 minute video that demonstrates how to use the new Intellitrace features of VS 2010 to enable DVR playback of your debug sessions.
    - Make the VS 2010 IDE Colors look like VS 2008: Scott Hanselman has a nice blog post that covers the Visual Studio Color Theme Editor extension, which allows you to customize the VS 2010 IDE however you want.
    - How to understand your code using Dependency Graphs, Sequence Diagrams, and the Architecture Explorer: Jennifer Marsman has a nice blog post that describes how to take advantage of some of the new architecture features within VS 2010 to quickly analyze applications and legacy code-bases.
    - How to maintain control of your code using Layer Diagrams: Another great blog post by Jennifer Marsman that demonstrates how to set up a "layer diagram" within VS 2010 to enforce clean layering within your applications.  This enables you to enforce a compiler error if someone inadvertently violates a layer design rule.
    - Collapse Selection in Solution Explorer Extension: Useful VS 2010 extension that enables you to quickly collapse "child nodes" within the Visual Studio Solution Explorer.  If you have deeply nested project structures this extension is useful.

    Silverlight and Windows Phone 7

    - Building a Simple Windows Phone 7 Application: A nice tutorial blog post that demonstrates how to take advantage of Expression Blend to create an animated Windows Phone 7 application.  If you haven't checked out my Windows Phone 7 Twitter Tutorial I also recommend reading that.

    Hope this helps,

    Scott

    P.S. If you haven't already, check out this month's "Find a Hoster" page on the www.asp.net website to learn about great (and very inexpensive) ASP.NET hosting offers.

    Read the article

  • How to use jQuery Date Range Picker plugin in asp.net

    - by alaa9jo
    I came across this page: http://www.filamentgroup.com/lab/date_range_picker_using_jquery_ui_16_and_jquery_ui_css_framework/ and let me tell you, this is one of the best and coolest daterangepickers on the web, in my opinion; they did a great job extending the original jQuery UI DatePicker. Of course I made enhancements to the original plugin (fixed a few bugs) and added a new option (Clear) to clear the textbox. In this article I will use that updated plugin and show you how to use it in asp.net... you will definitely like it.

    So, what do I need?

    1- jQuery library: you can use 1.3.2 or 1.4.2, which is the latest version so far; in my article I will use the latest version.
    2- jQuery UI library (1.8): as I mentioned earlier, the daterangepicker plugin is based on the original jQuery UI DatePicker, so that library should be included in your page.
    3- jQuery DateRangePicker plugin: you can go to the author's page or use the modified one (it's included in the attachment); in this article I will use the modified one.
    4- Visual Studio 2005 or later: very funny :D, in my article I will use VS 2008.

    Note: in the attachment, I included all CSS and JS files so don't worry.

    How to use it?

    First, you will have to include all of the CSS and JS files in your page like this:

        <script src="Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
        <script src="Scripts/jquery-ui-1.8.custom.min.js" type="text/javascript"></script>
        <script src="Scripts/daterangepicker.jQuery.js" type="text/javascript"></script>
        <link href="CSS/redmond/jquery-ui-1.8.custom.css" rel="stylesheet" type="text/css" />
        <link href="CSS/ui.daterangepicker.css" rel="stylesheet" type="text/css" />
        <style type="text/css">
            .ui-daterangepicker { font-size: 10px; }
        </style>

    Then add this HTML:

        <asp:TextBox ID="TextBox1" runat="server" Font-Size="10px"></asp:TextBox>
        <asp:Button ID="SubmitButton" runat="server" Text="Submit" OnClick="SubmitButton_Click" />
        <span>First Date:</span><asp:Label ID="FirstDate" runat="server"></asp:Label>
        <span>Second Date:</span><asp:Label ID="SecondDate" runat="server"></asp:Label>

    As you can see, it includes TextBox1, which we are going to attach the daterangepicker to, and 2 labels to show you later on in code how to read the dates from the textbox and set them on the labels.

    Now we have to attach the daterangepicker to the textbox by using jQuery (note: visit the author's website for more info on daterangepicker's options and how to use them):

        <script type="text/javascript">
            $(function() {
                $("#<%= TextBox1.ClientID %>").attr("readonly", "readonly");
                $("#<%= TextBox1.ClientID %>").attr("unselectable", "on");
                $("#<%= TextBox1.ClientID %>").daterangepicker({
                    presetRanges: [],
                    arrows: true,
                    dateFormat: 'd M, yy',
                    clearValue: '',
                    datepickerOptions: { changeMonth: true, changeYear: true }
                });
            });
        </script>

    Finally, add this C# code:

        protected void SubmitButton_Click(object sender, EventArgs e)
        {
            if (TextBox1.Text.Trim().Length == 0)
            {
                return;
            }

            string selectedDate = TextBox1.Text;
            if (selectedDate.Contains("-"))
            {
                DateTime startDate;
                DateTime endDate;
                string[] splittedDates = selectedDate.Split("-".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
                if (splittedDates.Count() == 2 && DateTime.TryParse(splittedDates[0], out startDate) && DateTime.TryParse(splittedDates[1], out endDate))
                {
                    FirstDate.Text = startDate.ToShortDateString();
                    SecondDate.Text = endDate.ToShortDateString();
                }
                else
                {
                    // maybe the client has modified/altered the input, i.e. hacking tools
                }
            }
            else
            {
                DateTime selectedDateObj;
                if (DateTime.TryParse(selectedDate, out selectedDateObj))
                {
                    FirstDate.Text = selectedDateObj.ToShortDateString();
                    SecondDate.Text = string.Empty;
                }
                else
                {
                    // maybe the client has modified/altered the input, i.e. hacking tools
                }
            }
        }

    This is the way to read from the textbox. That's it!

    FAQ:

    1- Why did you add this code?

        <style type="text/css">
            .ui-daterangepicker { font-size: 10px; }
        </style>

    A: For two reasons: 1) to show the daterangepicker at a smaller size, because its original size is huge, and 2) to show you how to control its size.

    2- Can I change the theme?
    A: Yes you can. You will notice that I'm using the Redmond theme, which you will find on the jQuery UI website; visit their website and download a different theme. You may also have to make modifications to the CSS of the daterangepicker; it's all yours.

    3- Why did you add a font size to the textbox?
    A: To make the design look better; try removing it and see for yourself.

    4- Can I register the script at codebehind?
    A: Yes you can.

    5- I see you have added these two lines; what do they do?

        $("#<%= TextBox1.ClientID %>").attr("readonly", "readonly");
        $("#<%= TextBox1.ClientID %>").attr("unselectable", "on");

    A: The first line makes the textbox not editable by the user; the second blocks the blinking typing cursor from appearing if the user clicks on the textbox. You will notice that both lines need to be used together; you can't use just one of them... for logical reasons, of course.

    Finally, I hope everyone liked the article, and as always, your feedback is always welcome. If anyone has any suggestions or has made any modifications that might be useful for anyone else, then please post them at the author's website and post a reference to your post here.

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    Marko Parkkola This article has been submitted by Marko Parkkola, Data systems designer at Saarionen Oy, Finland. Marko is excellent developer and always thinking at next level. You can read his earlier comment which created very interesting discussion here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for Enumerations in Relational Database. He has really wrote excellent piece here and welcome comments here. Enumerations in Relational Database This is a subject which is very basic thing in relational databases but often not very well understood and sometimes badly implemented. There are of course many ways to do this but I concentrate only two cases, one which is “the right way” and one which is definitely wrong way. The concept Let’s say we have table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there’s a field that tells person’s marital status and let’s name it the same way; MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values likes Single, InRelationship, Married, Divorced. Now here comes the problem, SQL doesn’t have enumerations. The wrong way This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you’ll see the enumeration’s description instantly when you do simple SELECT query and you don’t have to deal with mysterious values. There’s plenty of downsides too and one would be database fragmentation. Consider this (I’ve left all indexes and constraints out of the query on purpose). CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(10) ) You have nvarchar(20) field in the table that tells the marital status. Obvious problem with this is that what if you create a new value which doesn’t fit into 20 characters? You’ll have to come and alter the table. There are other problems also but I’ll leave those for the reader to think about. The correct way Here’s how I’ve done this in many projects. This model still has one problem but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I’ll write all the indexes and constraints here too. CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) ) GO INSERT INTO [CodeNamespace] SELECT 'MaritalStatus' GO Then I create a table that holds the actual values and which reference to namespace table in order to group the values under different namespaces. I’ll add couple of rows here too. CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) ) GO -- 1 is the 'MaritalStatus' namespace INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1 INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2 INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3 INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4 GO Now there’s four columns in CodeValue table. 
CodeNamespaceId tells which namespace the value belongs to. Value tells the enumeration value which is used in the Person table (I’ll show how this is done below). Description tells what the value means; you can use this column, for example, in a UI’s combo box. OrderBy tells whether the values need to be ordered in some way when displayed in the UI.

And here’s the Person table again, now with the correct columns. I’ll add one row here to show how enumerations are to be used.

CREATE TABLE [dbo].[Person]
(
[Firstname] NVARCHAR(100),
[Lastname] NVARCHAR(100),
[Birthday] datetime,
[MaritalStatus] INT
)
GO
INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3
GO

Now I said earlier that there is one problem with this. The MaritalStatus column doesn’t have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field (a possible constraint-based fix is sketched at the end of this article). I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I check the entered value in the data access layer as well.

I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bind so you can even index it if you like.

CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING
AS
SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus
FROM [dbo].[Person] p
JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value]
JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus'
GO
-- Select from View
SELECT * FROM [dbo].[Person_v]
GO

This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
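Here is the constraint-based sketch mentioned above. It is illustrative only; the column, default, and constraint names are hypothetical. The idea is to add the namespace id to Person as a defaulted column so that a composite foreign key can reference CodeValue's primary key:

-- Sketch: enforce valid MaritalStatus values with a composite foreign key.
-- Column, default, and constraint names are hypothetical.
ALTER TABLE [dbo].[Person]
ADD [MaritalStatusNamespaceId] INT NOT NULL
CONSTRAINT [DF_Person_MaritalStatusNamespace] DEFAULT (1) -- 1 = 'MaritalStatus'
GO
ALTER TABLE [dbo].[Person]
ADD CONSTRAINT [FK_Person_CodeValue]
FOREIGN KEY ([MaritalStatusNamespaceId], [MaritalStatus])
REFERENCES [CodeValue] ([CodeNamespaceId], [Value])
GO

With something like this in place, an INSERT with an unknown status value fails at the database level instead of relying solely on the UI and the data access layer.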

    Read the article

  • Associating File Types with AutoVue Desktop Deployment

    - by [email protected]
Windows users take for granted that when they double click on a document or design, it will open in its application automatically. One of the questions I'm commonly asked is "How can I get the same behavior with AutoVue Desktop Deployment?". It's pretty easy, but there are a few tricks to doing it.

Step 1: Download new jvue_direct.bat and icon

The first thing you'll need to do is download a slightly modified version of jvue_direct.bat. You can find it here (Document 1075784.1) on Oracle's Support Portal. You also want to download the AV.ico file. This is the icon that will be used for all file types associated with AutoVue. Place both of these files in your <AutoVueInstallDirectory>\bin directory.

Step 2: Associate File Types With AutoVue

There are two ways to do this. You can do this through the Windows user interface, or you can set up a batch file to do it.

Associating File Types Through Windows

The way most people associate file types with an application is using the Windows user interface. You've probably tried to open a file type that Windows doesn't recognize and seen this window pop up: Although you can use this dialog to associate that file type with AutoVue, I don't recommend it. I much prefer using a batch file to associate file types with AutoVue.

Associating File Types Using A Batch File

There are a few good reasons to associate file types using a batch file instead of using the pop-up dialog method: If you have several file types to associate with AutoVue, it's much easier to use a batch file to do them all at once. Doing it through the Windows user interface requires having files of each type available; using a batch file doesn't require having the files you're associating. Associating file types through the dialog may work well for one person, but what if you're an administrator doing an enterprise-wide deployment of AutoVue Desktop Deployment for several hundred users? You don't want to do this manually for each user. You can have one simple batch file that's run on each user's PC to set up all the file types. And you can easily associate an icon with the file types you're opening with AutoVue.

To use the batch file method follow these steps: Create a file called filetype.bat using a text editor and copy and paste the following into it:

@assoc .dwg=AVFile
@assoc .jpg=AVFile
@assoc .doc=AVFile
@ftype AVFile="%~dp0jvue_direct.bat" "%%1"
@reg add HKEY_CLASSES_ROOT\AVFile\DefaultIcon /v "" /f /d "%~dp0AV.ico"

Change the lines starting with @assoc. Each of these lines associates a file extension with AutoVue. You can have as many @assoc lines as you want. Save this file in your <AutoVueInstallDirectory>\bin directory. Double click this file, or run it from a command prompt. Restart Windows to get the icons to show up.

How Does This Work?

The first three lines create a file type called AVFile. We are associating the extensions .dwg, .jpg, and .doc with this file type. You will want to change these lines when creating your own batch file. For example, to associate MicroStation designs, which have the extension .dgn, you should delete the @assoc lines above and add the line:

@assoc .dgn=AVFile

The line beginning with @ftype tells Windows that all AVFile type files should be opened using AutoVue Desktop Deployment. The final line associates the AutoVue icon with these file types. You may need to restart Windows to see the new icons.
Warning: One Size Doesn't Fit All

When deciding which file types should be associated with AutoVue, remember that there are different types of users using it. Your engineers may be pretty surprised to find that after installing AutoVue, double clicking their .dwg file opens up AutoVue instead of AutoCAD. If you have more than one type of AutoVue user, make sure you've considered what file types each user group will and will not want to be associated with AutoVue. If necessary, create a separate file association batch file for each user type (a sketch of such a variant follows at the end of this post).

So that's it. In two simple steps you can double click your favorite designs and have them open automatically in AutoVue Desktop Deployment. I'd love to hear how you are using AutoVue Desktop Deployment. What other deployment tips would you be interested in learning about?
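For example, a batch file for a user group that should open only non-CAD documents in AutoVue might look like the following. This is a sketch only; the extensions chosen are hypothetical, and the last two lines simply mirror the original example:

@rem Sketch: associations for a hypothetical non-CAD user group only.
@assoc .pdf=AVFile
@assoc .tif=AVFile
@ftype AVFile="%~dp0jvue_direct.bat" "%%1"
@reg add HKEY_CLASSES_ROOT\AVFile\DefaultIcon /v "" /f /d "%~dp0AV.ico"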

    Read the article

  • Where is my app.config for SSIS?

Sometimes when working with SSIS you need to add or change settings in the .NET application configuration file, which can be a bit confusing when you are building a SSIS package, not an application. First of all let's review a couple of examples where you may need to do this. You are referencing an assembly in a Script Task that uses Enterprise Library (aka EntLib), so you need to add the relevant configuration sections and settings, perhaps for the logging application block. You are using Enterprise Library in a custom task or component, and again you need to add the relevant configuration sections and settings. You are using a web service with Microsoft Web Services Enhancements (WSE) 3.0 and hosting the proxy in SSIS, in an assembly used by your package, and need to add the configuration sections and settings. You need to change behaviours of the .NET framework which can be influenced by a configuration file, such as the System.Net.Mail default SMTP settings. Perhaps you wish to configure System.Net and the httpWebRequest header for parsing unsafe headers (useUnsafeHeaderParsing), which will change the way the HTTP Connection Manager behaves (an example appears at the end of this post). Or you are consuming a WCF service and wish to specify the endpoint in configuration.

There are no doubt plenty more examples, but each of these requires us to identify the correct configuration file and make the relevant changes. There are actually several configuration files, each used by a different execution host depending on how you are working with the SSIS package.

The folders we need to look in will vary depending on the version of SQL Server as well as the processor architecture, but most are in what we can call the Binn folder. The SQL Server 2005 Binn folder is at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\, compared to C:\Program Files\Microsoft SQL Server\100\DTS\Binn\ for SQL Server 2008. If you are on a 64-bit machine then you will see C:\Program Files (x86)\Microsoft SQL Server\90\DTS\Binn\ for the 32-bit executables and C:\Program Files\Microsoft SQL Server\90\DTS\Binn\ for 64-bit, so be sure to check all relevant locations. Of course SQL Server 2008 may have a C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\ on a 64-bit machine too.

To recap, the version of SQL Server determines whether you look in the 90 or 100 sub-folder under SQL Server in Program Files (C:\Program Files\Microsoft SQL Server\nn\). If you are running a 64-bit operating system then you will have two instances of Program Files, C:\Program Files (x86)\ for 32-bit and C:\Program Files\ for 64-bit. You may wish to check both depending on what you are doing, but this is covered more under each section below. There are a total of five specific configuration files that you may need to change, each one detailed below:

DTExec.exe.config

DTExec.exe is the standalone command line tool used for executing SSIS packages, and therefore it is an execution host with an app.config file. e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTExec.exe.config The file can be found in both the 32-bit and 64-bit Binn folders.

DtsDebugHost.exe.config

DtsDebugHost.exe is the execution host used by Business Intelligence Development Studio (BIDS) / Visual Studio when executing a package from the designer in debug mode, which is the default behaviour. e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DtsDebugHost.exe.config The file can be found in both the 32-bit and 64-bit Binn folders.
This may surprise some people as Visual Studio is only 32-bit, but thankfully the debugger supports both. This can be set in the project properties; see the Run64BitRuntime property (true or false) in the Debugging pane of the Project Properties.

dtshost.exe.config

dtshost.exe is the execution host used by what I think of as the built-in features of SQL Server, such as SQL Server Agent. e.g. C:\Program Files\Microsoft SQL Server\90\DTS\Binn\dtshost.exe.config This file can be found in both the 32-bit and 64-bit Binn folders.

devenv.exe.config

Something slightly different is devenv.exe, which is Visual Studio. This configuration file may also need changing if you need a feature at design-time, such as in a Task Editor or Connection Manager editor. Visual Studio 2005 for SQL Server 2005 - C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\devenv.exe.config Visual Studio 2008 for SQL Server 2008 - C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe.config Visual Studio is only available for 32-bit, so on a 64-bit machine you will have to look in C:\Program Files (x86)\ only.

DTExecUI.exe.config

The DTExecUI tool can also have a configuration file, and these can be found under the Tools folders for SQL Server as shown below. C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\DTExecUI.exe A configuration file may not exist, but if you can find the matching executable you know you are in the right place, so you can go ahead and add a new file yourself.

In summary we have covered the assembly configuration files for all of the standard methods of building and running a SSIS package, but obviously if you are working programmatically you will need to make the relevant modifications to your program’s app.config as well.
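To illustrate the kind of change discussed above, here is a sketch of the useUnsafeHeaderParsing setting mentioned earlier, as it might appear in one of these configuration files. Merge the system.net element into the existing file rather than replacing the whole file:

<?xml version="1.0"?>
<configuration>
  <system.net>
    <settings>
      <!-- Allow the HTTP Connection Manager to accept non-RFC-compliant headers -->
      <httpWebRequest useUnsafeHeaderParsing="true" />
    </settings>
  </system.net>
</configuration>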

    Read the article

  • MySQL 5.5 brings in new ways to authenticate users

    - by Georgi Kodinov
Ever wanted to use your server's OS for authenticating MySQL users? Or the corporate LDAP repository? Options like the above are plentiful nowadays, and providing hard-coded support for protocol X or service Y is not the best possible idea. MySQL 5.5 has taken a step in the right direction by providing an infrastructure that allows one to make the server understand different authentication protocols by creating a set of simple plugins (one for the client and one for the server). So now you can easily extend MySQL to search for and authenticate users in your favorite user directory. In fact the API supplied is so versatile that we took the opportunity to re-design the current "native" authentication mechanism into a built-in always-on plugin!

OK, let me give you an example. Imagine we have a bunch of users defined in the OS, e.g. a user joro with his respective password. And we have a MySQL instance running on the same computer. It would not be unexpected to need to let joro access and/or modify MySQL data. The first step is to define him as a MySQL user, and there's a problem right there: MySQL's

CREATE USER joro@localhost IDENTIFIED BY 'joros_password'

statement needs a password. And this is a password in no way related to the password that joro has set up in the OS. What's worse: if joro changes his OS password, this will in no way be reflected in MySQL, so he'll need to change his MySQL password in a separate step. Not very convenient, especially when you have a lot of users. This is a laborious setup for joro's DBA as well: he'll have to disable joro's access in both MySQL and the OS should he decide that joro's off the "nice" list.

Now MySQL 5.5 to the rescue. Imagine that the smart DBA has created a MySQL server plugin that will check if the name of the user logging in is a valid and enabled OS name and if the password supplied to the mysql client matches the OS one, and has called this plugin 'auth_os'. Now all that's left to do is to define joro as a MySQL user that will be authenticated externally. This is done by the following command:

CREATE USER 'joro'@'localhost' IDENTIFIED WITH 'auth_os';

Now joro can log in to MySQL using his current OS password. Note: joro is still a valid MySQL user, so you can grant privileges to him just like you would for all other users. What's better: you can have users that authenticate using different mechanisms in the same server, so you can e.g. safely experiment with external authentication for selected users while keeping your current user base operational.

What happens under the hood when joro logs in? The server will find out from the user definition that it needs to use non-default authentication and will ask the client to "switch" to using the appropriate client-side plugin (if, of course, the client is not already using it). If the client can't do this (e.g. because it's an old client or doesn't have the necessary plugin available) the server will reject the login. Otherwise the server will let the server-side plugin decide (while possibly talking to the client-side plugin and the OS user directory) whether this is a valid login or not. If it is, the login process will continue as usual; if it's not, the login will be rejected. There's a lot more that MySQL 5.5 can do for you than just the simple case above.
Stay tuned for more advanced use cases, like mapping groups of external users to a single MySQL user (so you won't have to have a 1-to-1 mapping between your external user directory and your MySQL user repository; a short sketch follows below) or ways to control the process as a DBA. Or you can simply skip ahead and read the relevant topics in MySQL's excellent online documentation. Or take a look at the example plugins in plugin/auth. Or take a look at the test suite in mysql-test/t/plugin_auth.test.

Changelog entry: http://dev.mysql.com/doc/refman/5.5/en/news-5-5-7.html

Primary new sections:
Pluggable authentication
Proxy users
Client plugin C API functions

Revised sections:
New PROXY privilege
New proxies_priv grant table
Passwords might be external
New external_user and proxy_user system variables
New --default-auth and --plugin-dir mysql options
New MYSQL_DEFAULT_AUTH and MYSQL_PLUGIN_DIR options for mysql_options()
CREATE USER has IDENTIFIED WITH clause to specify auth plugin
GRANT has PROXY privilege, IDENTIFIED WITH clause to specify auth plugin
The data structure for writing client plugins
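As a taste of the group-mapping use case teased above, here is a rough sketch of how the new PROXY privilege can be combined with an authentication plugin. The account names and the 'auth_os' plugin are hypothetical, just as in the example above, and the exact mapping is decided by the server-side plugin:

-- A local account that holds the actual privileges.
CREATE USER 'developer'@'localhost' IDENTIFIED BY 'dev_password';
-- An externally authenticated account; the plugin can map such logins onto 'developer'.
CREATE USER 'external_user'@'localhost' IDENTIFIED WITH 'auth_os';
-- Allow the externally authenticated account to act as (proxy for) 'developer'.
GRANT PROXY ON 'developer'@'localhost' TO 'external_user'@'localhost';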

    Read the article

  • Announcing SonicAgile – An Agile Project Management Solution

    - by Stephen.Walther
I’m happy to announce the public release of SonicAgile – an online tool for managing software projects. You can register for SonicAgile at www.SonicAgile.com and start using it with your team today. SonicAgile is an agile project management solution designed to help teams of developers coordinate their work on software projects. It supports creating backlogs, scrumboards, and burndown charts. It includes support for acceptance criteria, story estimation, calculating team velocity, and email integration. In short, SonicAgile includes all of the tools that you need to coordinate work on a software project, get stuff done, and build great software. Let me discuss each of the features of SonicAgile in more detail.

SonicAgile Backlog

You use the backlog to create a prioritized list of user stories such as features, bugs, and change requests. Basically, all future work planned for a product should be captured in the backlog. We focused our attention on designing the user interface for the backlog. Because the main function of the backlog is to prioritize stories, we made it easy to prioritize a story by just dragging and dropping the story from one location to another. We also wanted to make it easy to add stories from the product backlog to a sprint backlog. A sprint backlog contains the stories that you plan to complete during a particular sprint. To add a story to a sprint, you just drag the story from the product backlog to the sprint backlog. Finally, we made it easy to track team velocity – the average amount of work that your team completes in each sprint. Your team’s average velocity is displayed in the backlog. When you add too many stories to a sprint – in other words, you attempt to take on too much work – you are warned automatically:

SonicAgile Scrumboard

Every workday, your team meets to have their daily scrum. During the daily scrum, you can use the SonicAgile Scrumboard to see (at a glance) what everyone on the team is working on. For example, the following scrumboard shows that Stephen is working on the Fix Gravatar Bug story and Pete and Jane have finished working on the Product Details Page story: Every story can be broken into tasks. For example, to create the Product Details Page, you might need to create database objects, do page design, and create an MVC controller. You can use the Scrumboard to track the state of each task. A story can have acceptance criteria which clarify the requirements for the story to be done. For example, here is how you can specify the acceptance criteria for the Product Details Page story: You cannot close a story – and remove the story from the list of active stories on the scrumboard – until all tasks and acceptance criteria associated with the story are done.

SonicAgile Burndown Charts

You can use burndown charts to track your team’s progress. SonicAgile supports Release Burndown, Sprint Burndown by Task Estimates, and Sprint Burndown by Story Points charts. For example, here’s a sample of a Sprint Burndown by Story Points chart: The downward slope shows the progress of the team when closing stories. The vertical axis represents story points and the horizontal axis represents time.

Email Integration

SonicAgile was designed to improve your team’s communication and collaboration. Most stories and tasks require discussion to nail down exactly what work needs to be done. The most natural way to discuss stories and tasks is through email. However, you don’t want these discussions to get lost.
When you use SonicAgile, all email discussions concerning a story or a task (including all email attachments) are captured automatically. At any time in the future, you can view all of the email discussion concerning a story or a task by opening the Story Details dialog:

Why We Built SonicAgile

We built SonicAgile because we needed it for our team. Our consulting company, Superexpert, builds websites for financial services, startups, and large corporations. We have multiple teams working on multiple projects. Keeping on top of all of the work that needs to be done to complete a software project is challenging. You need a good sense of what needs to be done, who is doing it, and when the work will be done. We built SonicAgile because we wanted a lightweight project management tool which we could use to coordinate the work that our team performs on software projects.

How We Built SonicAgile

We wanted SonicAgile to be easy to use, highly scalable, and have a highly interactive client interface. SonicAgile is very close to being a pure Ajax application. We built SonicAgile using ASP.NET MVC 3, jQuery, and Knockout. We would not have been able to build such a complex Ajax application without these technologies. Almost all of our MVC controller actions return JSON results (while developing SonicAgile, I would have given my left arm to be able to use the new ASP.NET Web API). The controller actions are invoked from jQuery Ajax calls from the browser (a minimal sketch of this pattern appears at the end of this post). We built SonicAgile on Windows Azure. We are taking advantage of SQL Azure, Table Storage, and Blob Storage. Windows Azure enables us to scale very quickly to handle whatever demand is thrown at us.

Summary

I hope that you will try SonicAgile. You can register at www.SonicAgile.com (there’s a free 30-day trial). The goal of SonicAgile is to make it easier for teams to get more stuff done, work better together, and build amazing software. Let us know what you think!
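To illustrate the controller-action pattern mentioned above, here is a minimal sketch. The controller and action names are hypothetical, not SonicAgile's actual code; it simply shows an MVC 3 action returning JSON to a jQuery Ajax call:

using System.Web.Mvc;

public class StoryController : Controller // hypothetical controller name
{
    [HttpPost]
    public ActionResult UpdatePriority(int storyId, int newPriority)
    {
        // ... persist the new priority for the story here ...

        // Return a JSON result for the jQuery Ajax caller.
        return Json(new { success = true, storyId, newPriority });
    }
}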

    Read the article

  • Parallelism in .NET – Part 1, Decomposition

    - by Reed
The first step in designing any parallelized system is Decomposition.  Decomposition is nothing more than taking a problem space and breaking it into discrete parts.  When we want to work in parallel, we need to have at least two separate things that we are trying to run.  We do this by taking our problem and decomposing it into parts.

There are two common abstractions that are useful when discussing parallel decomposition: Data Decomposition and Task Decomposition.  These two abstractions allow us to think about our problem in a way that helps lead us to correct decision-making in terms of the algorithms we’ll use to parallelize our routine.

To start, I will make a couple of minor points. I’d like to stress that Decomposition has nothing to do with specific algorithms or techniques.  It’s about how you approach and think about the problem, not how you solve the problem using a specific tool, technique, or library.  Decomposing the problem is about constructing the appropriate mental model: once this is done, you can choose the appropriate design and tools, which is a subject for future posts. Decomposition, being unrelated to tools or specific techniques, is not specific to .NET in any way.  This should be the first step to parallelizing a problem, and is valid using any framework, language, or toolset.  However, this gives us a starting point – without a proper understanding of decomposition, it is difficult to understand the proper usage of specific classes and tools within the .NET framework.

Data Decomposition is often the simpler abstraction to use when trying to parallelize a routine.  In order to decompose our problem domain by data, we take our entire set of data and break it into smaller, discrete portions, or chunks.  We then work on each chunk in the data set in parallel. This is particularly useful if we can process each element of data independently of the rest of the data.  In a situation like this, there are some wonderfully simple techniques we can use to take advantage of our data.  By decomposing our domain by data, we can very simply parallelize our routines.  In general, we, as developers, should always be searching for data that can be decomposed.

Finding data to decompose is fairly simple, in many instances.  Data decomposition is typically used with collections of data.  Any time you have a collection of items, and you’re going to perform work on or with each of the items, you potentially have a situation where parallelism can be exploited.  This is fairly easy to do in practice: look for iteration statements in your code, such as for and foreach. Granted, not every for loop is a candidate to be parallelized.  If the collection is being modified as it’s iterated, or the processing of elements depends on other elements, the iteration block may need to be processed serially.  However, if this is not the case, data decomposition may be possible.

Let’s look at one example of how we might use data decomposition.  Suppose we were working with an image, and we were applying a simple contrast stretching filter.  When we go to apply the filter, once we know the minimum and maximum values, we can apply it to each pixel independently of the other pixels.  This means that we can easily decompose this problem based on data – we will do the same operation, in parallel, on individual chunks of data (each pixel).

Task Decomposition, on the other hand, is focused on the individual tasks that need to be performed instead of focusing on the data.
In order to decompose our problem domain by tasks, we need to think about our algorithm in terms of discrete operations, or tasks, which can then later be parallelized.

Task decomposition, in practice, can be a bit trickier than data decomposition.  Here, we need to look at what our algorithm actually does, and how it performs its actions.  Once we have all of the basic steps taken into account, we can try to analyze them and determine whether there are any constraints in terms of shared data or ordering.  There are no simple things to look for in terms of finding tasks we can decompose for parallelism; every algorithm is unique in terms of its tasks, so every algorithm will have unique opportunities for task decomposition.

For example, say we want our software to perform some customized actions on startup, prior to showing our main screen.  Perhaps we want to check for proper licensing, notify the user if the license is not valid, and also check for updates to the program.  Once we verify the license, and that there are no updates, we’ll start normally.  In this case, we can decompose this problem into tasks – we have a few tasks, but there are at least two discrete, independent tasks (check licensing, check for updates) which we can perform in parallel.  Once those are completed, we will continue on with our other tasks.

One final note – Data Decomposition and Task Decomposition are not mutually exclusive.  Often, you’ll mix the two approaches while trying to parallelize a single routine.  It’s possible to decompose your problem based on data, then further decompose the processing of each element of data based on tasks.  This just provides a framework for thinking about our algorithms, and for discussing the problem.
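Although decomposition itself is deliberately tool-agnostic here, a short sketch may help ground the two examples above. This is illustrative only (the helper methods are hypothetical), using the Parallel class from .NET 4:

using System.Threading.Tasks;

static class DecompositionSketch
{
    // Data decomposition: every pixel is independent, so we process the chunks in parallel.
    static void StretchContrast(byte[] pixels, byte min, byte max)
    {
        double scale = 255.0 / (max - min);
        Parallel.For(0, pixels.Length, i =>
        {
            pixels[i] = (byte)((pixels[i] - min) * scale);
        });
    }

    // Task decomposition: two discrete, independent startup checks run in parallel.
    static void RunStartupChecks()
    {
        Parallel.Invoke(
            () => CheckLicense(),      // hypothetical helper
            () => CheckForUpdates());  // hypothetical helper
    }

    static void CheckLicense() { /* ... */ }
    static void CheckForUpdates() { /* ... */ }
}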

    Read the article

  • Entity Framework 4.0: Creating objects of correct type when using lazy loading

    - by DigiMortal
In my posting about Entity Framework 4.0 and POCOs I introduced lazy loading in EF applications. EF uses proxy classes for lazy loading, and this means we have new types that come and go dynamically at runtime. We don’t have these types available when we write code, but we cannot forget that EF may expect us to use the dynamically generated types. In this posting I will give you a simple hint on how to use the correct types in your code.

The background of lazy loading and proxy classes

First I will explain in short what a proxy class is. Business classes, when designed correctly, have no knowledge about their birth and death – they don’t know how they are created and they don’t know how their data is persisted. This is the responsibility of the object runtime. When we use lazy loading we need slightly different classes that know how to load data for properties when code accesses the property for the first time. As we cannot add this functionality to our business classes (they may be stored through more than one data access technology or by more than one Data Access Layer (DAL)), we create proxy classes that extend our business classes.

If we have a class called Product and Product has a lazy loaded property called Customer, then we need a proxy class, let’s say ProductProxy, that has the same public signature as Product so we can use it INSTEAD OF Product in our code. ProductProxy overrides the Customer property. If the customer is never asked for, Customer stays null. But if we ask for the Customer property, the overridden property of ProductProxy loads it from the database. This is how lazy loading works.

The problem – two types for the same thing

As lazy loading may introduce dynamically generated proxy types, we don’t know in our application code which type is returned. We cannot be sure that we have Product and not ProductProxy returned. This leads us to the following question: how can we create a Product of the correct type if we don’t know the correct type? In EF the solution is simple.

The solution – use factory methods

If you are using repositories and you are not using factory classes (imho they are pretty pointless with a mapper) you can add factory methods to your EF-based repositories. Take a look at this class.

public class Event
{
    public int ID { get; set; }
    public string Title { get; set; }
    public string Location { get; set; }
    public virtual Party Organizer { get; set; }
    public DateTime Date { get; set; }
}

We have a virtual member called Organizer. This property is virtual because we want to use lazy loading on this class, so Organizer is loaded only when we ask for it. EF provides us with a method called CreateObject<T>(). CreateObject<T>() is a member of the ObjectContext class and it creates an object based on the given type. At runtime a proxy type for Event is created for us automatically, and when we call CreateObject<T>() for Event it returns an object of the Event proxy type. The factory method for the events repository is as follows.

public Event CreateEvent()
{
    var evt = _context.CreateObject<Event>();
    return evt;
}

And we are done. Instead of creating factory classes we created factory methods that guarantee that created objects are of the correct type.

Conclusion

Although lazy loading introduces new types that we cannot use at design time, because they live only at runtime, we can write code without worrying about the exact implementation type of an object. This holds true as long as we have clean code and we don’t make any decisions based on object type. EF 4.0 provides us with a very simple factory method that creates and returns objects of the correct type.
All we had to do was add factory methods to our repositories.
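For illustration only, a hypothetical usage of such a factory method might look like the following. The EventRepository class and the Events object set are assumptions for this sketch, not code from the posting:

// Hypothetical usage sketch: the factory method returns an instance
// of the runtime-generated Event proxy type.
var repository = new EventRepository(context); // assumed repository class
Event evt = repository.CreateEvent();
evt.Title = "Community meetup";
evt.Date = DateTime.Now.AddDays(7);
context.Events.AddObject(evt); // assumes an ObjectSet<Event> named Events
context.SaveChanges();
// Lazy loading works on evt.Organizer because evt is of the proxy type.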

    Read the article

  • SQL Developer at Oracle Open World 2012

    - by thatjeffsmith
We have a lot going on in San Francisco this fall. One of the most personally exciting bits, for what will be my 4th or 5th Open World, is that this will be my FIRST as a member of Team Oracle. I’ve presented once before, but most years it was just me pressing flesh at the vendor booths. After 3-4 days of standing and talking, you’re ready to just go home and not do anything for a few weeks. This time I’ll have a chance to walk around and talk with our users and get a good idea of what’s working and what’s not. Of course it will be a great opportunity for you to find us and get to know your SQL Developer team!

3.4 miles across and back – thanks Ashley for signing me up for the run! This year is going to be a bit crazy. Work-wise I’ll be presenting twice, working a booth, and proctoring several of our Hands-On Labs. The fun parts will be equally crazy though – running across the Bay Bridge (I don’t run), swimming the Bay (I don’t swim), having my wife fly out on Wednesday for the concert, and then our first WhiskyFest on Friday (I do drink whisky though.) But back to work – let’s talk about EVERYTHING you can expect from the SQL Developer team.

Booth Hours

We’ll have 2 ‘demo pods’ in the Exhibition Hall over at Moscone South. Look for the farm of Oracle booths; we’ll be there under the signs that say ‘SQL Developer.’ There will be several people on hand, mostly developers (yes, they still count as people), who can answer your questions or demo the latest features. Come by and say ‘Hi!’, and let us know what you like and what you think we can do better. Seriously.

Monday 10AM – 6PM
Tuesday 9:45AM – 6PM
Wednesday 9:45AM – 4PM

Presentations

Stop by for an hour, pull up a chair, sit back and soak in all the SQL Developer goodness. You’ll only have to suffer my bad jokes for two of the presentations, so please at least try to come to the other ones. We’ll be talking about data modeling, migrations, source control, and new features in versions 3.1 and 3.2 of SQL Developer and SQL Developer Data Modeler.

Monday 10:45 – What’s New in SQL Developer
Monday 4:45 – Why Move to Oracle Application Express Listener
Tuesday 10:15 – Using Subversion in Oracle SQL Developer Data Modeler
Tuesday 11:45 – Oracle SQL Developer Tips & Tricks
Tuesday 5:00 – Database Design with Oracle SQL Developer Data Modeler
Wednesday 11:45 – Migrating Third-Party Databases and Applications to Oracle Exadata
Wednesday 3:30 – 11g Enterprise Options and Management Packs for Developers

Hands On Labs (HOLs)

The Hands On Labs allow you to come into a classroom environment, sit down at a computer, and run through some exercises. We’ll provide the hardware, software, and training materials. It’s self-paced, but we’ll have several helpers walking around to answer questions and chat up any SQL Developer or database topic that comes to mind. If your employer is sending you to Open World for all that great training, the HOLs are a great opportunity to capitalize on that. They are only 60 minutes each, so you don’t have to worry about burning out. And there’s no homework! Of course, if you do want to take the labs home with you, many are already available via the Developer Day Hands-On Database Applications Developer Lab. You will need your own computer for those, but we’ll take care of the rest.
Wednesday 10:15 – PL/SQL Development and Unit Testing with Oracle SQL Developer
Wednesday 11:45 – Performance Tuning with Oracle SQL Developer
Thursday 11:15 – The Soup to Nuts of Data Modeling with Oracle SQL Developer Data Modeler

Some Parting Advice

Always wanted to meet your favorite Oracle authors, speakers, and thought-leaders? Don’t be shy; walk right up to them and introduce yourself. Normal social rules still apply, but at the conference everyone is open and up for meeting and talking with attendees. Just understand that if there’s a line you might only get a minute or two. It’s a LONG conference though, so you’ll have plenty of time to catch up with everyone. If you’re going to be around on Tuesday evening, head on over to the OTN Lounge from 4:30 to 6:30 and hang out for our Tweet Meet. That’s right, all the Oracle nerds on Twitter will be there in one place. Be sure to put your Twitter handle on your name tag so we know who you are!

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green; it’s also important to have it red or green for the right reasons. While for me it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in.

To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as ‘doing real-world TDD in .NET, with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make: it’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and I myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons – I don’t want to spend my time exclusively on stating the obvious…

So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:

First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I have found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments.

The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. – For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice.

The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “interface or not” doesn’t even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I’m not beginning with a test, but with a sort of code declaration – and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos – one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else; so this is at least as much about documentation as about coding, and the documentation must completely describe the behavior of the documented element. And I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. – I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

The ‘Red’ (pt. 1)

First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following:

namespace Calculator.Test
{
    [TestFixture]
    public class CalculatorTest
    {
        private readonly ServiceContainer container = new ServiceContainer();

        [Test]
        public void CalculatorLastResultIsInitiallyNull()
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.IsNull(calculator.LastResult);
        }

    } // class CalculatorTest

} // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states.

Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the ‘Red is does-not-compile’ thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: your code is incorrect, but nothing more. Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: there’s no implementation that the IoC container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator-derived type. Next, implement the interface members. And finally, move the new class to its own file. So far my ‘work’ was six mouse clicks long; the only things left to do manually here are to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult
        {
            get
            {
                throw new NotImplementedException();
            }
        }
    }
}

Back in the test fixture, we have to put our IoC container to work:

[TestFixture]
public class CalculatorTest
{
    #region Fields

    private readonly ServiceContainer container = new ServiceContainer();

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
       container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
    }

    ...

Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

The ‘Red’ (pt. 2)

Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called.
And this is the point where Derick and I seem to have somewhat different views on the subject: of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part…

The ‘Green’

Making the test green is quite trivial. Just make LastResult an automatic property:

[Implements(typeof(ICalculator))]
internal class Calculator : ICalculator
{
    public double? LastResult { get; private set; }
}

One more round…

Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

    } // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:

Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding things up and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).

Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

    } // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test:

[Test]
[Row(-0.5, 2)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose; rather, I will only have to add another Row attribute to the existing one. And from the test report below, you can see that the argument values are explicitly printed out.
This can be a valuable documentation feature even when everything is green: one can quickly review exactly what values were tested – the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle:

// in CalculatorTest:

[Test]
[Row(-0.5, 2)]
[Row(295, -123)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

// in Calculator:

public double Add(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }
    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }
    throw new NotImplementedException();
}

So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that:

[Test]
[Row(1, 1, 2)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

Again, I’m regularly using row-based test methods for these kinds of unit tests. The pattern shown above proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely that additional test cases will come up. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need to test against additional values. In all these scenarios we only have to add another Row attribute to the test. And remember that the argument values are written to the test report, so as a side effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven).
So your test method might look something like this in the end:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 999999999, 999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
[Row(4, double.MaxValue - 2.5, double.MaxValue)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

And this will produce the following HTML report (with Gallio):

Not bad for the amount of work we invested in it, huh? – There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here; it’s trivial enough and brings nothing new…

And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production):

// CalculatorTest.cs:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtract(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

...

// ICalculator.cs:

/// <summary>
/// Subtracts the specified operands.
/// </summary>
/// <param name="operand1">The operand1.</param>
/// <param name="operand2">The operand2.</param>
/// <returns>The result of the subtraction.</returns>
/// <exception cref="ArgumentException">
/// Argument <paramref name="operand1"/> is &lt; 0.<br/>
/// -- or --<br/>
/// Argument <paramref name="operand2"/> is &lt; 0.
/// </exception>
double Subtract(double operand1, double operand2);

...

// Calculator.cs:

public double Subtract(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    return (this.LastResult = operand1 - operand2).Value;
}

Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring.
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add?

Derick immediately answers his own question:

So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it.

I couldn’t state it more precisely. From my personal perspective, I’d add the following: you have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code.
My job regularly requires reading and understanding 'foreign' code. So code quality and readability make a HUGE difference for me - sometimes they can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

Test-driven development is the normal, natural way of writing software; code-first is the exception. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD. (Yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first…)

It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be moving in the right direction…) The test code serves 'only' to make the production code work. But it's solely the number of delivered features that counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms, productivity - without sacrificing the principles of TDD (more than I'd do anyway…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable; in other words, roughly 75-85% of my code is test code. (This might sound heavy, but that is mainly because the software development profession is, historically seen, still very young, and viable standards are only beginning to evolve. If you think of software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like a lot of unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all of it is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool - for 'saving' a few hundred bucks - is just not acceptable and a very bad business decision (though I have seen and heard exactly that quite a few times…). Producing high-quality products requires high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess that's about 30% more time compared to developing the 'traditional' (code-first) way. But the 'product' manufactured this way is of much higher quality and massively reduces maintenance costs, which are by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of developing software…
But on the other hand, there clearly is a trade-off here: coding speed vs. code quality and later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are scenarios where it's not - e.g. if time-to-market is crucial for a software project. So this is a business decision in the end. You just have to know what you're doing and what consequences it might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the production process described here without slowing things down. This process has proved to be 'good enough' in my practical experience, but of course I'm open to suggestions…

My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' must occur for a stricter 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
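To make that distinction concrete, here is a small, contrived sketch (not taken from the calculator code above; the stub and the test name are purely illustrative):

// Red phase: the test goes red because the method it calls is not yet
// operational - it exists, but deliberately does nothing useful yet.
public double Add(double operand1, double operand2)
{
    throw new NotImplementedException();
}

// CI phase: once the cycle is finished, a red test must mean exactly one
// thing - a violated assertion. Exceptions, setup failures, or missing
// services would be 'red' for the wrong reason.
[Test]
public void TestAddFailsByAssertionOnly()
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(1, 1);

    // The only acceptable source of 'red' at this point:
    Assert.AreEqual(2, result);
}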

