Search Results

Search found 5245 results on 210 pages for 'confused or too stupid'.


  • Upgrading Team Foundation Server 2008 to 2010

    - by Martin Hinshelwood
    I am sure you will have seen my posts on upgrading our internal Team Foundation Server from TFS2008 to TFS2010 Beta 2, RC and RTM, but what about a fresh upgrade of TFS2008 to TFS2010 using the RTM version of TFS? One of our clients is taking the plunge with TFS2010, so I have the job of doing the upgrade. It is sometimes very useful to have a team member who starts work when most of the Sydney workers are heading home, as I can do the upgrade without impacting them. The downside is that if you hit any blockers you can be pretty sure that everyone who can deal with your problem is asleep. I am starting with an existing blank installation of TFS 2010, but Adam Cogan let slip that he was the one who did the install, so I thought it prudent to make sure that it was OK.

Verifying Team Foundation Server 2010

We need to check that TFS 2010 has been installed correctly. First, check the Admin console and have a root about for any errors.

Figure: Even the SQL Setup looks good. I don’t know how Adam did it!

Backing up the Team Foundation Server 2008 Databases

As we are moving from one server to another (the recommended method), we will be taking a backup of our TFS2008 databases and restoring them to the SQL Server for the new TFS2010 server. Do not just detach and reattach; this will cause problems with the version of the database. If you are running a test migration you just need to create a backup of the TFS 2008 databases, but if you are doing the live migration then you should stop IIS on the TFS 2008 server before you back up the databases. This will stop any inadvertent check-ins or changes to TFS 2008.

Figure: Stop IIS before you take a backup to prevent any TFS 2008 changes being written to the database.

It is good to leave a little time between taking the TFS 2008 server offline and commencing the upgrade, as there is always one developer who has not finished and starts screaming. This time it was John Liu who needed 10 more minutes to make his changes and check in, so I always give it 30 minutes and see if anyone screams.

John Liu [SSW] said: are you doing something to TFS :-O
MrHinsh [SSW UK][VS ALM MVP] said: I have stopped TFS 2008 as per my emails
John Liu [SSW] said: haven't finish check in @_@ can we have it for 10mins? :)
MrHinsh [SSW UK][VS ALM MVP] said: TFS 2008 has been started
John Liu [SSW] said: I love you!
- IM conversation at TFS Upgrade +25 minutes

After John confirmed that he had everything done I turned IIS off again and made a cup of tea. There were no more screams, so the upgrade could continue.

Figure: Back up all of the databases for TFS and include the Reporting Services databases, just in case.

Figure: Check that all the backups have been taken.

Once you have your backups, you need to copy them to your new TFS2010 server and restore them. This is a good way to proceed: if we have any problems, or just plain run out of time, then we just turn the TFS 2008 server back on and all we have lost is one upgrade day, not 10 developer days. As per the rules, you should record the number of files and the total number of areas and iterations before the upgrade so you have something to compare to:

TFS2008 file count:
  Type 1: 1845
  Type 2: 15770
Areas & Iterations: 139

You can use this to verify that the upgrade was successful. It should, however, be noted that the numbers in TFS 2010 will be bigger. This is due to some of the sorting out that TFS does during the upgrade process.
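For the backup step described above, a small throwaway program can back up each TFS 2008 database in one go. This is only a sketch: the server name, backup path and database list are assumptions — check your own data tier for the actual database names.

// Illustrative only: back up each TFS 2008 database with ADO.NET before the migration.
// Server, path and database names are assumptions -- substitute your own.
using System;
using System.Data.SqlClient;

class BackupTfs2008Databases
{
    static void Main()
    {
        // Typical TFS 2008 database names; verify against your data tier.
        string[] databases =
        {
            "TfsIntegration", "TfsVersionControl", "TfsWorkItemTracking",
            "TfsWorkItemTrackingAttachments", "TfsBuild", "TfsActivityLogging"
        };

        using (var connection = new SqlConnection(
            "Data Source=TFS2008SQL;Initial Catalog=master;Integrated Security=True"))
        {
            connection.Open();
            foreach (string database in databases)
            {
                string sql = string.Format(
                    @"BACKUP DATABASE [{0}] TO DISK = N'D:\Backups\{0}.bak' WITH INIT",
                    database);

                using (var command = new SqlCommand(sql, connection))
                {
                    command.CommandTimeout = 0;   // backups can run for a while
                    command.ExecuteNonQuery();
                    Console.WriteLine("Backed up {0}", database);
                }
            }
        }
    }
}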
Restore Team Foundation Server 2008 Databases

Restoring the databases is much more time-consuming than just attaching them, as you need to do them one at a time. But you may be taking a backup of an operational database and need to restore all your databases to a particular point in time instead of to the latest. I am restoring to the latest unless I encounter any problems.

Figure: Restore each of the databases to either the latest or a specific point in time.

Figure: Restore all of the required databases.

Now that all of your databases are restored, you need to upgrade them to Team Foundation Server 2010.

Upgrade Team Foundation Server 2008 Databases

This is probably the easiest part of the process. You need to call a fire-and-forget command that will go off to the database specified, find the TFS 2008 databases and upgrade them to 2010. During this process all six main TFS 2008 databases are merged into the TfsVersionControl database, upgraded, and then the database is renamed to TFS_[CollectionName]. The rename applies only to the database and not the physical files, so it is worth going back and renaming the physical file as well. This keeps everything neat and tidy.

If you plan to keep the old TFS 2008 server around, for example if you are doing a test migration first, then you will need to change the TFS GUID. This GUID is unique to each TFS instance and is preserved when you upgrade. The GUID is used by the clients, and they can get a little confused if there are two servers with the same one.

To kick off the upgrade you need to open a command prompt, change the path to “C:\Program Files\Microsoft Team Foundation Server 2010\Tools” and run the “import” command in “TfsConfig”.

  TfsConfig import /sqlinstance:<Previous TFS Data Tier>
                   /collectionName:<Collection Name>
                   /confirmed

  Imports a TFS 2005 or 2008 data tier as a new project collection. Important: This command should only be executed after adequate backups have been performed. After you import, you will need to configure portal and reporting settings via the administration console.

  EXAMPLES
  --------
  TfsConfig import /sqlinstance:tfs2008sql /collectionName:imported /confirmed
  TfsConfig import /sqlinstance:tfs2008sql\Instance /collectionName:imported /confirmed

  OPTIONS:
  --------
  sqlinstance         The sql instance of the TFS 2005 or 2008 data tier. The TFS databases at that location will be modified directly and will no longer be usable as previous version databases. Ensure you have back-ups.
  collectionName      The name of the new Team Project Collection.
  confirmed           Confirm that you have backed-up databases before importing.

This command will automatically look for the TfsIntegration database and verify that all the other required databases exist. In this case it took around 5 minutes to complete the upgrade, as the total database size was under 700MB. This was unlike the upgrade of SSW’s production database with over 17GB of data, which took a few hours. At the end of the process you should get no errors and no warnings:

  The Upgrade operation on the ApplicationTier feature has completed. There were 0 errors and 0 warnings.

As this is a new server and not a pure upgrade, there should not be a problem with the GUID. If you think at any point you will be doing this more than once, for example doing a test migration or merging many TFS 2008 instances into a single one, then you should go back and rename the physical TfsVersionControl.mdf file to the same as the new collection.
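In SQL terms, that physical rename boils down to a detach, a file rename on disk, and a re-attach. The sketch below shows the idea with hypothetical database names and paths; the next paragraph walks through the same steps using the TFS admin console and Management Studio.

// Rough sketch only: detach the collection database, rename its files, re-attach.
// Database name, file paths and server are assumptions for illustration.
using System.Data.SqlClient;
using System.IO;

class RenamePhysicalFiles
{
    static void Main()
    {
        using (var connection = new SqlConnection(
            "Data Source=TFS2010SQL;Initial Catalog=master;Integrated Security=True"))
        {
            connection.Open();

            // 1. Detach the collection database (stop the collection in the TFS admin console first).
            new SqlCommand("EXEC sp_detach_db 'Tfs_MyCollection'", connection).ExecuteNonQuery();

            // 2. Rename the physical files so they match the collection name.
            File.Move(@"D:\Data\TfsVersionControl.mdf", @"D:\Data\Tfs_MyCollection.mdf");
            File.Move(@"D:\Data\TfsVersionControl_log.ldf", @"D:\Data\Tfs_MyCollection_log.ldf");

            // 3. Re-attach under the new file names.
            new SqlCommand(
                @"CREATE DATABASE [Tfs_MyCollection]
                  ON (FILENAME = N'D:\Data\Tfs_MyCollection.mdf'),
                     (FILENAME = N'D:\Data\Tfs_MyCollection_log.ldf')
                  FOR ATTACH", connection).ExecuteNonQuery();
        }
    }
}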
This will avoid confusion later down the line. To do this, detach the new collection from the server and rename the physical files, then reattach and change the physical file locations to match the new name. You can follow http://www.mssqltips.com/tip.asp?tip=1122 for a more detailed explanation of how to do this.

Figure: Stop the collection so TFS does not take a wobbly when we detach the database.

When you try to start the new collection again you will get a conflict with project names and will need to remove the Test Upgrade collection. This is fine; it just needs to be detached.

Figure: Detaching the test upgrade from the new Team Foundation Server 2010 so we can start the new collection again.

You will now be able to start the new upgraded collection, and you are ready for testing. Do you remember the stats we took off the TFS 2008 server?

TFS2008 file count:
  Type 1: 1845
  Type 2: 15770
Areas & Iterations: 139

Well, now we need to compare them to the TFS 2010 stats, remembering that there will probably be more files under source control.

TFS2010 file count:
  Type 1: 19288
Areas & Iterations: 139

Lovely: the number of iterations is the same, and the number of files is bigger. Just what we were looking for.

Testing the upgraded Team Foundation Server 2010 Project Collection

Can we connect to the new collection and project?

Figure: We can connect to the new collection and project.

Figure: Make sure you can connect to the upgraded projects and that you can see all of the files.

Figure: Team Web Access is there and working.

Note that Team Web Access now uses the same port and URL as TFS 2010. So in this case, as I am running on the local box, you need to use http://localhost:8080/tfs, which will redirect you to http://localhost:8080/tfs/web for the web access. If you need to connect with a Visual Studio 2008 client you will need to use the full path of the new collection, http://[servername]/tfs/[collectionname], and this will work with all of your collections. With Visual Studio 2005 you will only be able to connect to the Default collection, and in both VS2008 and VS2005 you will need to install the forward compatibility updates:

- Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
- Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

To make sure that you have everything up to date, run SSW Diagnostics and get all green ticks.

Upgrade Done!

At this point you can send out a notice to everyone that the upgrade is complete and give them the connection details. You need to remember that at this stage we have a 2008 project upgraded to run under TFS 2010, but it is still running under the same process template it was running before. You can only “enable” 2010 features in a process template; you can’t upgrade one. So what to do? Well, you need to create a new project and migrate the things you want to keep across. Source code is easy, as you can move or branch, but work items are more difficult as you can’t move them between projects. This instance is further complicated because the old project uses the Conchango/EMC Scrum for Team System template, and I will need to write a script/application to get the work items across with their attachments intact. That is my next task!

Technorati Tags: TFS 2010, TFS 2008, VS ALM
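As a starting point for that work item migration script, the TFS 2010 client object model can read every work item (and its attachments) out of the old project. The sketch below only lists the items; a real migration would then create matching items in the new project and copy fields, history and attachments. The collection URL and project name are placeholders.

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class ListWorkItemsToMigrate
{
    static void Main()
    {
        // Connect to the upgraded collection (URL and project name are assumptions).
        var collection = new TfsTeamProjectCollection(
            new Uri("http://localhost:8080/tfs/MyCollection"));
        var store = collection.GetService<WorkItemStore>();

        // Pull every work item of the old project via WIQL.
        WorkItemCollection items = store.Query(
            "SELECT [System.Id], [System.Title], [System.State] " +
            "FROM WorkItems WHERE [System.TeamProject] = 'OldScrumProject'");

        foreach (WorkItem item in items)
        {
            Console.WriteLine("{0}: {1} ({2} attachment(s))",
                item.Id, item.Title, item.Attachments.Count);
        }
    }
}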

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.

Part 2: SQL Server Memory Management

As with any Windows process, sqlservr.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the working set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:

1. Conventional Memory Model

The Conventional model is the default SQL Server memory model and has the following properties:
- Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions.
- OS uses 4K pages (not to be confused with SQL Server “pages”, which are 8K regions of committed memory).
- Pageable - can be paged out to disk by the operating system.

2. Locked Page Model

The locked page memory model is set when SQL Server is started with the "Lock Pages in Memory" privilege*. It has the following characteristics:
- Dynamic - can grow or shrink its working set in the same way as the Conventional model.
- OS uses 4K pages.
- Non-pageable - when memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system.

A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model. This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.

* Note: in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions), the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-bit Standard Edition it also requires trace flag 845 to be set; in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.

3. Large Page Model

The Large Page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by:
- Static - memory is allocated at startup and does not change.
- OS uses large (>2MB) pages.
- Non-pageable.

The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.

When does SQL Server grow?

For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:

- The SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.

- Memory signals from the OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.

To summarize, for SQL Server to grow you need three conditions: a workload, a max server memory setting higher than the current allocation, and high memory signals from the OS.

When does SQL Server shrink caches?

SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.

- External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and will attempt to free memory until the signals stop. To free memory SQL Server does the following:
  - Frees unused memory.
  - Notifies Memory Manager clients to release memory:
    - Caches - free unreferenced cache objects.
    - Buffer pool - based on oldest access times.
  The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.

- Internal memory pressure occurs when the size of different caches and allocations increases but the SQL Server process needs to keep its total memory within a target value. For example, if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.

Memory pressure handling has not changed much since SQL 2005, and it was described in detail in a blog post by Slava Oks.

Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.

How does SQL Server respond to changing OS memory?

In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that, thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V Dynamic Memory. SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly. In “Denali”, Standard Edition will also have sqlservr.exe support for hot-add memory when running virtualized (i.e. detecting and acting on Hyper-V Dynamic Memory allocations).

How does a SQL Server workload in a guest VM impact Hyper-V Dynamic Memory scheduling?

When a SQL workload causes the sqlservr.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.

How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?

If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), the Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above).

This raises another question: can we make SQL Server release memory more readily, and hence behave more "dynamically", without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability, but it is something we're looking into.

What Memory Management changes are there in SQL Server “Denali”?

In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlservr.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with the Hyper-V Dynamic Memory Startup and Maximum RAM settings.

Another important change is that there is no more AWE or hot-add support for the 32-bit edition. This means that if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).

In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory.

Originally posted at http://blogs.msdn.com/b/sqlosteam/
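The "job which varies max server memory" idea mentioned above can be as simple as a scheduled task that runs sp_configure with a different value. A minimal sketch follows; the connection string is an assumption, and lowering the value creates the internal memory pressure described earlier, so SQL Server hands memory back to the OS.

using System;
using System.Data.SqlClient;

class AdjustMaxServerMemory
{
    static void Main(string[] args)
    {
        // Usage (e.g. from Task Scheduler or a SQL Agent CmdExec step): AdjustMaxServerMemory 2048
        int megabytes = int.Parse(args[0]);

        using (var connection = new SqlConnection(
            "Data Source=.;Initial Catalog=master;Integrated Security=True"))
        {
            connection.Open();
            string sql =
                "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
                "EXEC sp_configure 'max server memory (MB)', " + megabytes + "; RECONFIGURE;";
            new SqlCommand(sql, connection).ExecuteNonQuery();
            Console.WriteLine("max server memory set to {0} MB", megabytes);
        }
    }
}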

    Read the article

  • Red Gate Coder interviews: Alex Davies

    - by Michael Williamson
    Alex Davies has been a software engineer at Red Gate since graduating from university, and is currently busy working on .NET Demon. We talked about tackling parallel programming with his actors framework, a scientific approach to debugging, and how JavaScript is going to affect the programming languages we use in years to come. So, if we start at the start, how did you get started in programming? When I was seven or eight, I was given a BBC Micro for Christmas. I had asked for a Game Boy, but my dad thought it would be better to give me a proper computer. For a year or so, I only played games on it, but then I found the user guide for writing programs in it. I gradually started doing more stuff on it and found it fun. I liked creating. As I went into senior school I continued to write stuff on there, trying to write games that weren’t very good. I got a real computer when I was fourteen and found ways to write BASIC on it. Visual Basic to start with, and then something more interesting than that. How did you learn to program? Was there someone helping you out? Absolutely not! I learnt out of a book, or by experimenting. I remember the first time I found a loop, I was like “Oh my God! I don’t have to write out the same line over and over and over again any more. It’s amazing!” When did you think this might be something that you actually wanted to do as a career? For a long time, I thought it wasn’t something that you would do as a career, because it was too much fun to be a career. I thought I’d do chemistry at university and some kind of career based on chemical engineering. And then I went to a careers fair at school when I was seventeen or eighteen, and it just didn’t interest me whatsoever. I thought “I could be a programmer, and there’s loads of money there, and I’m good at it, and it’s fun”, but also that I shouldn’t spoil my hobby. Now I don’t really program in my spare time any more, which is a bit of a shame, but I program all the rest of the time, so I can live with it. Do you think you learnt much about programming at university? Yes, definitely! I went into university knowing how to make computers do anything I wanted them to do. However, I didn’t have the language to talk about algorithms, so the algorithms course in my first year was massively important. Learning other language paradigms like functional programming was really good for breadth of understanding. Functional programming influences normal programming through design rather than actually using it all the time. I draw inspiration from it to write imperative programs which I think is actually becoming really fashionable now, but I’ve been doing it for ages. I did it first! There were also some courses on really odd programming languages, a bit of Prolog, a little bit of C. Having a little bit of each of those is something that I would have never done on my own, so it was important. And then there are knowledge-based courses which are about not programming itself but things that have been programmed like TCP. Those are really important for examples for how to approach things. Did you do any internships while you were at university? Yeah, I spent both of my summers at the same company. I thought I could code well before I went there. Looking back at the crap that I produced, it was only surpassed in its crappiness by all of the other code already in that company. I’m so much better at writing nice code now than I used to be back then. Was there just not a culture of looking after your code? 
There was, they just didn’t hire people for their abilities in that area. They hired people for raw IQ. The first indicator of it going wrong was that they didn’t have any computer scientists, which is a bit odd in a programming company. But even beyond that they didn’t have people who learnt architecture from anyone else. Most of them had started straight out of university, so never really had experience or mentors to learn from. There wasn’t the experience to draw from to teach each other. In the second half of my second internship, I was being given tasks like looking at new technologies and teaching people stuff. Interns shouldn’t be teaching people how to do their jobs! All interns are going to have little nuggets of things that you don’t know about, but they shouldn’t consistently be the ones who know the most. It’s not a good environment to learn. I was going to ask how you found working with people who were more experienced than you… When I reached Red Gate, I found some people who were more experienced programmers than me, and that was difficult. I’ve been coding since I was tiny. At university there were people who were cleverer than me, but there weren’t very many who were more experienced programmers than me. During my internship, I didn’t find anyone who I classed as being a noticeably more experienced programmer than me. So, it was a shock to the system to have valid criticisms rather than just formatting criticisms. However, Red Gate’s not so big on the actual code review, at least it wasn’t when I started. We did an entire product release and then somebody looked over all of the UI of that product which I’d written and say what they didn’t like. By that point, it was way too late and I’d disagree with them. Do you think the lack of code reviews was a bad thing? I think if there’s going to be any oversight of new people, then it should be continuous rather than chunky. For me I don’t mind too much, I could go out and get oversight if I wanted it, and in those situations I felt comfortable without it. If I was managing the new person, then maybe I’d be keener on oversight and then the right way to do it is continuously and in very, very small chunks. Have you had any significant projects you’ve worked on outside of a job? When I was a teenager I wrote all sorts of stuff. I used to write games, I derived how to do isomorphic projections myself once. I didn’t know what the word was so I couldn’t Google for it, so I worked it out myself. It was horrifically complicated. But it sort of tailed off when I started at university, and is now basically zero. If I do side-projects now, they tend to be work-related side projects like my actors framework, NAct, which I started in a down tools week. Could you explain a little more about NAct? It is a little C# framework for writing parallel code more easily. Parallel programming is difficult when you need to write to shared data. Sometimes parallel programming is easy because you don’t need to write to shared data. When you do need to access shared data, you could just have your threads pile in and do their work, but then you would screw up the data because the threads would trample on each other’s toes. You could lock, but locks are really dangerous if you’re using more than one of them. You get interactions like deadlocks, and that’s just nasty. Actors instead allows you to say this piece of data belongs to this thread of execution, and nobody else can read it. 
If you want to read it, then ask that thread of execution for a piece of it by sending a message, and it will send the data back by a message. And that avoids deadlocks as long as you follow some obvious rules about not making your actors sit around waiting for other actors to do something. There are lots of ways to write actors, NAct allows you to do it as if it was method calls on other objects, which means you get all the strong type-safety that C# programmers like. Do you think that this is suitable for the majority of parallel programming, or do you think it’s only suitable for specific cases? It’s suitable for most difficult parallel programming. If you’ve just got a hundred web requests which are all independent of each other, then I wouldn’t bother because it’s easier to just spin them up in separate threads and they can proceed independently of each other. But where you’ve got difficult parallel programming, where you’ve got multiple threads accessing multiple bits of data in multiple ways at different times, then actors is at least as good as all other ways, and is, I reckon, easier to think about. When you’re using actors, you presumably still have to write your code in a different way from you would otherwise using single-threaded code. You can’t use actors with any methods that have return types, because you’re not allowed to call into another actor and wait for it. If you want to get a piece of data out of another actor, then you’ve got to use tasks so that you can use “async” and “await” to await asynchronously for it. But other than that, you can still stick things in classes so it’s not too different really. Rather than having thousands of objects with mutable state, you can use component-orientated design, where there are only a few mutable classes which each have a small number of instances. Then there can be thousands of immutable objects. If you tend to do that anyway, then actors isn’t much of a jump. If I’ve already built my system without any parallelism, how hard is it to add actors to exploit all eight cores on my desktop? Usually pretty easy. If you can identify even one boundary where things look like messages and you have components where some objects live on one side and these other objects live on the other side, then you can have a granddaddy object on one side be an actor and it will parallelise as it goes across that boundary. Not too difficult. If we do get 1000-core desktop PCs, do you think actors will scale up? It’s hard. There are always in the order of twenty to fifty actors in my whole program because I tend to write each component as actors, and I tend to have one instance of each component. So this won’t scale to a thousand cores. What you can do is write data structures out of actors. I use dictionaries all over the place, and if you need a dictionary that is going to be accessed concurrently, then you could build one of those out of actors in no time. You can use queuing to marshal requests between different slices of the dictionary which are living on different threads. So it’s like a distributed hash table but all of the chunks of it are on the same machine. That means that each of these thousand processors has cached one small piece of the dictionary. I reckon it wouldn’t be too big a leap to start doing proper parallelism. Do you think it helps if actors get baked into the language, similarly to Erlang? Erlang is excellent in that it has thread-local garbage collection. 
C# doesn’t, so there’s a limit to how well C# actors can possibly scale because there’s a single garbage collected heap shared between all of them. When you do a global garbage collection, you’ve got to stop all of the actors, which is seriously expensive, whereas in Erlang garbage collections happen per-actor, so they’re insanely cheap. However, Erlang deviated from all the sensible language design that people have used recently and has just come up with crazy stuff. You can definitely retrofit thread-local garbage collection to .NET, and then it’s quite well-suited to support actors, even if it’s not baked into the language. Speaking of language design, do you have a favourite programming language? I’ll choose a language which I’ve never written before. I like the idea of Scala. It sounds like C#, only with some of the niggles gone. I enjoy writing static types. It means you don’t have to writing tests so much. When you say it doesn’t have some of the niggles? C# doesn’t allow the use of a property as a method group. It doesn’t have Scala case classes, or sum types, where you can do a switch statement and the compiler checks that you’ve checked all the cases, which is really useful in functional-style programming. Pattern-matching, in other words. That’s actually the major niggle. C# is pretty good, and I’m quite happy with C#. And what about going even further with the type system to remove the need for tests to something like Haskell? Or is that a step too far? I’m quite a pragmatist, I don’t think I could deal with trying to write big systems in languages with too few other users, especially when learning how to structure things. I just don’t know anyone who can teach me, and the Internet won’t teach me. That’s the main reason I wouldn’t use it. If I turned up at a company that writes big systems in Haskell, I would have no objection to that, but I wouldn’t instigate it. What about things in C#? For instance, there’s contracts in C#, so you can try to statically verify a bit more about your code. Do you think that’s useful, or just not worthwhile? I’ve not really tried it. My hunch is that it needs to be built into the language and be quite mathematical for it to work in real life, and that doesn’t seem to have ended up true for C# contracts. I don’t think anyone who’s tried them thinks they’re any good. I might be wrong. On a slightly different note, how do you like to debug code? I think I’m quite an odd debugger. I use guesswork extremely rarely, especially if something seems quite difficult to debug. I’ve been bitten spending hours and hours on guesswork and not being scientific about debugging in the past, so now I’m scientific to a fault. What I want is to see the bug happening in the debugger, to step through the bug happening. To watch the program going from a valid state to an invalid state. When there’s a bug and I can’t work out why it’s happening, I try to find some piece of evidence which places the bug in one section of the code. From that experiment, I binary chop on the possible causes of the bug. I suppose that means binary chopping on places in the code, or binary chopping on a stage through a processing cycle. Basically, I’m very stupid about how I debug. I won’t make any guesses, I won’t use any intuition, I will only identify the experiment that’s going to binary chop most effectively and repeat rather than trying to guess anything. I suppose it’s quite top-down. Is most of the time then spent in the debugger? 
Absolutely, if at all possible I will never debug using print statements or logs. I don’t really hold much stock in outputting logs. If there’s any bug which can be reproduced locally, I’d rather do it in the debugger than outputting logs. And with SmartAssembly error reporting, there’s not a lot that can’t be either observed in an error report and just fixed, or reproduced locally. And in those other situations, maybe I’ll use logs. But I hate using logs. You stare at the log, trying to guess what’s going on, and that’s exactly what I don’t like doing. You have to just look at it and see does this look right or wrong. We’ve covered how you get to grip with bugs. How do you get to grips with an entire codebase? I watch it in the debugger. I find little bugs and then try to fix them, and mostly do it by watching them in the debugger and gradually getting an understanding of how the code works using my process of binary chopping. I have to do a lot of reading and watching code to choose where my slicing-in-half experiment is going to be. The last time I did it was SmartAssembly. The old code was a complete mess, but at least it did things top to bottom. There wasn’t too much of some of the big abstractions where flow of control goes all over the place, into a base class and back again. Code’s really hard to understand when that happens. So I like to choose a little bug and try to fix it, and choose a bigger bug and try to fix it. Definitely learn by doing. I want to always have an aim so that I get a little achievement after every few hours of debugging. Once I’ve learnt the codebase I might be able to fix all the bugs in an hour, but I’d rather be using them as an aim while I’m learning the codebase. If I was a maintainer of a codebase, what should I do to make it as easy as possible for you to understand? Keep distinct concepts in different places. And name your stuff so that it’s obvious which concepts live there. You shouldn’t have some variable that gets set miles up the top of somewhere, and then is read miles down to choose some later behaviour. I’m talking from a very much SmartAssembly point of view because the old SmartAssembly codebase had tons and tons of these things, where it would read some property of the code and then deal with it later. Just thousands of variables in scope. Loads of things to think about. If you can keep concepts separate, then it aids me in my process of fixing bugs one at a time, because each bug is going to more or less be understandable in the one place where it is. And what about tests? Do you think they help at all? I’ve never had the opportunity to learn a codebase which has had tests, I don’t know what it’s like! What about when you’re actually developing? How useful do you find tests in finding bugs or regressions? Finding regressions, absolutely. Running bits of code that would be quite hard to run otherwise, definitely. It doesn’t happen very often that a test finds a bug in the first place. I don’t really buy nebulous promises like tests being a good way to think about the spec of the code. My thinking goes something like “This code works at the moment, great, ship it! Ah, there’s a way that this code doesn’t work. 
Okay, write a test, demonstrate that it doesn’t work, fix it, use the test to demonstrate that it’s now fixed, and keep the test for future regressions.” The most valuable tests are for bugs that have actually happened at some point, because bugs that have actually happened at some point, despite the fact that you think you’ve fixed them, are way more likely to appear again than new bugs are. Does that mean that when you write your code the first time, there are no tests? Often. The chance of there being a bug in a new feature is relatively unaffected by whether I’ve written a test for that new feature because I’m not good enough at writing tests to think of bugs that I would have written into the code. So not writing regression tests for all of your code hasn’t affected you too badly? There are different kinds of features. Some of them just always work, and are just not flaky, they just continue working whatever you throw at them. Maybe because the type-checker is particularly effective around them. Writing tests for those features which just tend to always work is a waste of time. And because it’s a waste of time I’ll tend to wait until a feature has demonstrated its flakiness by having bugs in it before I start trying to test it. You can get a feel for whether it’s going to be flaky code as you’re writing it. I try to write it to make it not flaky, but there are some things that are just inherently flaky. And very occasionally, I’ll think “this is going to be flaky” as I’m writing, and then maybe do a test, but not most of the time. How do you think your programming style has changed over time? I’ve got clearer about what the right way of doing things is. I used to flip-flop a lot between different ideas. Five years ago I came up with some really good ideas and some really terrible ideas. All of them seemed great when I thought of them, but they were quite diverse ideas, whereas now I have a smaller set of reliable ideas that are actually good for structuring code. So my code is probably more similar to itself than it used to be back in the day, when I was trying stuff out. I’ve got more disciplined about encapsulation, I think. There are operational things like I use actors more now than I used to, and that forces me to use immutability more than I used to. The first code that I wrote in Red Gate was the memory profiler UI, and that was an actor, I just didn’t know the name of it at the time. I don’t really use object-orientation. By object-orientation, I mean having n objects of the same type which are mutable. I want a constant number of objects that are mutable, and they should be different types. I stick stuff in dictionaries and then have one thing that owns the dictionary and puts stuff in and out of it. That’s definitely a pattern that I’ve seen recently. I think maybe I’m doing functional programming. Possibly. It’s plausible. If you had to summarise the essence of programming in a pithy sentence, how would you do it? Programming is the form of art that, without losing any of the beauty of architecture or fine art, allows you to produce things that people love and you make money from. So you think it’s an art rather than a science? It’s a little bit of engineering, a smidgeon of maths, but it’s not science. Like architecture, programming is on that boundary between art and engineering. If you want to do it really nicely, it’s mostly art. 
You can get away with doing architecture and programming entirely by having a good engineering mind, but you’re not going to produce anything nice. You’re not going to have joy doing it if you’re an engineering mind. Architects who are just engineering minds are not going to enjoy their job. I suppose engineering is the foundation on which you build the art. Exactly. How do you think programming is going to change over the next ten years? There will be an unfortunate shift towards dynamically-typed languages, because of JavaScript. JavaScript has an unfair advantage. JavaScript’s unfair advantage will cause more people to be exposed to dynamically-typed languages, which means other dynamically-typed languages crop up and the best features go into dynamically-typed languages. Then people conflate the good features with the fact that it’s dynamically-typed, and more investment goes into dynamically-typed languages. They end up better, so people use them. What about the idea of compiling other languages, possibly statically-typed, to JavaScript? It’s a reasonable idea. I would like to do it, but I don’t think enough people in the world are going to do it to make it pick up. The hordes of beginners are the lifeblood of a language community. They are what makes there be good tools and what makes there be vibrant community websites. And any particular thing which is the same as JavaScript only with extra stuff added to it, although it might be technically great, is not going to have the hordes of beginners. JavaScript is always to be quickest and easiest way for a beginner to start programming in the browser. And dynamically-typed languages are great for beginners. Compilers are pretty scary and beginners don’t write big code. And having your errors come up in the same place, whether they’re statically checkable errors or not, is quite nice for a beginner. If someone asked me to teach them some programming, I’d teach them JavaScript. If dynamically-typed languages are great for beginners, when do you think the benefits of static typing start to kick in? The value of having a statically typed program is in the tools that rely on the static types to produce a smooth IDE experience rather than actually telling me my compile errors. And only once you’re experienced enough a programmer that having a really smooth IDE experience makes a blind bit of difference, does static typing make a blind bit of difference. So it’s not really about size of codebase. If I go and write up a tiny program, I’m still going to get value out of writing it in C# using ReSharper because I’m experienced with C# and ReSharper enough to be able to write code five times faster if I have that help. Any other visions of the future? Nobody’s going to use actors. Because everyone’s going to be running on single-core VMs connected over network-ready protocols like JSON over HTTP. So, parallelism within one operating system is going to die. But until then, you should use actors. More Red Gater Coder interviews
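To make the actor idea discussed in the interview concrete, here is a minimal hand-rolled actor in C#. This is not NAct's actual API, just the underlying pattern: state owned by a single message loop, mutated only via queued messages, with reads exposed as Tasks so callers can use async/await instead of blocking.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// A minimal actor: all access to its private state is funnelled through one
// message queue, so no locks are needed. Illustrative only, not NAct's API.
class CounterActor
{
    private readonly BlockingCollection<Action> mailbox = new BlockingCollection<Action>();
    private int count; // owned exclusively by the actor's message loop

    public CounterActor()
    {
        // One long-running task processes messages one at a time.
        Task.Factory.StartNew(() =>
        {
            foreach (Action message in mailbox.GetConsumingEnumerable())
                message();
        }, TaskCreationOptions.LongRunning);
    }

    // Fire-and-forget message: no return value, so the caller never blocks.
    public void Increment()
    {
        mailbox.Add(() => count++);
    }

    // Reads are asked for asynchronously rather than returned directly.
    public Task<int> GetCountAsync()
    {
        var tcs = new TaskCompletionSource<int>();
        mailbox.Add(() => tcs.SetResult(count));
        return tcs.Task;
    }
}

A caller would write something like "int total = await counter.GetCountAsync();", which mirrors the interview's point that actor methods cannot simply return values and instead hand back Tasks.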

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in.

To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as ‘Doing real-world TDD in .NET, with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make: it’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and I myself have also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spend my time exclusively on stating the obvious…

So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:

First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I have found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the other two parts are ‘only’ there to make its production possible, to give it decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:
- Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.
- In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.
- I normally use an IoC container or some sort of self-written provider-like model in my architecture (a minimal sketch of such a provider follows below).
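As an aside, here is the kind of minimal hand-rolled provider meant by that last point. It is purely illustrative and is not the LinFu container that the article actually uses below; the type and method names are assumptions.

using System;
using System.Collections.Generic;

// A tiny hand-rolled service provider: register a factory per service
// interface and resolve it later. Illustrative only -- the article uses LinFu.
static class ServiceProvider
{
    private static readonly Dictionary<Type, Func<object>> factories =
        new Dictionary<Type, Func<object>>();

    public static void Register<TService>(Func<TService> factory) where TService : class
    {
        factories[typeof(TService)] = () => factory();
    }

    public static TService GetService<TService>() where TService : class
    {
        return (TService)factories[typeof(TService)]();
    }
}

// Usage: ServiceProvider.Register<ICalculator>(() => new Calculator());
//        ICalculator calculator = ServiceProvider.GetService<ICalculator>();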
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

1. Test-driven development is the normal, natural way of writing software; code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…)

2. It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…) The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly because software development standards are only beginning to evolve. Historically speaking, the entire software development profession is very young and still at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The round-trip described here will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the product manufactured this way is of much higher quality and massively reduces maintenance costs, which are by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have…

Some last words

First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The method described above proved to be very much “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: if the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
    With over 10 million hits a day, funda.nl is probably the largest ASP.NET website which uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can.And yes, Solr is very fast. We did do some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bits Solr instance; and that's including converting search-url's to Solr http-queries and deserializing Solr's result-XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-) What is faceted search? Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all: The SQL solution for faceted search Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns:   So if a user was searching for real estate properties in the city 'Amsterdam', our facet-query would be something like: SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...) FROM Houses WHERE city = 'Amsterdam' While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. And also, performing COUNT's on all matched rows only performs well if you have a limited amount of rows in a table (i.e. less than a million). Enter Solr "Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr) Solr isn't a database, it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index-pages of a book). This way Solr can lookup data very quickly. To explain the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages. Getting faceted search results from Solr is very easy; first let me show you how to send a http-query to Solr:    http://localhost:8983/solr/select?q=city:Amsterdam This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):    <response>     <result name="response" numFound="3" start="0">         <doc>            <long name="id">3203</long>            <str name="city">Amsterdam</str>            <str name="steet">Keizersgracht</str>            <int name="numberOfBathrooms">2</int>        </doc>         <doc>             <long name="id">3205</long>             <str name="city">Amsterdam</str>             <str name="steet">Vondelstraat</str>             <int name="numberOfBathrooms">3</int>          </doc>          <doc>             <long name="id">4293</long>             <str name="city">Amsterdam</str>             <str name="steet">Wibautstraat</str>             <int name="numberOfBathrooms">2</int>          </doc>       </result>   </response> By adding a facet-querypart for the field "numberOfBathrooms", Solr will return the facets for this particular field. 
We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms.

   http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms

The complete XML response from Solr now looks like:

   <response>
      <result name="response" numFound="3" start="0">
         <doc>
            <long name="id">3203</long>
            <str name="city">Amsterdam</str>
            <str name="steet">Keizersgracht</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
         <doc>
            <long name="id">3205</long>
            <str name="city">Amsterdam</str>
            <str name="steet">Vondelstraat</str>
            <int name="numberOfBathrooms">3</int>
         </doc>
         <doc>
            <long name="id">4293</long>
            <str name="city">Amsterdam</str>
            <str name="steet">Wibautstraat</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
      </result>
      <lst name="facet_fields">
         <lst name="numberOfBathrooms">
            <int name="2">2</int>
            <int name="3">1</int>
         </lst>
      </lst>
   </response>

Trying Solr for yourself

To run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes 1 hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial, you're using Jetty as a Java Servlet Container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs like a Windows service and works more like .NET developers expect. See the SolrTomcat page.

Some best practices for running Solr on Windows:

- Use the 64-bit version of Tomcat. In our tests, this doubled the req/sec we were able to handle!
- Use a .NET XmlReader to convert Solr's XML output-stream to .NET objects. Don't use XPath; it won't scale well. (A rough sketch of this follows below.)
- Use filter queries ("fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance).

In my next post I’ll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
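To make the XmlReader advice above a bit more concrete, here is a minimal, purely illustrative sketch (this is not funda.nl's code; the class name, method name and usage are my own assumptions) that pulls the facet counts for a given field out of a Solr response like the one shown earlier:

using System.Collections.Generic;
using System.Xml;

public static class SolrFacetParser
{
    // Reads the <lst name="..."> block inside <lst name="facet_fields"> and
    // returns each facet value with its hit count, e.g. { "2" -> 2, "3" -> 1 }.
    public static IDictionary<string, int> ReadFacetCounts(XmlReader reader, string facetField)
    {
        var counts = new Dictionary<string, int>();
        bool inRequestedFacet = false;

        reader.MoveToContent();
        while (!reader.EOF)
        {
            if (reader.NodeType == XmlNodeType.Element
                && reader.Name == "lst"
                && reader.GetAttribute("name") == facetField)
            {
                inRequestedFacet = true;   // entered e.g. <lst name="numberOfBathrooms">
                reader.Read();
                continue;
            }

            if (inRequestedFacet && reader.NodeType == XmlNodeType.Element && reader.Name == "int")
            {
                string facetValue = reader.GetAttribute("name");
                // ReadElementContentAsInt consumes the element and already advances
                // the reader, so we must not call Read() again for this node.
                counts[facetValue] = reader.ReadElementContentAsInt();
                continue;
            }

            if (inRequestedFacet && reader.NodeType == XmlNodeType.EndElement && reader.Name == "lst")
            {
                break;                     // finished the requested facet list
            }

            reader.Read();
        }

        return counts;
    }
}

Fed with the sample response above (for instance via XmlReader.Create over the HTTP response stream, or over a StringReader when testing), this would yield two hits for "2" bathrooms and one hit for "3" - the same numbers as in the facet_fields section. A real client would wrap this in whatever typed result object its search layer uses.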

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane.

2007

Spatial Database Definition and Research Documents
Here is the definition of a spatial database from Wikipedia: A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types.

Select Only Date Part From DateTime – Best Practice
A very common question which I receive is how to get only the date or time part from a datetime value. In this blog post I explain the same in very simple words.

T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table
I have received a few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was: can this be done using a CTE? Absolutely! What about performance? It is identical! Please refer to the above-mentioned article for the history of paging.

SQL SERVER – Cannot resolve collation conflict for equal to operation
One of the very first errors I ever encountered in my career was resolving this conflict. I have blogged about it, and I have realized that many others like me are facing this error.

LEN and DATALENGTH of NULL Simple Example
Here is a question for you: what is the LEN of a NULL value? Well, it is very easy – just read the blog.

Recovery Models and Selection
A very simple and easy explanation of the database backup recovery models and how to select the best option for you.

Explanation SQL SERVER Hash Join
A hash join gives the best performance when two or more tables are joined and at least one of them has no index or is not sorted. It is also expected that the smaller of the tables can be read into memory completely (though this is not necessary).

Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY
SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns

NorthWind Database or AdventureWorks Database – Sample Databases
In this blog post we learn how to install the Northwind database. I also shared the source where one can download this database, as it is used in many examples in the MSDN help files.

sp_HelpText for sp_HelpText – Puzzle
A simple quick puzzle – do you know the answer to it? If not, go ahead and read the blog.

2008

SQL SERVER – 2008 – Step By Step Installation Guide With Images
When SQL Server 2008 was newly introduced, lots of people had no clue how to install SQL Server 2008, and the number of questions I used to receive was overwhelming. I wrote this blog post in the spirit that it would help all the newbies install SQL Server 2008 with the help of images. Still today this blog post has been the bible for all of the people who are confused by SQL Server installation.

Inline Variable Assignment
I loved this feature. I have always wanted this feature to be present in SQL Server. The last time I met developers from Microsoft SQL Server, I talked about this feature. I think this feature saves some time but also makes the code more readable.
Introduction to Policy Management – Enforcing Rules on SQL Server
If our company policy is to create all stored procedures with the prefix ‘usp’, then developers should simply be prevented from creating stored procedures with any other prefix. Let us see a small tutorial on how to create conditions and a policy which will prevent any future SP from being created with any other prefix.

2009

Performance Counters from System Views – By Kevin Mckenna
Many of you are not aware of the fact that access to performance information is readily available in SQL Server, and that too without querying performance counters using a custom application or via perfmon. Till now this fact has remained undisclosed, but through this post I would like to explain how you can easily access SQL Server performance counter information. Without putting in much effort you will come across the system view sys.dm_os_performance_counters. As the name suggests, this provides you easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via T-SQL.

Customize Toolbar – Remove Debug Button from Toolbar
I was fond of the SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The button of the debugger is similar to a play button and is used to run debugging commands of Visual Studio. Because of this, it gets very infuriating for developers when they are developing on both Visual Studio and SSMS. Let us now see how we can remove the debugging button from SQL Server Management Studio.

Effect of Normalization on Index and Performance
A very interesting conversation which started on Twitter. If you want to read one link, this is the link I encourage you to read.

SSMS Feature – Multi-server Queries
Using SQL Server Management Studio (SSMS), DBAs can now query multiple servers from one window. It is quite common for DBAs with a large number of servers to maintain and gather information from multiple SQL Servers and create reports. This feature is a blessing for the DBAs, as they can now assemble all the information instantaneously without going anywhere.

Query Optimizer Hint ROBUST PLAN – Question to You
“ROBUST PLAN” is a kind of query hint which works quite differently than other hints. It does not improve joins or force any indexes to be used; it just makes sure that a query does not crash due to an over-the-limit row size. Let me elaborate upon it in the blog post.

2010

Do you really know the difference between the various date functions available in SQL Server 2012? Here is a three part story where we explored the same with examples:
Fastest Way to Restore the Database
Difference Between DATETIME and DATETIME2
Difference Between DATETIME and DATETIME2 – WITH GETDATE

Shrinking NDF and MDF Files – Readers’ Opinion
Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say that shrinking a database is bad and evil, here I am saying it first and loud. Now, Imran's comment was written while keeping in mind only the process showing how the shrink database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections.
2011

Solution – Puzzle – SELECT * vs SELECT COUNT(*)
This is a very interesting question, and I am very confident that not everyone knows the answer to it. Let me ask you again – which will be faster, SELECT * or SELECT COUNT(*), or do you think this is an apples-and-oranges comparison?

2012

Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage
In SQL Server 2012 there are a few enhancements with regard to SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Let us understand the entire discussion with the help of three different examples.

Finding Size of a Columnstore Index Using DMVs
One of the very common questions I often see is the need for a list of columnstore indexes along with their size and corresponding table name. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be more advanced scripts to retrieve details related to components associated with the columnstore index. However, I believe the following script is sufficient to start getting an idea of columnstore index size.

Developer Training Resources and Summary Roundup

Developer Training - Importance and Significance - Part 1
In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well-trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion about training employees, and it is the most important task.

Developer Training – Employee Morals and Ethics – Part 2
In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and the employer.

Developer Training – Difficult Questions and Alternative Perspective - Part 3
This part was the most difficult to write, as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization.

Developer Training – Various Options for Developer Training – Part 4
In this part I tried to explore a few methods and options for training. The generic feedback I received on this blog post was that it was short and that I should have explored each of the subjects of the training in detail. I believe there are two big buckets of training: 1) Instructor-Led Training and 2) Self-Led Training.

Developer Training – A Conclusive Summary - Part 5
There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. That “change is the only constant” and “adapt & overcome” are essential lessons of life. One cannot stop learning and resist change. In the IT industry the “ego of knowing all” and the “resistance to change” are the most challenging issues.

A Quick Look at Logging and Ideas around Logging
Question: What is the first thing that comes to your mind when you hear the word “Logging”? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one.
Beginning Performance Tuning with SQL Server Execution Plan

Solution of Puzzle – Swap Value of Column Without Case Statement
Earlier this week I asked a question about how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I have proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the qualified entries.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Part 2: Career development as a Software Developer without becoming a manager.

    - by albertpascual
Seems like my previous post, inspired by the work of Michael “Doc” Norton, was a great success, judging by the amount of email I have received. I am still amazed how many people didn’t want to discuss their questions in the comments section. I would encourage people to be more public; still, I would like to reply to all of you on this public medium. I still welcome those emails. What I found out is that many people feel like me: they want to be developers and still be compensated for their experience without having to take a job as a manager. Their perfect day is a full day of coding and learning. Many believe their companies will never pay a manager’s salary to a developer no matter what. Most of you ask how to get the ball rolling, and it is the latter group that I’m addressing here; the former group will never try.

What companies understand developers value and where can I find them?

This is a very difficult question to answer; I don’t have a list of those companies or departments. I have in the past seen signs of companies bending over backwards to compensate, in more ways than monetary, a developer that is a good resource to them. Allowing the person to move out of state and still letting them work for the company from home is a sign that the company goes by individual cases. Allowing them to go to a conference that will not benefit the company is another big sign. Simple signs like flexible hours and letting some people work from home. To see those signs you need to have been working in that company for a while and look at the departments where the manager is taking care of their employees as individual cases. Look for the department where people quietly get extra perks, where some people in the department work from home or remotely. In my experience, though not always true, medium to big companies are quicker to recognize good developers. Then again, some companies just don’t get it, and that is when you see many technical people managing developers. For all the people that email me stating that developers can also be very good managers: I do not disagree. I just think that a good developer loves writing code, and when you remove that part, the better salary isn’t enough to keep a developer happy. Burned out developers appreciate being promoted to managers.

How do I know I work in a bad company?

In my experience I have been a consultant and seen many companies; a few signs I have learned about companies that will not recognize good developers are: When the turnover is pretty high, when developers are moving out at a high rate, no rocket scientist needs to tap you on the shoulder. When the company is always looking to outsource its development resources. The product is not that interesting, nor does the company care too much about the final result and support. Code sweat shops. You’ll know when you start working in one of those. Run for the hills!

Where do I start?

Disclaimer: I have only based this post on Michael “Doc” Norton; this is just my interpretation and ideas. First thing is to look at Michael “Doc” Norton’s presentation Take Control of Your Development Career http://docondev.blogspot.com/ That should be the first thing any developer looks at and follows as if it were a pattern. I would personally recommend finding some language or pattern you are interested in and learning it; learn something that will make you happy. Second, join a User Group and get involved in the community. There are hundreds of user groups, and I’m sure you’ll find one in your city or near your town. Code Camps and Developer Meet Ups are also good resources.
Third, I would join an open source project you are interested in or, better yet, create a new open source project with the new technology that you have learned, and get coding. Fourth, create a Twitter account and follow the people that talk about the technology you are interested in. If you follow these 4 steps I think you’ll be on your way; when they are complete and you release your open source project, you can say that you accomplished the first steps. Now, do not expect anything to change in your career life; you are changing and should not expect anything in return, besides borrowing some time from sleeping and your family. Creating a good schedule may help you; I find wasted time in many places that I can use. Flying for work is actually one of those: it allows me to do my best work on an airplane, and I don’t need to borrow time from anywhere else. Making sure you always have a light, charged laptop is so important.

Next steps following the Michael “Doc” Norton Pattern, or my interpretation of it: First, help run a user group or, better yet, start a new user group. I’ll add as well: go to one conference a year and to free development events around your city: Code Camps, Geek Dinners, etc. There are many free events sponsored by different companies for developers to get to know their products; I highly recommend those as the way to get connected. Second, choose a mentor. This is a very hard thing to do, in my experience; find an expert in the technology you are learning that has the time for you. It is difficult; I wish you the best of luck. Third, learn another technology or pattern; open your horizons a little bit more. Why not? If you had fun previously, keep doing it. Fourth, get involved in forums to answer and ask questions; getting noticed in public forums is rewarding for your ego after such a long journey.

Final steps following the Michael “Doc” Norton Pattern: Teach what you know, become humble about your knowledge, find as many opportunities as you can to teach and to get involved with the community, and bring all that to your day job. Mr. Norton talks about getting naked: expose yourself to others in what you know and what you do not know. You are never too important for small opportunities, yet don’t be afraid to take on anything big and learn from the experience. Anytime you have the opportunity to talk to somebody who has reached the point where the community knows his or her name, it means that you should learn from them. Take opportunities that won’t make you money, yet will make you happy. Sometimes you need to spend money and time. Register for talks at Code Camps and Dev Meet Ups; those are free. Also go to Conferences, Development Summits and Geek Dinners, for example. One day, people will pay you to attend.

When will all this pay off? I don’t know. I’m still on the path. There are a few things that during your journey may serve as little acknowledgements that you are on the correct path. In my case I think those are the little signs that tell you about your journey. I was awarded the Microsoft Most Valuable Professional for ASP.NET in 2007, 2008, 2009 and 2010. I was selected to speak at DevConnections in Las Vegas in 2010 and Orlando in 2011. I do believe that I have a long way to go, yet what I do makes me happy and I hope I can keep doing it for years to come. Every year I can see an improvement in my code, and more frameworks and languages are under my belt; I have learned to embrace them all. In my daily job I have been able to work on a few projects beyond my department.
I’m a learner and believer in the Michael “Doc” Norton pattern, and I am looking forward to learning more about it so I can apply it better. In my short journey I now see my mistakes. I did a few things right: I have been listening to the intelligent people and have not been afraid to move along with the technology changes. In my professional life, I have tried to avoid being placed in only one technology and product. I have always shared my code and never confused anybody that wanted to take over any of my projects; I didn’t think of anything I created as my own, nor did I care too much when politics didn’t see my vision. I stayed flexible, ready and visible, yet humble. I kept my head just below the clouds and avoided managers’ meetings. I credit my manager for my success, and I publicly faulted only myself for the failures. Hope this helps. Cheers, Al Follow me on Twitter  Read my previous post

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx

Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples normally translate. I also saw that it had pretty good reviews.

What I learned: I learned a great deal about storage, and many useful code snippets. I do think that there could have been more of a focus on the Application Fabric - but of course that wasn’t as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric, I would rather see sensitive data left in a hybrid fashion on premise and connected to from the Azure application. Even so, the examples were very useful. If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better.

Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process.

Programming Windows Azure by Sriram Krishnan: Learning about how to select applications suitable for Distributed Technology. The Application Fabric gets the least attention, probably because it was newer at the time. Very clear (Chapter One). Good foundation. Background and history, but not too much. I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts.
Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy-switch in and switch out.  (Chapter 4) There are two Runtime API’s, one of external and one for internal. Realizing how powerful this paradigm really is. Some places seem light, and to drop off but perhaps that’s best. Managing API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. API requires package moves to Blob storage first, so it needs a URL. Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file  Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well-thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal, not required to code that. (Chapter 7) REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about CloudDrive code – PowerShell sample that mounts Blob storage as a local provider. Works against Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (Iaas). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue Messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of message in a disconnected system. Order is not guaranteed for a message, so if you need that set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 Properties  Use “ID” for the class to indicate the key value, or use the [DataServiceKey] Attribute.  LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description on the process of selecting the Partition and Row Key.  
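As a purely illustrative aside on the partition/row key and [DataServiceKey] notes above (this is my own sketch, not code from the book; the entity name and properties are invented), an entity for the WCF Data Services-based table client could be shaped roughly like this:

using System;
using System.Data.Services.Common;

// Marks PartitionKey + RowKey as the composite key so the WCF Data Services
// client knows how to address the entity. Everything below is hypothetical.
[DataServiceKey("PartitionKey", "RowKey")]
public class CustomerEntity
{
    // The partition key groups related rows; entity group transactions and the
    // fastest point queries only work within a single partition.
    public string PartitionKey { get; set; }   // e.g. a region or customer segment

    // The row key must be unique within its partition.
    public string RowKey { get; set; }         // e.g. a customer id

    // Maintained by the table service; backs the ETag used for optimistic concurrency.
    public DateTime Timestamp { get; set; }

    // Up to 255 properties per entity, as noted above.
    public string Name { get; set; }
    public string City { get; set; }
}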
When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.  On deleting a storage object, it is instantly unavailable, however a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.  Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a-la the Porter Stemming Algorithm. (Chapter 11)  Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBA’s in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, makes a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the Etag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. Chapter seems out of place, and should be combined with the Blob chapter.  (Chapter 13) on SQL Azure is dated, although the base concepts are OK.  Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server Really) database.  
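Picking up that last note - plain ADO.NET against SQL Azure - here is a minimal sketch of what that looks like. It is not the book's example; the server, database, credentials and query below are placeholders:

using System;
using System.Data.SqlClient;

class SqlAzureSample
{
    static void Main()
    {
        // SQL Azure is reached like any other SQL Server: a regular connection
        // string, with Encrypt=True and the login given in user@server form.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydatabase;User ID=myuser@myserver;Password=<placeholder>;" +
            "Encrypt=True;Connection Timeout=30;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Customers", connection))
        {
            connection.Open();
            int rowCount = (int)command.ExecuteScalar();
            Console.WriteLine("Rows: {0}", rowCount);
        }
    }
}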

    Read the article

  • VLC 2.0.3 on Lubuntu 12.04: No audio?

    - by drezabek
    I am on Lubuntu 12.04, and I have installed VLC media player version 2.0.3. When I try and play an audio file, it appears to load fine, and the media position bar displays the progress, and it says it is playing, but I can't here any thing through my speakers. I can hear game audio, web audio, and audio from SMPlayer just fine, but with VLC, I can't here anything. Below is the "Messages" output with the verbosity option set to "2 (debug)" main debug: processing request item: The Bottom, node: Playlist, skip: 0 main debug: resyncing on The Bottom main debug: The Bottom is at 0 main debug: starting playback of the new playlist item main debug: resyncing on The Bottom main debug: The Bottom is at 0 main debug: creating new input thread main debug: Creating an input for 'The Bottom' main debug: TIMER input launching for 'Floex - Machinarium Soundtrack - 01 The Bottom.flac' : 23.706 ms - Total 23.706 ms / 1 intvls (Avg 23.706 ms) main debug: using timeshift granularity of 50 MiB, in path '/tmp' main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' gives access `file' demux `' path `/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for access_demux module: 3 candidates main debug: no access_demux module matching "file" could be loaded main debug: TIMER module_need() : 2.332 ms - Total 2.332 ms / 1 intvls (Avg 2.332 ms) main debug: creating access 'file' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac', path='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for access module: 2 candidates filesystem debug: opening file `/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: using access module "filesystem" main debug: TIMER module_need() : 0.762 ms - Total 0.762 ms / 1 intvls (Avg 0.762 ms) main debug: Using stream method for AStream* main debug: starting pre-buffering main debug: received first data after 0 ms main debug: pre-buffering done 1024 bytes in 0s - 43478 KiB/s main debug: looking for stream_filter module: 7 candidates main debug: no stream_filter module matching "any" could be loaded main debug: TIMER module_need() : 0.236 ms - Total 0.236 ms / 1 intvls (Avg 0.236 ms) main debug: looking for stream_filter module: 1 candidate main debug: using stream_filter module "stream_filter_record" main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms) main debug: creating demux: access='file' demux='' location='/home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' file='/home/doug/Music/unsorted/Floex - Machinarium Soundtrack/Floex - Machinarium Soundtrack - 01 The Bottom.flac' main debug: looking for demux module: 54 candidates flacsys debug: Picture type=3 mime=image/png description='' file length=679371 qt4 debug: IM: Setting an input main debug: looking 
for packetizer module: 21 candidates main debug: using packetizer module "packetizer_flac" main debug: TIMER module_need() : 0.211 ms - Total 0.211 ms / 1 intvls (Avg 0.211 ms) main debug: using demux module "flacsys" main debug: TIMER module_need() : 4.023 ms - Total 4.023 ms / 1 intvls (Avg 4.023 ms) main debug: looking for a subtitle file in /home/doug/Music/unsorted/Floex - Machinarium Soundtrack/ main debug: looking for meta reader module: 2 candidates main debug: using meta reader module "taglib" main debug: TIMER module_need() : 5.245 ms - Total 5.245 ms / 1 intvls (Avg 5.245 ms) main debug: removing module "taglib" main debug: `file:///home/doug/Music/unsorted/Floex%20-%20Machinarium%20Soundtrack/Floex%20-%20Machinarium%20Soundtrack%20-%2001%20The%20Bottom.flac' successfully opened main debug: selecting program id=0 main debug: looking for decoder module: 30 candidates main debug: using decoder module "flac" main debug: TIMER module_need() : 0.442 ms - Total 0.442 ms / 1 intvls (Avg 0.442 ms) main debug: Buffering 0% flac debug: decode STREAMINFO flac debug: channels:2 samplerate:44100 bitspersamples:16 flac debug: STREAMINFO decoded main debug: Buffering 30% main debug: recycling audio output main debug: looking for audio output module: 3 candidates main debug: Buffering 61% pulse debug: using stereo channel map pulse debug: using library version 1.1.0 pulse debug: (compiled with version 1.1.0, protocol 26) main debug: Buffering 92% main debug: Stream buffering done (371 ms in 2 ms) pulse debug: connected locally to unix:/home/doug/.pulse/dce22254e867f905188a2ce200000003-runtime/native as client #14 pulse debug: using protocol 26, server protocol 26 pulse debug: using buffer metrics: maxlength=4194304, tlength=9880, prebuf=0, minreq=3528 pulse debug: connected to sink 0: alsa_output.pci-0000_00_14.2.analog-stereo main debug: using audio output module "pulse" main debug: TIMER module_need() : 4.571 ms - Total 4.571 ms / 1 intvls (Avg 4.571 ms) main debug: output 's16l' 44100 Hz Stereo frame=1 samples/4 bytes main debug: mixer 'f32l' 44100 Hz Stereo frame=1 samples/8 bytes main debug: filter(s) 'f32l'->'s16l' 44100 Hz->44100 Hz Stereo->Stereo main debug: looking for audio filter module: 14 candidates audio_format debug: f32l->s16l, bits per sample: 32->16 main debug: using audio filter module "audio_format" main debug: TIMER module_need() : 0.187 ms - Total 0.187 ms / 1 intvls (Avg 0.187 ms) main debug: conversion pipeline completed main debug: looking for audio mixer module: 2 candidates main debug: using audio mixer module "float32_mixer" main debug: TIMER module_need() : 0.125 ms - Total 0.125 ms / 1 intvls (Avg 0.125 ms) main debug: input 's16l' 44100 Hz Stereo frame=1 samples/4 bytes main debug: looking for audio filter module: 1 candidate scaletempo debug: format: 44100 rate, 2 nch, 4 bps, fl32 scaletempo debug: params: 30 stride, 0.200 overlap, 14 search scaletempo debug: 1.000 scale, 1323.000 stride_in, 1323 stride_out, 1059 standing, 264 overlap, 617 search, 2204 queue, fl32 mode main debug: using audio filter module "scaletempo" main debug: TIMER module_need() : 0.233 ms - Total 0.233 ms / 1 intvls (Avg 0.233 ms) main debug: filter(s) 's16l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo pulse debug: listing sink alsa_output.pci-0000_00_14.2.analog-stereo (0): Built-in Audio Analog Stereo main debug: looking for audio filter module: 14 candidates audio_format debug: s16l->f32l, bits per sample: 16->32 main debug: using audio filter module "audio_format" main debug: TIMER 
module_need() : 0.147 ms - Total 0.147 ms / 1 intvls (Avg 0.147 ms) main debug: conversion pipeline completed pulse debug: base volume: 65536 main debug: looking for audio filter module: 1 candidate equalizer debug: equalizer loaded for 44100 Hz with 10 bands 2 pass equalizer debug: 60 Hz -> factor:0.000000 alpha:0.003013 beta:0.993973 gamma:1.993901 equalizer debug: 170 Hz -> factor:0.000000 alpha:0.008490 beta:0.983019 gamma:1.982437 equalizer debug: 310 Hz -> factor:0.000000 alpha:0.015374 beta:0.969252 gamma:1.967331 equalizer debug: 600 Hz -> factor:0.000000 alpha:0.029328 beta:0.941343 gamma:1.934254 equalizer debug: 1000 Hz -> factor:0.000000 alpha:0.047918 beta:0.904163 gamma:1.884869 equalizer debug: 3000 Hz -> factor:0.000000 alpha:0.130408 beta:0.739184 gamma:1.582718 equalizer debug: 6000 Hz -> factor:0.000000 alpha:0.226555 beta:0.546889 gamma:1.015267 equalizer debug: 12000 Hz -> factor:0.000000 alpha:0.344937 beta:0.310127 gamma:-0.181410 equalizer debug: 14000 Hz -> factor:0.000000 alpha:0.366438 beta:0.267123 gamma:-0.521151 equalizer debug: 16000 Hz -> factor:0.000000 alpha:0.379009 beta:0.241981 gamma:-0.808451 main debug: using audio filter module "equalizer" main debug: TIMER module_need() : 0.353 ms - Total 0.353 ms / 1 intvls (Avg 0.353 ms) main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo main debug: conversion pipeline completed main debug: looking for visualization2 module: 1 candidate main debug: looking for text renderer module: 2 candidates freetype debug: Building font databases. freetype debug: Took 0 microseconds freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf freetype debug: using fontsize: 2 main debug: using text renderer module "freetype" main debug: TIMER module_need() : 3.278 ms - Total 3.278 ms / 1 intvls (Avg 3.278 ms) main debug: looking for video filter2 module: 18 candidates swscale debug: 32x32 chroma: YUVA -> 16x16 chroma: RGBA with scaling using Bicubic (good quality) main debug: using video filter2 module "swscale" main debug: TIMER module_need() : 1.037 ms - Total 1.037 ms / 1 intvls (Avg 1.037 ms) main debug: looking for video filter2 module: 18 candidates yuvp debug: YUVP to YUVA converter main debug: using video filter2 module "yuvp" main debug: TIMER module_need() : 0.156 ms - Total 0.156 ms / 1 intvls (Avg 0.156 ms) main debug: Deinterlacing available main debug: deinterlace 0, mode blend, is_needed 0 main debug: Opening vout display wrapper main debug: looking for vout display module: 6 candidates main debug: looking for vout window xid module: 4 candidates qt4 debug: requesting video... 
qt4 debug: Video was requested 0, 0 main debug: using vout window xid module "qt4" main debug: TIMER module_need() : 61.671 ms - Total 61.671 ms / 1 intvls (Avg 61.671 ms) main debug: looking for inhibit module: 2 candidates main debug: using inhibit module "xdg_screensaver" main debug: TIMER module_need() : 0.336 ms - Total 0.336 ms / 1 intvls (Avg 0.336 ms) xdg_screensaver debug: started xdg-screensaver (PID = 6682) xcb_xv debug: connected to X11.0 server xcb_xv debug: vendor : The X.Org Foundation xcb_xv debug: version: 11103000 xcb_xv debug: using screen 0x15a xcb_xv debug: using XVideo extension v2.2 xcb_xv debug: using adaptor NV17 Video Texture xcb_xv debug: using port 310 xcb_xv debug: using image format 0x30323449 xcb_xv debug: using X11 visual ID 0x21 (depth: 24) xcb_xv debug: using X11 window 0x03400000 xcb_xv debug: using X11 graphic context 0x03400002 main debug: VoutDisplayEvent 'fullscreen' 0 main debug: VoutDisplayEvent 'resize' 800x500 window main debug: using vout display module "xcb_xv" main debug: TIMER module_need() : 69.890 ms - Total 69.890 ms / 1 intvls (Avg 69.890 ms) main debug: original format sz 800x500, of (0,0), vsz 800x500, 4cc I420, sar 1:1, msk r0x0 g0x0 b0x0 main debug: removing module "freetype" main debug: looking for text renderer module: 2 candidates freetype debug: Building font databases. freetype debug: Took 0 microseconds freetype debug: Using Serif Bold as font from file /usr/share/fonts/truetype/ttf-dejavu/DejaVuSans.ttf freetype debug: using fontsize: 2 main debug: using text renderer module "freetype" main debug: TIMER module_need() : 4.552 ms - Total 4.552 ms / 1 intvls (Avg 4.552 ms) main debug: using visualization2 module "visual" main debug: TIMER module_need() : 84.104 ms - Total 84.104 ms / 1 intvls (Avg 84.104 ms) main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo main debug: conversion pipeline completed main debug: filter(s) 'f32l'->'f32l' 44100 Hz->44100 Hz Stereo->Stereo main debug: conversion pipeline completed main debug: filter(s) 'f32l'->'f32l' 48510 Hz->44100 Hz Stereo->Stereo main debug: looking for audio filter module: 14 candidates main debug: using audio filter module "samplerate" main debug: TIMER module_need() : 0.375 ms - Total 0.375 ms / 1 intvls (Avg 0.375 ms) main debug: conversion pipeline completed main debug: End of audio preroll main debug: Decoder buffering done in 91 ms main warning: PTS is out of range (-9269), dropping buffer pulse debug: deferring start (190703 us) main debug: looking for video blending module: 1 candidate main debug: using video blending module "blend" main debug: TIMER module_need() : 0.275 ms - Total 0.275 ms / 1 intvls (Avg 0.275 ms) main debug: Detected interlaced video main debug: deinterlace 0, mode blend, is_needed 1 xcb_xv debug: display is visible pulse debug: starting deferred pulse warning: too late by 93760 us pulse debug: changed sample rate to 44186 Hz pulse debug: started pulse warning: too late by 94474 us pulse debug: changed sample rate to 44229 Hz pulse warning: too late by 93532 us pulse debug: changed sample rate to 44272 Hz pulse warning: too late by 92829 us pulse debug: changed sample rate to 44315 Hz pulse warning: too late by 92132 us pulse debug: changed sample rate to 44358 Hz xcb_xv debug: display is visible pulse warning: too late by 91534 us pulse debug: changed sample rate to 44401 Hz xcb_xv debug: display is visible pulse warning: too late by 89482 us pulse debug: changed sample rate to 44440 Hz xcb_xv debug: display is visible xcb_xv 
debug: display is visible pulse warning: too late by 87529 us pulse debug: changed sample rate to 44479 Hz pulse warning: too late by 84577 us pulse debug: changed sample rate to 44504 Hz main debug: auto hiding mouse cursor pulse warning: too late by 78562 us pulse debug: changed sample rate to 44492 Hz pulse warning: too late by 68015 us pulse debug: changed sample rate to 44422 Hz xcb_xv debug: display is visible xcb_xv debug: display is visible xcb_xv debug: display is visible xcb_xv debug: display is visible main debug: auto hiding mouse cursor pulse debug: changed sample rate to 44336 Hz xcb_xv debug: display is visible xcb_xv debug: display is visible xcb_xv debug: display is visible main debug: auto hiding mouse cursor I have had issues with VLC in the past - the audio quality was extremely crackly, as if the headphone jack was plugged in only halfway, and the sounds were extremely sharp and caused my speakers to make a ringing/vibrating noise... It would eventually start working after I messed around with the audio settings, but it happened on every restart. I eventually switched to SMPlayer, but now I need some of the features that VLC offers, yet I still can't use VLC. At this point the audio cannot be heard at all, and the method I used before, messing around with the audio settings, isn't getting me anywhere. (Note: I reposted this on VideoLAN's forums, link is here: http://forum.videolan.org/viewtopic.php?f=13&t=104726) Please let me know if you need more information, or are confused by something I posted! Thanks!

    Read the article

  • Why the version of Chrome matters even less than the version of Firefox, and Firefox's matters less than IE's

    - by anirudha
    Nothing is perfect. Software is made and grows through user feedback: what users expected from it, and what they want in the next version. At a Chrome event I heard about Chromium; you can find some interesting material here: Video 1, Video 2.

Now to the point. When I hear about the good websites coming out of India, many of them say the same small thing: they are #1 not because they waited until they had built a great, finished application, but because they built a small site in a short time, launched it, and kept improving it afterwards. The conclusion they all reach is that software is built on user feedback. They did not wait a long time for the project to be "finished" before launching; they launched early, did their research later, and made the site better as it grew. If you are late, someone else can win even if your project is better than theirs.

A short story: a few months ago I heard about a well-known website that sells many books every day. I bought a few books from them to see how they work and what their service is like, and I found no problem with the service. But when I looked at the site itself, the markup was very badly designed, and they used components that made the site slow; it would have been hard to copy what they were doing well. On their blog they wrote about the feedback mail they receive. A few months later the website looked much better; many things had been improved. The conclusion is the same as in the first story: user feedback.

So, to the browsers: Chrome, Firefox and IE. What they have in common is that they are all browsers; what is different is what they do with feedback. Chrome is one of the best browsers. In the space of a month, many issues found by users are submitted to the Chrome team, and when that feedback reaches someone, they act on it and make the browser better. Secondly, because it is open source, many developers contribute to it and make it a real browser, not a pretend one (IE is a good example of the latter). As you can see in the videos, they talk about silent updates in Chrome and the upcoming Chromium work. That is a very good thing, because the user never has to worry about installing a new version, a problem we still find in plenty of other applications.

Firefox is the best option for development, just as Chrome is great for everyday browsing. Firefox has plugins such as Firebug, add-ons, personas, themes and deep customization that make it a real browser too.

Now to IE. Is IE really great? No. Someone at Microsoft may laugh at this, because they cannot see the power of open source. They think they can build software without user feedback, because they built Windows and everyone uses it.

An example: a few months ago Microsoft shipped Windows Live. The old version of Live Writer was fine and I had no problem with it, but in the 2011 release they changed the whole user interface. The needs are the same and the features are the same, so why should a user spend extra time learning a new interface there was no reason to change? It was not only me; many other people had the same problem with Live 2011 and gave up on it after seeing the badly designed interface. The answer given is "we made it in WPF". What does WPF matter to the user? Users have no problem using a WPF application, but WPF by itself does not make a better interface, and it certainly does not excuse a worse one.

IE9 RC was released on 10 February. But how much user feedback did they take, and how much of it did they act on? There is no answer, because they think about launching software, not about what users want. With Firefox and Chrome, feedback matters because it comes from the public that actually uses the browser, not only from the people who build it. Feedback is an opportunity to make software better and less of a hassle in users' hands, not only in developers' hands. (On a related note, see http://googleblog.blogspot.com/2011/02/microsofts-bing-uses-google-search.html.)

There is not much in IE9 for developers either. MSDN and many other Microsoft sites can keep saying otherwise, but they will never mention Firebug, not even in books. I know a competitor never shows you a competitor's tools; I saw the same thing from Yahoo, whose newsletter once told me to "use IE or Firefox to experience a better Web". I agree about Firefox, and whether they were joking about IE or not, they still forgot Chrome; corporate rules come first, I understand.

So why should we use IE? As a developer I expect customization, and Chrome and Firefox both have it while IE really does not. The IE9 developer tools announcement boasts of three new panels, while Firefox and Chrome get new plugins or upgrades of existing plugins every day. There is also almost no third-party tooling around IE: a version of Firebug exists for IE, but it was made by the Firebug team themselves, not by Microsoft, because many developers wanted it. No plugins, no customization, no personas or themes, and no real update story.

Microsoft launched IE6, then 7, then 8 and now 9, but they do little with user feedback. They still think WPF and nicer colours are enough, they forget to implement the things other browsers already provide, and they focus on pushing users onto new versions of Windows. Firefox (2004) and Chrome (2008) both came after IE, yet they focused on developers first: developer tools, themes, personas and deep customization such as Firefox's about:config, which has no equivalent in IE. This is the point of the title: the version number matters little for Firefox and Chrome, because they ship updates to users as soon as possible (Chrome even updates itself silently, without telling you the old version has gone), while with IE there is a long wait, and what finally arrives was not built on feedback. When you find a bug in Chrome or Firefox, you report it and see it fixed in the next version; with IE, the missing feature simply stays missing. IE9 is the next IE6 for developers.

Conclusion: after reading this whole post you can tell that I hate almost everything about IE, and you may ask why I wrote such a long post about one small, pitiful piece of software, and whether something about IE9 broke my heart. As a developer, try this instead: give some of your time to making Chrome better and some to making Firefox better, and you will feel that your contribution matters, and you will meet other contributors and their thoughts on the same software. Some will be great, some not, but it is not true that no good solution can be made outside Microsoft. What a developer builds matters, whether it uses Microsoft technologies or not.

Answer, in a few lines: IE delays updates as long as it can and forces users to upgrade (IE9 exists first of all to promote Windows), while Chrome and Firefox do not behave that way; their upgrades reach users as soon as possible. So IE is late and user-forced. Thanks for giving me your time to read my blah on, I mean, IE9. Thanks again, Anirudha

    Read the article

  • FMw Diagnostic Framework : Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction There is nothing more frustrating than a problem that "cannot be reproduced". Logs and configuration files have been analysed, but there just isn't enough information to establish the root cause. The issue may be closed, but you are left with the feeling that the problem will raise its ugly head again in the future. Trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework! Diagnostic Framework monitors WebLogic Managed Servers and delivers "Automatic capture of diagnostic data upon first failure". To quote from Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 Diagnosing Problems: "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities." In other words the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as, or immediately after, the problem occurs. The table below shows the type of WebLogic Server issues which fall into the scope of Diagnostic Framework. How to Configure Diagnostic Framework? Depending on your Fusion Middleware product choice you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as Portal / Forms / Reports / Discoverer, Identity Management (OID, OAM, OIM etc.), WebCenter and SOA. Check your WebLogic Domain directory structure. If you have an "adr" sub directory under DOMAIN_HOME/servers/<servername>/ then the JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" sub directory not exist, review the advice given in My Oracle Support article How to Apply FMW ( EM ) Control and JRF to a WebLogic Domain and Managed Servers [ID 947043.1] If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) A couple of useful links about WLDF are listed below: Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g WebLogic Diagnostics Framework-A Very Useful Tool [A nice blog which describes a WLDF use case] How to Get Started With Diagnostic Framework To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning: Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1), Chapter 13 Diagnosing Problems. A lot of reading here, but if you are in a hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below. How to Upload Diagnostic Framework Incident Data to Oracle Support Some Background Information There are three interfaces to the Repository: Enterprise Manager Cloud Control (Support Workbench), WLST (command line) and ADRCI (command line). The Enterprise Manager Cloud Control does provide a nice GUI interface to search, view and package diagnostic framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control.
Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package. EM Cloud Control has it's own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command line tools. Ideally, you would only need to one command line interface, but currently I suggest using both - mainly due to the fact that ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code. Instructions: Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message. 1) Find the incident Note: The managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server the example WLST commands will not work. a) Launch WLST  Note: Use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin) otherwise you will get a syntax error like the one below Traceback (innermost last):  File "<console>", line 1, in ?NameError: listIncidents MW_HOME/oracle_common/common/bin/wlst.sh b) Connect to the managed server or the admin server e.g. wls:/offline> connect('weblogic','welcome1','t3://localhost:7020') c) Run the command wls:/MyDomain/serverConfig> listIncidents() This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use the command wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer' ,server='MyManagedServer') Example output Incident Id     Problem Key              Incident Time         1       DFW-99998 [java.lang.NullPointerException] [oracle.error.simulator.ErrorSimulator.createNullPointerException][errorWebApp_1-0-0-0]        Fri Nov 02 10:38:46 GMT 2012  The piece highlighted in bold is the description you do not see when using the ADRCI 'SHOW INCIDENT' command. Make a note of the incident id. You are ready to move to step 2 2. Package the incident a) Set up the environment - example commands below are for Unix cd <DOMAIN_HOME>/bin . ./setDomainEnv.sh If you want ADRCI to run a Remote Diagnostic Agent collection (recommended) at generate package time, point ORACLE_HOME at oracle_common ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME To prevent ADRCI from running RDA at generate package time, point ORACLE_HOME at WL_HOME/server/adr directory.  ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME b) Launch adrci $WL_HOME/server/adr/adrci c) Set BASE and HOMEPATH adrci> SET BASE /oracle/middleware/user_projects/domains/ mydomain/servers/mymanagedserver/adr adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver d)  Optionally run SHOW INCIDENTS e.g. 
adrci> SHOW INCIDENTS -MODE DETAIL ADR Home = /oracle/middleware/user_projects/domains/mydomain/ servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:***********************************************************************************************************************************INCIDENT INFO RECORD 1**********************************************************   INCIDENT_ID                   1   STATUS                        ready   CREATE_TIME                   2012-11-02 10:38:46.468000 +00:00   PROBLEM_ID                    1   CLOSE_TIME                    <NULL>   FLOOD_CONTROLLED              none   ERROR_FACILITY                DFW   ERROR_NUMBER                  99998   ERROR_ARG1                    <NULL>   ERROR_ARG2                    <NULL>   ERROR_ARG3                    <NULL>   ERROR_ARG4                    <NULL>   ERROR_ARG5                    <NULL>   ERROR_ARG6                    <NULL>   ERROR_ARG7                    <NULL>   ERROR_ARG8                    <NULL>   ERROR_ARG9                    <NULL>   ERROR_ARG10                   <NULL>   ERROR_ARG11                   <NULL>   ERROR_ARG12                   <NULL>   SIGNALLING_COMPONENT          <NULL>   SIGNALLING_SUBCOMPONENT       <NULL>   SUSPECT_COMPONENT             <NULL>   SUSPECT_SUBCOMPONENT          <NULL>   ECID                          5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325   IMPACTS                       01 rows fetched e)  Create a logical package IPS CREATE PACKAGE INCIDENT incident_number e.g. adrci> IPS CREATE PACKAGE INCIDENT 1Created package 1 based on incident id 1, correlation level typical f) Generate the package IPS GENERATE PACKAGE package_number IN path e.g. adrci> IPS GENERATE PACKAGE 1 IN /tmp Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete Note: If the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr 3) Upload the package zip to Oracle Support via your Service Request a) Log into My Oracle Support and locate your Service Request b) Click on "Add Attachments c) And upload the zip file

    Read the article

  • CodePlex Daily Summary for Thursday, June 21, 2012

    CodePlex Daily Summary for Thursday, June 21, 2012Popular ReleasesuComponents: uComponents v3.1.1: Continuing on from 84817, we are proud to announce our 3.1.1 release! The following issues have been resolved: 14640 14696 14704 14724 Please note: This release is not to be confused with the upcoming 80410 (which will support .NET 4.0)MVVM Light Toolkit: V4RTM (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RTM. It includes the DLLs for Windows 8 Release Preview. An updated Nuget package is also available at http://nuget.org/packages/MvvmLightLibs An installer with binaries, snippets and templates will follow ASAP.Weapsy - ASP.NET MVC CMS: 1.0.0: - Some changes to Layout and CSS - Changed version number to 1.0.0.0 - Solved Cache and Session items handler error in IIS 7 - Created the Modules, Plugins and Widgets Areas - Replaced CKEditor with TinyMCE - Created the System Info page - Minor changesAuto Proxy Configuration: Windows Proxy Setup V1.2: Bug FixesXDA ROM Hub: XDA ROM HUB v0.5: Added XRH Backup -- Backup & restore data, system and cache USE AT YOUR OWN RISK! - USE ONLY IN RECOVERY!AcDown????? - AcDown Downloader Framework: AcDown????? v3.11.7: ?? ●AcDown??????????、??、??????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDown?????"????????? ...Apex: Apex 1.4: Apex 1.4Apex 1.4 provides a framework for rapid MVVM development. Download Apex 1.4 to get the core binaries, Visual Studio Extensions, Project Templates, Samples and Documentation. The 1.4 Release provides a vast number of enhancements via the Apex Broker. The Apex Broker is an object that can be used to retrieve models, get the view for a view model and more, much like an IoC container. The new Zune Style application templates for WPF and Silverlight give a great starting point for makin...NShader - HLSL - GLSL - CG - Shader Syntax Highlighter AddIn for Visual Studio: NShader 1.3 - VS2010 + VS2012: This is a small maintenance release to support new VS2012 as well as VS2010. 
This release is also fixing the issue The "Comment Selection" include the first line after the selection If the new NShader version doesn't highlight your shader, you can try to: Remove the registry entry: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\FontAndColors\Cache Remove all lines using "fx" or "hlsl" in file C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\CommonExtensions\Micr...JSON Toolkit: JSON Toolkit 4.0: Up to 2.5x performance improvement in stringify operations Up to 1.7x performance improvement in parse operations Improved error messages when parsing invalid JSON strings Extended support to .Net 2.0, .Net 3.5, .Net 4.0, Silverlight 4, Windows Phone, Windows 8 metro apps and Xbox JSON namespace changed to ComputerBeacon.Json namespaceXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0: System Requirements OS Windows 7 Windows Vista Windows Server 2008 Windows Server 2008 R2 Web Server Internet Information Service 7.0 or above .NET Framework .NET Framework 4.0 WCF Activation feature HTTP Activation Non-HTTP Activation for net.pipe/net.tcp WCF bindings ASP.NET MVC ASP.NET MVC 3.0 Database Microsoft SQL Server 2005 Microsoft SQL Server 2008 Microsoft SQL Server 2008 R2 Additional Deployment Configuration Started Windows Process Activation service Start...ASP.NET REST Services Framework: Release 1.3 - Standard version: The REST-services Framework v1.3 has important functional changes allowing to use complex data types as service call parameters. Such can be mapped to form or query string variables or the HTTP Message Body. This is especially useful when REST-style service URLs with POST or PUT HTTP method is used. Beginning from v1.1 the REST-services Framework is compatible with ASP.NET Routing model as well with CRUD (Create, Read, Update, and Delete) principle. These two are often important when buildin...NanoMVVM: a lightweight wpf MVVM framework: v0.10 stable beta: v0.10 Minor fixes to ui and code, added error example to async commands, separated project into various releases (mainly into logical wholes), removed expression blend satellite assembliesMFCMAPI: June 2012 Release: Build: 15.0.0.1034 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgeMonoGame - Write Once, Play Everywhere: MonoGame 2.5.1: Release Notes The MonoGame team are pleased to announce that MonoGame v2.5.1 has been released. This release contains important bug fixes and minor updates. Recent additions include project templates for iOS and MacOS. The MonoDevelop.MonoGame AddIn also works on Linux. We have removed the dependency on the thirdparty GamePad library to allow MonoGame to be included in the debian/ubuntu repositories. There have been a major bug fix to ensure textures are disposed of correctly as well as some ...????: ????2.0.2: 1、???????????。 2、DJ???????10?,?????????10?。 3、??.NET 4.5(Windows 8)????????????。 4、???????????。 5、??????????????。 6、???Windows 8????。 7、?????2.0.1???????????????。 8、??DJ?????????。Azure Storage Explorer: Azure Storage Explorer 5 Preview 1 (6.17.2012): Azure Storage Explorer verison 5 is in development, and Preview 1 provides an early look at the new user interface and some of the new features. 
Here's what's new in v5 Preview 1: New UI, similar to the new Windows Azure HTML5 portal Support for configuring and viewing storage account logging Support for configuring and viewing storage account monitoring Uses the Windows Azure 1.7 SDK libraries Bug fixesCodename 'Chrometro': Developer Preview: Welcome to the Codename 'Chrometro' Developer Preview! This is the very first public preview of the app. Please note that this is a highly primitive build and the app is not even half of what it is meant to be. The Developer Preview sports the following: 1) An easy to use application setup. 2) The Assistant which simplifies your task of customization. 3) The partially complete Metro UI. 4) A variety of settings 5) A partially complete web browsing experience To get started, download the Ins...KangaModeling: Kanga Modeling 1.0: This is the public release 1.0 of Kanga Modeling. -Cosmos (C# Open Source Managed Operating System): Release 92560: Prerequisites Visual Studio 2010 - Any version including Express. Express users must also install Visual Studio 2010 Integrated Shell runtime VMWare - Cosmos can run on real hardware as well as other virtualization environments but our default debug setup is configured for VMWare. VMWare Player (Free). or Workstation VMWare VIX API 1.11AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: Release Notes New feature added that allows user to select remind later interval.New Projects.NET Heatmap: This is a simple project using C#, JQuery, and heatmap.js that allows you to create a heatmap for a web page using static data from a SQL database.Advanced Data Server: Advanced Data Server (ADS) is a library that enables you to create powerful server applications with little code.ARPAMISproject: This is an ambitious project that aims to provide an extensive business solution to schools, universities or any academic institutions alike. ArraySegments (by Stephen Cleary): Lightweight extension methods for ArraySegment<T>, particularly useful for byte arrays.Auto Proxy Configuration: This Tool sets proxy server automatically according to the DNSDomain.Bongiozzo Photosite: Simple photo site written using ASP.NET MVC 4.0 over Flickr API and Galleria image gallery framework. Cloud Media SharePoint Extension: With this extensions you can easily add media from the cloud like YouTube or Vimeo. Metadata from Vimeo or YouTube are also .and will be added tooContent Organizer Rule Manager SharePoint 2010: Create and manage content organizer rules faster for SharePoint 2010.Crm Customization Manager: Crm Customization Manager (CCM) by N.JL helps Dynamics CRM System Adiminstrators to easly Import Customisations and Treanslations with scheduling possibilitydemoHello: asp.net Ghost Puzzle: Protect your files with Ghost Puzzle.IonoWumpus: A simple Hunt the Wumpus implementation.LargeSky Personal Project: This a personal project, just for code place.Maze Game: Maze Game is a game for children/ computer beginners to practice the mouse movement with fun.NASM Develop IDE (The Open Source NASM Development Environment For Windows): A simple, light weight, all one in application that can help developers develop NASM applications in Windows without the need of remember fancy commands.Navigational: Gator brings navigation to projects built around life situations.Real Folders for Visual Studio: Real Folders for Visual Studio is a free plugin which makes Solution Folders map to real file system folders. 
With Real Folders you have the opportunity to organize your files in a simpler way than standard Visual Studio Solution Folders behave (completely uncommited to any folder on your file system). SaltFx: SaltFx is an N-Layered Domain Driven Design (DDD) framework for .NET development.SharePoint Location-Based Weather Webpart: Uses UPS information to display local weather for the user via the Yahoo Weather service.SharePoint Publishing: The Project is aimed at supplying Publishers with tools & design elements to provide a means of publishing through SharePoint.SpellLight - Lightweight Silverlight Spell Checker Library: SpellLight is a lightweight Silverlight Spell Check librarySUDOKU APP: NoneTrainer: This is an application used for logging runs, nutrition, weight, and other goals.weibospace: An ios projecct

    Read the article

  • OK - What now? How do we become a Social Business?

    - by Michael Snow
    We hope that those of you that attended yesterday's Webcast with Brian Solis enjoyed Brian's discussion with Christian Finn for our last Webcast of the season for the Oracle Social Business Thought Leaders Series.  For those of you that may have missed the webcast or were stuck at a company holiday party - you'll be glad to hear that the webcast will be available On-Demand starting later today (12/14/12). And any of you who'd like to listen to a quick but informative podcast with Brian - can listen to that here. Some of you may still be left with questions about how to get from point A to point B and even more confused than when you started thinking about this new world of Digital Darwinism. The post below, grabbed from an abundance of great thought leadership prose on Brian's blog may help you frame the path you need to start walking sooner versus later to stay off of the endangered species list.  As you explore your path forward, please keep Oracle in mind - we do offer a wide range of solutions to help your organization optimize the engagement for your customers, employees and partners. The Path from a Social Brand to a Social Business Brian Solis Originally posted May 2, 2012 I’ve been a long-time supporter of MediaTemple’s (MT)Residence program along with Gary Vaynerchuk, Neil Patel, and many others whom I respect. I wanted to share my “7 questions to answer to become a social business” with you here.. Social Media is pervasive and is becoming the new normal in corporate marketing. Brands who get this right are starting to build their own media networks rich with customer connections numbering in the millions. Right now, Coca-Cola has over 34 million fans on Facebook, but they’re hardly alone. Disney follows just behind with 29 million fans, Starbucks boasts 25 million, and Oreo, Red Bull, and Converse play host to over 20 million fans. If we were to look at other networks such as Twitter and Youtube, we would see a recurring theme. People are connecting en masse with the businesses they support and new media represents the ability to cultivate consumer relationships in ways not possible with traditional earned or paid media. Sounds great right? This might sound abrupt, but the truth is that we’re hardly realizing the potential of what lies before us. Everything begins with understanding not just how other brands are marketing themselves in social media, but also seeing what they’re not doing and envisioning what’s possible. We’re already approaching the first of many crossroads that new media will present. Do we take the path of a social brand or that of a social business? What’s the difference? A social brand is just that, a business that is remodeling or retrofitting its existing marketing practices to new media.
A social business is something altogether different as it embraces introspection and extrospection to reevaluate internal and external processes, systems, and opportunities to transform into a living, breathing entity that adapts to market conditions and opportunities. It’s a tough decision to make right now especially at a time when all we read about is how much success many businesses are finding without having to answer this very question. With all of the newfound success in social networks, the truth is that we’re only just beginning to learn what’s possible and that’s where you come in. When compared to the investment in time and resources across the board, social media represents only a small part of the mix. But with your help, that’s all about to change. The CMO Survey, an organization that disseminates the opinions of top marketers in order to predict the future of markets, recently published a report that gave credence to the fact that social media is taking off. One of the most profound takeaways from the report was this gem; “The “like button” [in Facebook] packs more customer-acquisition punch than other demand-generating activities.” With insights like this, it’s easy to see why the race to social is becoming heated. The report also highlighted exactly where social fits in the marketing mix today and as you can see, despite all of the hype, it’s not a dominant focus yet. As of August 2011, the percentage of overall marketing budgets dedicated to social media hovered at around 7%. However, in 2012 the investment in social media will climb to 10%. And, in five years, social media is expected to represent almost 18% of the total marketing budget. Think about that for a moment. In 2016, social media will only represent 18%? Queue the sound of a record scratching here. With businesses finding success in social networks, why are businesses failing to realize the true opportunity brought forth by the ability to listen to, connect with, and engage with customers? While there’s value in earning views, driving traffic, and building connections through the 3F’s (friends, fans and followers), success isn’t just defined simply by what really amounts to low-hanging fruit. The truth is that businesses cannot measure what it is they don’t know to value. As a result, innovation in new engagement initiatives is stifled because we’re applying dated or inflexible frameworks to new paradigms. Social media isn’t owned by marketing, but instead the entire organization. This changes everything and makes your role so much more important. It’s up to you to learn how to think outside of the proverbial social media box to see what others don’t, the ability to improve customers experiences through the evolution of a social brand into a social business. Doing so will translate customer insights from what they do and don’t share in social networks into better products, services, and processes. See, customers want something more from their favorite businesses than creative campaigns, viral content, and everyday dialogue in social networks. Customers want to be heard and they want to know that you’re listening. How businesses use social media must remind them that they’re more than just an audience, consumer, or a conduit to “trigger” a desired social effect. Herein lies both the challenge and opportunity of social media. It’s bigger than marketing. It’s also bigger than customer service. 
It’s about building relationships with customers that improve experiences and more importantly, teaches businesses how to re-imagine products and internal processes to better adapt to potential crises and seize new opportunities. When it comes down to it, Twitter, Facebook, Youtube, Foursquare, are all channels for listening, learning, and engaging. It’s what you do within each channel that builds a community around your brand. And, at the end of the day, the value of the community you build counts for everything. It’s important to understand that we cannot assume that these networks simply exist for people to lineup for our marketing messages or promotional campaigns. Nor can we assume that they’re reeling in anticipation for simple dialogue. They want value. They want recognition. They want access to exclusive information and offers. They need direction, answers and resolution. What we’re talking about here is the multidimensional makeup of consumers and how a one-sided approach to social media forces the needs for social media to expand beyond traditional marketing to socialize the various departments, lines of business, and functions to engage based on the nature of the situation or opportunity. In the same CMO study, it was revealed that marketers believe that social media has a long way to go toward integrating into the overall company strategy. On a scale of 1-7, with one being “not integrated at all” and seven being “very integrated,” 22% chose “one.” Critical functions such as service, HR, sales, R&D, product marketing and development, IR, CSR, etc. are either not engaged or are operating social media within a silo disconnected from other efforts or possibilities. The problem is that customers don’t view a company by silo, instead they see one company, one brand, and their experience in social media forms an impression that eventually contributes to their view of your brand. The first step here is to understand business priorities and objectives to assess how social media can be additive in achieving these goals. Additionally, surveying the landscape to determine other areas of interest as its specifically related to your business. • Are customers seeking help or direction? • Who are your most valuable customers and what are they sharing? • How can you use social media to acquire and retain customers? - What ideas are circulating and how can you harness user generated activity and content to innovate or adapt to better meet the needs of customers? - How can you broaden a single customer view to recognize the varying needs of customers and how your organization can organize around each circumstance? - What insights exist based on how consumers are interacting with one another? How can this intelligence inform marketing, service, products and other important business initiatives? - How can your business extend their current efforts to deliver better customer experiences and in turn more effectively unit internal collaboration and communication? Customer demands far exceed the capabilities of the marketing department. While creating a social brand is a necessary endeavor, building a social business is an investment in customer relevance now and over time. Beyond relevance, a social business fosters a culture of change that unites employees and customers and sets a foundation for meaningful and beneficial relationships. Innovation, communication, and creativity are the natural byproducts of engagement and transformation. As a social brand, we are competing for the moment. 
As a social business, we are competing for the future in all that we do today.

    Read the article

  • Code Behaviour via Unit Tests

    - by Dewald Galjaard
    Some four months ago my car started acting up. Symptoms included a sputtering as my car’s computer switched between gears intermittently. Imagine building up speed, then when you reach 80km/h the car magically and mysteriously decides to switch back to third or even second gear. Clearly it was confused! I managed to track down a technician, an expert in his field, to help me out. As he fitted his handheld computer to some hidden port under the dash, he started to explain “These cars are quite intelligent, you know. When they sense something is wrong they run in a restrictive program which probably accounts for how you managed to drive here in the first place...”  I was surprised and thought this was certainly going to be an interesting test drive. The car ran smoothly down the first couple of stretches as the technician ran through routine checks. Then he said “Ok, all looking good. We need to start testing aspects of the gearbox. Inside the gearbox there are a couple of sensors. One of them is a speed sensor which talks to the computer, which in turn will decide which gear to switch to. The restrictive program avoids these sensors altogether and allows the computer to obtain its input from other [non-affected] sources”. Then, as soon as he forced the speed sensor to come back online, the symptoms and ill behaviour re-emerged... What an incredible analogy for getting into a discussion on unit testing software! Besides, I should probably put my ill fortune to some good use, right? This example provides a lot of insight into how and why we should conduct unit tests when writing code. More importantly, it captures what is easily and unfortunately often the most overlooked goal of writing unit tests by those new to the art and those who oppose it alike - the goal of writing unit tests is to test the behaviour of our code under predefined conditions. Although it is very possible to test the intrinsic workings of each and every component in your code, writing several tests for each method in practice will soon prove to be an exhausting and ultimately fruitless exercise given the certain and ever changing nature of business requirements. Consequently it is true and quite possible, whilst conducting proper unit tests, to call any single method several times as you examine and contemplate different scenarios. Let’s write some code to demonstrate what I mean. In my example I make use of the Moq framework and NUnit to create my tests. Truly you can use whatever you’re comfortable with. First we’ll create an ISpeedSensor interface. This is to represent the speed sensor located in the gearbox.  Then we’ll create a Gearbox class which we’ll pass to a constructor when we instantiate an object of type Computer.
All three are described below.

ISpeedSensor.cs

namespace AutomaticVehicle
{
    public interface ISpeedSensor
    {
        int ReportCurrentSpeed();
    }
}

Gearbox.cs

namespace AutomaticVehicle
{
    public class Gearbox
    {
        private ISpeedSensor _speedSensor;

        public Gearbox(ISpeedSensor gearboxSpeedSensor)
        {
            _speedSensor = gearboxSpeedSensor;
        }

        /// <summary>
        /// This method obtains its reading from the speed sensor.
        /// </summary>
        public int ReportCurrentSpeed()
        {
            return _speedSensor.ReportCurrentSpeed();
        }
    }
}

Computer.cs

namespace AutomaticVehicle
{
    public class Computer
    {
        private Gearbox _gearbox;

        public Computer(Gearbox gearbox)
        {
            _gearbox = gearbox; // keep the gearbox so GetCurrentSpeed has something to ask
        }

        public int GetCurrentSpeed()
        {
            return _gearbox.ReportCurrentSpeed();
        }
    }
}

Since this post is about unit testing, that is exactly what we’ll create next. Create a second project in your solution. I called mine AutomaticVehicleTests and I immediately referenced the respective NUnit, Moq and AutomaticVehicle dll’s. We’re going to write a test to examine what happens inside the Computer class.

ComputerTests.cs

using System;
using AutomaticVehicle;
using Moq;
using NUnit.Framework;

namespace AutomaticVehicleTests
{
    [TestFixture]
    public class ComputerTests
    {
        [Test]
        public void Computer_Gearbox_SpeedSensor_DoesThrow()
        {
            // Mock ISpeedSensor in the gearbox and declare it faulty.
            Mock<ISpeedSensor> speedSensor = new Mock<ISpeedSensor>();
            speedSensor.Setup(n => n.ReportCurrentSpeed()).Throws<Exception>();
            Gearbox gearbox = new Gearbox(speedSensor.Object);

            // Create a Computer instance to test its behaviour towards an exception in the gearbox.
            Computer carComputer = new Computer(gearbox);

            // For simplicity let's assume for now the car only travels at 60 km/h.
            Assert.AreEqual(60, carComputer.GetCurrentSpeed());
        }
    }
}

What is happening in this test? We have created a mocked object using the ISpeedSensor interface which we’ve passed to our Gearbox object. Notice that I created the mocked object using an interface, not the implementation. I’ll talk more about this in future posts, but in short I do this to accentuate the fact that I’m not really concerned with how SpeedSensor works internally at this particular point in time. Next I’ve gone ahead and created a scenario where I’ve declared the speed sensor in Gearbox to be faulty by forcing it to throw an exception should we ask Gearbox to report on its current speed. Sneaky, sneaky. This test is a simulation of how things may behave in the real world. Inevitably things break, whether it’s caused by mechanical failure, some logical error on your part, or a fellow developer who didn’t consult the documentation (or the lack thereof) - whether you’re calling a speed sensor, making a call to a database, calling a web service or just trying to write a file to disk. It’s a scenario I’ve created and this test is about how the code within the Computer instance will behave towards any such error as I’ve depicted. Now, if you’ve followed closely, in my final assert method you would have noticed I did something quite unexpected. I might be getting ahead of myself now, but I’m testing to see if the value returned is equal to what I expect it to be under perfect conditions – I’m not testing to see if an error has been thrown!
Why is that? Well, in short this is TDD. Test Driven Development is about first writing your test to define the result we want, then going back and changing the implementation within your class to obtain the desired output (I need to make sure I can drive back to the repair shop, remember?). So let’s go ahead and run our test as is. It fails miserably... Good! Let’s go back to our Computer class and make a small change to the GetCurrentSpeed method.

Computer.cs

public int GetCurrentSpeed()
{
    try
    {
        return _gearbox.ReportCurrentSpeed();
    }
    catch
    {
        // Fall back to the restrictive program; returning its value keeps the method
        // compilable and gives callers a speed even when the sensor misbehaves.
        return RunRestrictiveProgram();
    }
}

This is a simple solution, I know, but it does provide a way to allow for different behaviour. You’re more than welcome to provide an implementation for RunRestrictiveProgram should you feel the need to. It’s not within the scope of this post or related to the point I’m trying to make. What is important is to notice how the focus has shifted in our approach from how things can break - to how things behave when broken.

Happy coding!
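The post deliberately leaves RunRestrictiveProgram unimplemented. Purely as a minimal sketch of one way it could be filled in so that the test above goes green: the fixed 60 km/h fallback and the int return type are assumptions taken from the test's assert, not from the original article.

// Sketch only: one possible completion of Computer.RunRestrictiveProgram.
// The 60 km/h fallback mirrors the value asserted in the test above; it is
// an assumption for illustration, not part of the original article.
private const int RestrictiveProgramSpeed = 60;

private int RunRestrictiveProgram()
{
    // Ignore the faulty speed sensor and report a safe, fixed value,
    // much like the car's "restrictive program" in the opening story.
    return RestrictiveProgramSpeed;
}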

    Read the article

  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: Analyzing the requirements to find the Entry Points and their signatures Designing the functionality to be executed when those Entry Points get triggered Implementing the functionality according to the design aka coding I presume, you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guess that, because it´s “design” not “coding”. And here is, what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what´s doing that needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher unterstandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design an logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design? An example of functional design Let´s assume a program to de-duplicate strings. 
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

C:\>deduplicate "a, b, a, c, d, b, e, c, a"
a, b, c, d, e

...or by clicking on a GUI button. This leads to the Entry Point function getting called. It's the program's main function in the case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable. What then happens is a three-step process:

1. Transform the input data from the user into a request.
2. Call the request handler.
3. Transform the output of the request handler into a tangible result for the user.

Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP).

Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have chosen yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

1. Accept string list from command line
2. De-duplicate
3. Present de-duplicated strings on standard output

And this concrete list of processing steps can easily be transformed into code:

static void Main(string[] args) {
  var input = Accept_string_list(args);
  var output = Deduplicate(input);
  Present_deduplicated_string_list(output);
}

Instead of one big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about the de-duplication, for example. Then just implement what's easy right now, e.g.

private static string Accept_string_list(string[] args) {
  return args[0];
}

private static void Present_deduplicated_string_list(string[] output) {
  var line = string.Join(", ", output);
  Console.WriteLine(line);
}

Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call. And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

De-duplicate
1. Parse the input string into a true list of strings.
2. Register each string in a dictionary/map/set. That way duplicates get cast away.
3. Transform the data structure into a list of unique strings.

Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
But now after this refinement the implementation of each step is easy again:

private static string[] Parse_string_list(string input) {
  return input.Split(',')
              .Select(s => s.Trim())
              .ToArray();
}

private static Dictionary<string, object> Compile_unique_strings(string[] strings) {
  return strings.Aggregate(
    new Dictionary<string, object>(),
    (agg, s) => { agg[s] = null; return agg; });
}

private static string[] Serialize_unique_strings(Dictionary<string, object> dict) {
  return dict.Keys.ToArray();
}

With these three additional functions Main() now looks like this:

static void Main(string[] args) {
  var input = Accept_string_list(args);
  var strings = Parse_string_list(input);
  var dict = Compile_unique_strings(strings);
  var output = Serialize_unique_strings(dict);
  Present_deduplicated_string_list(output);
}

I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

1. Accept string list from command line
2. Parse the input string into a true list of strings.
3. Register each string in a dictionary/map/set. That way duplicates get cast away.
4. Transform the data structure into a list of unique strings.
5. Present de-duplicated strings on standard output

You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But more about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too.

So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-) The above has only scratched the surface.

Introducing Flow Design

Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process, on the other hand, consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1]

In its simplest form a process can be written as a bullet point list of steps, e.g.

- Get data from user
- Output result to user
- Transform data
- Parse data
- Map result for output

Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.?

1. Get data from user
2. Parse data
3. Transform data
4. Map result for output
5. Output result to user

That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know that beforehand without investing some thinking? And how do you do this thinking in a systematic fashion? My recommendation: For any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
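Because each processing step ends up as a small, focused function, unit testing (or test-driving) it is cheap. Here is a minimal sketch of such a test; the article does not prescribe a test framework, so NUnit and the public wrapper class are assumptions for illustration only:

using System.Linq;
using NUnit.Framework;

public static class Deduplication {
  // Same implementation as in the article, made public here so the test can reach it.
  public static string[] Parse_string_list(string input) {
    return input.Split(',')
                .Select(s => s.Trim())
                .ToArray();
  }
}

[TestFixture]
public class Parse_string_list_tests {
  [Test]
  public void Splits_on_commas_and_trims_whitespace() {
    var result = Deduplication.Parse_string_list("a, b ,c");
    CollectionAssert.AreEqual(new[] { "a", "b", "c" }, result);
  }
}

Each of the other small functions can be covered the same way, one tiny test fixture per processing step.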
For more information on this approach, which I call "Informed TDD", read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more in line with the KISS (Keep It Simple, Stupid) principle than returning constants or the other trivial stuff TDD development often starts with.

So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always simple enough to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text has just one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition, a functional design using numbered lists lacks data. It's not visible what the input, output, and state of the processing steps are. That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

Visualizing processes

The building block of the functional design notation is a functional unit. I mostly draw it as a simple labeled shape with an arrow for input and an arrow for output: something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow.

Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate the nitty-gritty details. It just postpones tackling them. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers.

Using functional units as building blocks of functional design, processes can be depicted very easily. Here's how to derive the initial process diagram for the example problem: for each processing step draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer", on the other hand, signifies a data type. If you like, you can also combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent.

Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's going to arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources.

I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as of an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units.

By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2] the nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, or where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism.

How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment. My bet is: if nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units.

That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology. Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

1. Translate an input port to a function.
2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks.

Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words". In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: any number of (word) data items can flow from the functional unit for each input data item. It might be none, or one, or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return, because such a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(string word, Action<string> onNoStopWord) {
  if (...check if not a stop word...)
    onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection, because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen so as not to assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only.

Translating output ports into function pointers helps keep functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

class Filter {
  public void filter_stop_words(string word) {
    if (...check if not a stop word...)
      onNoStopWord(word);
  }

  public event Action<string> onNoStopWord;
}

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much-cited solution including extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split into multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: the first three bullet points, turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines, and here they are. But note the asterisk! It's not outside the brackets but inside.
That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail. Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not... I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls (a sketch of what this might look like follows below). It's easy to understand: it clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines that are too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope.

Fortunately Flow Design automatically leads to code following the Open/Closed Principle (OCP). It's easy to extend it - instead of changing well-tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead about where extensions might need to be made in the future. I just "phase in" the remaining processing step: since neither Extract words nor Reformat knows of its environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation for checking whether this works does not do anything to split long words. The input is just passed on. Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution.[4] How does his solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach.

Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so, because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then... is a difference in LOC that important as long as it's in the same ballpark? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
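The code screenshots from the original post are not reproduced in this excerpt, so here is a minimal sketch of what the word-wrap flow might look like when each functional unit becomes one function. The function names follow the design steps above but are otherwise illustrative assumptions, and Split_long_words is shown in the trivial pass-through form of the second increment:

using System;
using System.Collections.Generic;

static class WordWrapFlow {
  // Entry point function: just a sequence of calls mirroring the flow design.
  public static string WordWrap(string text, int maxLineLength) {
    var words = Extract_words(text);
    var fittingWords = Split_long_words(words, maxLineLength); // "phased in" for increment 2
    var lines = Assemble_lines(fittingWords, maxLineLength);
    return Assemble_text(lines);
  }

  static string[] Extract_words(string text) {
    return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
  }

  // Increment 2 placeholder: pass the words through unchanged.
  static string[] Split_long_words(string[] words, int maxLineLength) {
    return words;
  }

  static string[] Assemble_lines(string[] words, int maxLineLength) {
    var lines = new List<string>();
    var line = "";
    foreach (var word in words) {
      if (line.Length == 0)
        line = word;
      else if (line.Length + 1 + word.Length <= maxLineLength)
        line += " " + word;
      else {
        lines.Add(line);
        line = word;
      }
    }
    if (line.Length > 0) lines.Add(line);
    return lines.ToArray();
  }

  static string Assemble_text(string[] lines) {
    return string.Join("\n", lines);
  }
}

Growing the solution then means replacing the pass-through body of Split_long_words with real splitting logic while the surrounding functions stay untouched, which is exactly the Open/Closed point made above.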
What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen it all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here.

For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps teams become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of them into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with the signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no-brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".

    Read the article

  • CodePlex Daily Summary for Thursday, March 17, 2011

    CodePlex Daily Summary for Thursday, March 17, 2011Popular ReleasesLeage of Legends Masteries Tool: LoLMasterSave_v1.6.1.274: -Addresses resent LoL update that interfered with the way MasterSave sets / reads masteries - Removed Shift windows since some people experiencing issues If your interested in this function i can provide it as small separate tool.ASP.NET Comet Ajax Library (Reverse Ajax - Server Push): ASP.NET Server Push Samples: This package contains 14 sample projects.LogExpert: 1.4 build 4092: TabControl: Tooltip on dropdown list shows full path names now New menu item "Lock instance" in Options menu. Only available when "Allow only one instance" is disabled in the settings. "Lock instance" will temporary enable the single instance mode. The locked instance will receive all new launched files Some NullPtrExceptions fixed (e.g. in the settings dialog) Note: The debug build is identical to the release build. But the debug version writes a log file. It also contains line numbers ...Facebook C# SDK: 5.0.6 (BETA): This is seventh BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. New in this release: Version 5.0.6 is almost completely backward compatible with 4.2.1 and 5.0.3 (BETA) Bug fixes and helpers to simplify many common scenarios For more information about this release see the following blog posts: F...SQLCE Code Generator: Build 1.0.3: New beta of the SQLCE Code Generator. New features: - Generates an IDataRepository interface that contains the generated repository interfaces that represents each table - Visual Studio 2010 Custom Tool Support Custom Tool: The custom tool is called SQLCECodeGenerator. Write this in the Custom Tool field in the Properties Window of an SDF file included in your project, this should create a code-behind file for the generated data access codeDotNetNuke® Community Edition: 06.00.00 CTP: CTP 1 (Build 155) is firmly focused around our conversion to C#. As many people have noted, this is a significant change to the platform and affects all areas of the product. This is one of the driving factors in why we felt it was important to get this release into your hands as soon as possible. We have already done quite a bit of testing on this feature internally and have fixed a number of issues in this area. We also recognize that there are probably still some more bugs to be found ...Kooboo CMS: Kooboo 3.0 RC: Bug fixes Inline editing toolbar positioning for websites with complicate CSS. Inline editing is turned on by default now for the samplesite template. MongoDB version content query for multiple filters. . Add a new 404 page to guide users to login and create first website. Naming validation for page name and datarule name. Files in this download kooboo_CMS.zip: The Kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL,SQLCE, RavenDB ...SQL Monitor - tracking sql server activities: SQL Monitor 3.2: 1. introduce sql color syntax highlighting with http://www.codeproject.com/KB/edit/FastColoredTextBox_.aspxUmbraco CMS: Umbraco 4.7.0: Service release fixing 50+ issues! 
Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. They're available from: Introduction for webmasters: http://umbraco.tv/help-and-support/video-tutorials/getting-started Understand the Umbraco concepts: http://umbraco.tv/help-and-support...ProDinner - ASP.NET MVC EF4 Code First DDD jQuery Sample App: first release: ProDinner is an ASP.NET MVC sample application, it uses DDD, EF4 Code First for Data Access, jQuery and MvcProjectAwesome for Web UI, it has Multi-language User Interface Features: CRUD and search operations for entities Multi-Language User Interface upload and crop Images (make thumbnail) for meals pagination using "more results" button very rich and responsive UI (using Mvc Project Awesome) Multiple UI themes (using jQuery UI themes)BEPUphysics: BEPUphysics v0.15.1: Latest binary release. Version HistoryIronRuby: 1.1.3: IronRuby 1.1.3 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. The main purpose of this release is to sync with IronPython 2.7 release, i.e. to keep the Dynamic Language Runtime that both these languages build on top shareable. This release also fixes a few bugs: 5763 Use...SQL Server PowerShell Extensions: 2.3.2.1 Production: Release 2.3.2.1 implements SQLPSX as PowersShell version 2.0 modules. SQLPSX consists of 13 modules with 163 advanced functions, 2 cmdlets and 7 scripts for working with ADO.NET, SMO, Agent, RMO, SSIS, SQL script files, PBM, Performance Counters, SQLProfiler, Oracle and MySQL and using Powershell ISE as a SQL and Oracle query tool. In addition optional backend databases and SQL Server Reporting Services 2008 reports are provided with SQLServer and PBM modules. See readme file for details.IronPython: 2.7: On behalf of the IronPython team, I'm very pleased to announce the release of IronPython 2.7. This release contains all of the language features of Python 2.7, as well as several previously missing modules and numerous bug fixes. IronPython 2.7 also includes built-in Visual Studio support through IronPython Tools for Visual Studio. IronPython 2.7 requires .NET 4.0 or Silverlight 4. To download IronPython 2.7, visit http://ironpython.codeplex.com/releases/view/54498. Any bugs should be report...XML Explorer: XML Explorer 4.0.2: Changes in 4.0: This release is built on the Microsoft .NET Framework 4 Client Profile. Changed XSD validation to use the schema specified by the XML documents. Added a VS style Error List, double-clicking an error takes you to the offending node. XPathNavigator schema validation finally gives SourceObject (was fixed in .NET 4). Added Namespaces window and better support for XPath expressions in documents with a default namespace. Added ExpandAll and CollapseAll toolbar buttons (in a...Mobile Device Detection and Redirection: 1.0.0.0: Stable Release 51 Degrees.mobi Foundation has been in beta for some time now and has been used on thousands of websites worldwide. We’re now highly confident in the product and have designated this release as stable. We recommend all users update to this version. 
New Capabilities MappingsTo improve compatibility with other libraries some new .NET capabilities are now populated with wurfl data: “maximumRenderedPageSize” populated with “max_deck_size” “rendersBreaksAfterWmlAnchor” populated ...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.3: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager added interactive search for the lookupWPF Inspector: WPF Inspector 0.9.7: New Features in Version 0.9.7 - Support for .NET 3.5 and 4.0 - Multi-inspection of the same process - Property-Filtering for multiple keywords e.g. "Height Width" - Smart Element Selection - Select Controls by clicking CTRL, - Select Template-Parts by clicking CTRL+SHIFT - Possibility to hide the element adorner (over the context menu on the visual tree) - Many bugfixes??????????: All-In-One Code Framework ??? 2011-03-10: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??,????。??????????All-In-One Code Framework ???,??20?Sample!!????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ASP.NET ??: CSASPNETBingMaps VBASPNETRemoteUploadAndDownload CS/VBASPNETSerializeJsonString CSASPNETIPtoLocation CSASPNETExcelLikeGridView ....... Winform??: FTPDownload FTPUpload MultiThreadedWebDownloader...Rawr: Rawr 4.1.0: Note: This release may say 4.0.21 in the version bar. This is a typo and the version is actually 4.1.0, not to be confused with 4.0.10 which was released a while back. Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a Relea...New ProjectsAMWC: AMWCAny Repository Membership Provider: Any Repository Membership is about easing the writing of Custom Membership Providers for ASP.NET. With Any Repository Membership you will simply need Implement an interface with few methods to configure a Data Store. Conectors for SQL Server CE, Azure and MySql Planned.Award Citation Helper: This simple Silverlight application demonstrates key concepts including using isolated storage, Silverlight to HTML communications, observable collections and context menus. It is also useful as a tool for creating award citations for your own use or on the BrainyAwards.com site.Azure Storage Backup: Provides a backup and restore function for a Windows Azure storage account, including Table Storage and Blob Storage.Babil: Babil is aimed to be an open source web based portal for software localization. It will support importing and exporting different formats such as .resx files and GNU gettext. Another goal is to have some basic project management features for managing translators and changes. BuildExplorer: Analyze a TFS2010 build whether it is finished, hanging or in progress.DHM17438: DHM17438Flowmarks Events: Data entry tool for simple time series.Go to Browser - Visual Studio 2010 Extension: This is a visual studio 2010 extension, that goes to a web browser on your repository. In order to specify the url format, you have to go solution context menu > "Go to Browser...". It is also available through Visual Studio Gallery. 
(Comming soon)Hierarchical State Machine Compiler: This projects helps you to easily create and generate hierarchical state machine in .NET. State, transitions, conditions, action, state inheritance Silverlight generation MVVM support iLBC.NET: A port of the iLBC (internet Low Bitrate Codec) speech codec for .NET platform.JAMBE: Simple test project for JA-BulgariaSAY IT WITH CSS! Slide Show implemented as pure CSS3 / HTML 5 solution: The solution demonstrates CSS3 coding technique, which allows to implement online slide show with "darkbox" (or "lightbox") effects using pure CSS3 and HTML5; it does not require any Javascript/jQuery coding.SharePoint Test Data Tool: This project is basically a small window based application to be able to quickly populate items in a list/library. This can be helpful while testing your code bits ( WP's etc ) Shiros: Una idea que empezó en un Japo de Seattle.TwitterOnSQLAzure: Twitter Parser used for 24 Hours of SQL Pass presentation March 2011. Pulls and parses tweet stream, based on parameters, builds database (locally). All features compliant with SQL Azure. Use SQL Azure migration wizard to move data / schema to cloud - or chg conn stringVisio Forward Engineer Addin: Somehow Microsoft decided not to include this feature in 2010 version of Visio. Visio Forward Engineer Addin for Visio 2010, adds the ability to generate the database scripts directly from the database model defined in Visio 2010. It is developed in C#.Winwise Surface & Tablet AR Drone: This is the Surface & Tablet client to pilot a Parrot AR Drone.WNY: Online gaming system that support multiple usersWow Project: Wow Project is a tool for wow users to easy manage quests and items. It's writen in VB.NET or C#

    Read the article

  • CodePlex Daily Summary for Wednesday, March 16, 2011

    CodePlex Daily Summary for Wednesday, March 16, 2011Popular ReleasesuComponents: uComponents v2.1 RTM: What's new in v2.1? 5 new DataTypes JSON Datasource DropDown Multiple Textstring Similarity Text Image XPath DropDownList Please note that the release of DataType Grid has been postponed until v2.2. 3 new XSLT extensions Media Nodes Search Multi-node tree picker updates XPath start node selectors From Global or Relative start node selectors Max & Min node selection limits Bug fixes If you find a bug or have an issue, please raise a ticket here on CodePlex for us and we'l...Facebook C# SDK: 5.0.6 (BETA): This is seventh BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. New in this release: Version 5.0.6 is almost completely backward compatible with 4.2.1 and 5.0.3 (BETA) Bug fixes and helpers to simplify many common scenarios For more information about this release see the following blog posts: F...SQLCE Code Generator: Build 1.0.3: New beta of the SQLCE Code Generator. New features: - Generates an IDataRepository interface that contains the generated repository interfaces that represents each table - Visual Studio 2010 Custom Tool Support Custom Tool: The custom tool is called SQLCECodeGenerator. Write this in the Custom Tool field in the Properties Window of an SDF file included in your project, this should create a code-behind file for the generated data access codeKooboo CMS: Kooboo 3.0 RC: Bug fixes Inline editing toolbar positioning for websites with complicate CSS. Inline editing is turned on by default now for the samplesite template. MongoDB version content query for multiple filters. . Add a new 404 page to guide users to login and create first website. Naming validation for page name and datarule name. Files in this download kooboo_CMS.zip: The Kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL,SQLCE, RavenDB ...SQL Monitor - tracking sql server activities: SQL Monitor 3.2: 1. introduce sql color syntax highlighting with http://www.codeproject.com/KB/edit/FastColoredTextBox_.aspxUmbraco CMS: Umbraco 4.7.0: Service release fixing 50+ issues! Getting Started A great place to start is with our Getting Started Guide: Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. 
They're available from: Introduction for webmasters: http://umbraco.tv/help-and-support/video-tutorials/getting-started Understand the Umbraco concepts: http://umbraco.tv/help-and-support...ProDinner - ASP.NET MVC EF4 Code First DDD jQuery Sample App: first release: ProDinner is an ASP.NET MVC sample application, it uses DDD, EF4 Code First for Data Access, jQuery and MvcProjectAwesome for Web UI, it has Multi-language User Interface Features: CRUD and search operations for entities Multi-Language User Interface upload and crop Images (make thumbnail) for meals pagination using "more results" button very rich and responsive UI (using Mvc Project Awesome) Multiple UI themes (using jQuery UI themes)BEPUphysics: BEPUphysics v0.15.0: BEPUphysics v0.15.0!LiveChat Starter Kit: LCSK v1.1: This release contains couple of new features and bug fixes including: Features: Send chat transcript via email Operator can now invite visitor to chat (pro-active chat request) Bug Fixes: Operator management (Save and Delete) bug fixes Operator Console chat small fixesIronRuby: 1.1.3: IronRuby 1.1.3 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. The main purpose of this release is to sync with IronPython 2.7 release, i.e. to keep the Dynamic Language Runtime that both these languages build on top shareable. This release also fixes a few bugs: 5763 Use...SQL Server PowerShell Extensions: 2.3.2.1 Production: Release 2.3.2.1 implements SQLPSX as PowersShell version 2.0 modules. SQLPSX consists of 13 modules with 163 advanced functions, 2 cmdlets and 7 scripts for working with ADO.NET, SMO, Agent, RMO, SSIS, SQL script files, PBM, Performance Counters, SQLProfiler, Oracle and MySQL and using Powershell ISE as a SQL and Oracle query tool. In addition optional backend databases and SQL Server Reporting Services 2008 reports are provided with SQLServer and PBM modules. See readme file for details.IronPython: 2.7: On behalf of the IronPython team, I'm very pleased to announce the release of IronPython 2.7. This release contains all of the language features of Python 2.7, as well as several previously missing modules and numerous bug fixes. IronPython 2.7 also includes built-in Visual Studio support through IronPython Tools for Visual Studio. IronPython 2.7 requires .NET 4.0 or Silverlight 4. To download IronPython 2.7, visit http://ironpython.codeplex.com/releases/view/54498. Any bugs should be report...XML Explorer: XML Explorer 4.0.2: Changes in 4.0: This release is built on the Microsoft .NET Framework 4 Client Profile. Changed XSD validation to use the schema specified by the XML documents. Added a VS style Error List, double-clicking an error takes you to the offending node. XPathNavigator schema validation finally gives SourceObject (was fixed in .NET 4). Added Namespaces window and better support for XPath expressions in documents with a default namespace. Added ExpandAll and CollapseAll toolbar buttons (in a...Mobile Device Detection and Redirection: 1.0.0.0: Stable Release 51 Degrees.mobi Foundation has been in beta for some time now and has been used on thousands of websites worldwide. We’re now highly confident in the product and have designated this release as stable. We recommend all users update to this version. 
New Capabilities MappingsTo improve compatibility with other libraries some new .NET capabilities are now populated with wurfl data: “maximumRenderedPageSize” populated with “max_deck_size” “rendersBreaksAfterWmlAnchor” populated ...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.3: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager added interactive search for the lookupWPF Inspector: WPF Inspector 0.9.7: New Features in Version 0.9.7 - Support for .NET 3.5 and 4.0 - Multi-inspection of the same process - Property-Filtering for multiple keywords e.g. "Height Width" - Smart Element Selection - Select Controls by clicking CTRL, - Select Template-Parts by clicking CTRL+SHIFT - Possibility to hide the element adorner (over the context menu on the visual tree) - Many bugfixes??????????: All-In-One Code Framework ??? 2011-03-10: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??,????。??????????All-In-One Code Framework ???,??20?Sample!!????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ASP.NET ??: CSASPNETBingMaps VBASPNETRemoteUploadAndDownload CS/VBASPNETSerializeJsonString CSASPNETIPtoLocation CSASPNETExcelLikeGridView ....... Winform??: FTPDownload FTPUpload MultiThreadedWebDownloader...Rawr: Rawr 4.1.0: Note: This release may say 4.0.21 in the version bar. This is a typo and the version is actually 4.1.0, not to be confused with 4.0.10 which was released a while back. Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a Relea...PHP Manager for IIS: PHP Manager 1.1.2 for IIS 7: This is a localization release of PHP Manager for IIS 7. It contains all the functionality available in 56962 plus a few bug fixes (see change list for more details). Most importantly this release is translated into five languages: German - the translation is provided by Christian Graefe Dutch - the translation is provided by Harrie Verveer Turkish - the translation is provided by Yusuf Oztürk Japanese - the translation is provided by Kenichi Wakasa Russian - the translation is provid...TweetSharp: TweetSharp v2.0.0: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Beta ChangesAdded user streams support Serialization is not attempted for Twitter 5xx errors Fixes based on feedback Third Party Library VersionsHammock v1.2.0: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.comNew ProjectsABSC - Automatic Battle System Configurator: ABSC - Automatic Battle System Configurator Este é um aplicativo que auxilia os usuários na configuração de diversos script's de batalha disponíveis atualmente para o Rpg Maker. 
O aplicativo irá gerar um script com toda a configuração especificada pelo usuário.Active Directory Monkey: Designed for IS support teams to easily reset active directoy passwords.Ag-Light: Artesis projectAnito.ORM: Anito ORMblogengine customizations: Customizations for dotnetblogengineCool Tool: Cool Tool is a Visual studio add-in for generating business entities. Generator provides functionality to be able easily and comfortable generate class elements and implement chosen interfaces. Project is developed in VS 2010 (C# 4.0). Enjoy!csmpfit - A Least Squares Optimization Library in C# (C Sharp): A C# port of the C-based mpfit Levenberg Marquardt solver at Argonne National Labs (http://cow.physics.wisc.edu/~craigm/idl/cmpfit.html), including both desktop .NET and Silverlight project libraries.DirectX 11 Framework for Experimentation: Basic Framework for DirectX 11 (without DXUT) containing basic stuffs like Text Rendering, Quad Render, Model Loading, basic Skinning animation, Shader framework (substitute for the effect API) and lots of random stuffs !!!Doanvien code project: DoanVien code projectdtweet - a dashing way of tweeting: dtweet is developed in ASP.NET MVC 3 RTM (Razor) C# JQuery 1.5.1FormMail.NET: This is a project to support emailing form data from a .NET page, and storing that form data in an XML file.LRU Cache: This project implements the LRU Cache using C#. It uses a Dictionary and a LinkedList. Dictionary ensures fast access to the data, and the linkedlist controls which objects are to be removed first.Magelia WebStore Open-source e-commerce software: Magelia WebStore is a customizable, multilingual and multi-currency open-source e-commerce software for the .net environment. WebStore was developped C# and Aspx and only requires an SQL Server Express. MDA.Net: MDA.Net is the .Net/Silverlight port of mda-vst instruments and effects. Currently it includes just the MDA piano and an overdrive with basic interfaces to build on. Just PM me if you have some time to help - we are in no rush, aim to migrate everything one-by-one. MVC NGShop: NGShop ????????????,????Asp.net MVC 2.0 + Jquery + SQL Server 2008, ???Castle Windsor IOC、entity framework,??????visual studio 2010??,??????vs2010,?????js???。Orchard - Photo Albums module: This module allows you to create photo albums with various effects: lightbox, slideshow, etc. PowerShell Workflow: PowerShell Workflow helps organizations to define their operational processes through the power of Workflow and Powershell. Project Dark: Early version of our project. Made in Torque 3dReFSharp: ReFSharp is pretty printer, analyzing tool and translator source code texts between F#, C# and Java. Current version has full functional pretty printer for F# and pre-alpha translator of C# to F#.Relate Intranet Templates: <project name> is an open source package for EPiServer Relate which creates a foundation for an intranet.Service monitor: Service monitor is a simple utility that lets you monitor and manage the states of services of multiple machines. It allows starting/stopping and restarting. It is Windows 7 UAC aware.Sharp Console: Sharp Console is a Windows Command Line (WCL)alternative written in C#. It targets those who lack access to the WCL or simply wish to use the NET framework instead. It aims to provide the same (and more!) functionality as the WCL. 
Contribute anything you feel will make it better!Stone2: New version of Stone, but using TFS for versioningStudent Database Management System: Student Database Management System is a sample Online Web based and also desktop application which helps school to maintain their records free online. this project is open source project and Free available for all schools to check and send their response..

    Read the article

  • Having trouble binding a ksoap object to an ArrayList in Android

    - by Maskau
    I'm working on an app that calls a web service, then the webservice returns an array list. My problem is I am having trouble getting the data into the ArrayList and then displaying in a ListView. Any ideas what I am doing wrong? I know for a fact the web service returns an ArrayList. Everything seems to be working fine, just no data in the ListView or the ArrayList.....Thanks in advance! EDIT: So I added more code to the catch block of run() and now it's returning "org.ksoap2.serialization.SoapObject".....no more no less....and I am even more confused now... package com.maskau; import java.util.ArrayList; import org.ksoap2.SoapEnvelope; import org.ksoap2.serialization.PropertyInfo; import org.ksoap2.serialization.SoapObject; import org.ksoap2.serialization.SoapSerializationEnvelope; import org.ksoap2.transport.AndroidHttpTransport; import android.app.*; import android.os.*; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.ListView; import android.widget.TextView; import android.view.View; import android.view.View.OnClickListener; public class Home extends Activity implements Runnable{ /** Called when the activity is first created. */ public static final String SOAP_ACTION = "http://bb.mcrcog.com/GetArtist"; public static final String METHOD_NAME = "GetArtist"; public static final String NAMESPACE = "http://bb.mcrcog.com"; public static final String URL = "http://bb.mcrcog.com/karaoke/service.asmx"; String wt; public static ProgressDialog pd; TextView text1; ListView lv; static EditText myEditText; static Button but; private ArrayList<String> Artist_Result = new ArrayList<String>(); @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.main); myEditText = (EditText)findViewById(R.id.myEditText); text1 = (TextView)findViewById(R.id.text1); lv = (ListView)findViewById(R.id.lv); but = (Button)findViewById(R.id.but); but.setOnClickListener(new OnClickListener() { @Override public void onClick(View v) { wt = ("Searching for " + myEditText.getText().toString()); text1.setText(""); pd = ProgressDialog.show(Home.this, "Working...", wt , true, false); Thread thread = new Thread(Home.this); thread.start(); } } ); } public void run() { try { SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME); PropertyInfo pi = new PropertyInfo(); pi.setName("ArtistQuery"); pi.setValue(Home.myEditText.getText().toString()); request.addProperty(pi); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope(SoapEnvelope.VER11); envelope.dotNet = true; envelope.setOutputSoapObject(request); AndroidHttpTransport at = new AndroidHttpTransport(URL); at.call(SOAP_ACTION, envelope); java.util.Vector<Object> rs = (java.util.Vector<Object>)envelope.getResponse(); if (rs != null) { for (Object cs : rs) { Artist_Result.add(cs.toString()); } } } catch (Exception e) { // Added this line, throws "org.ksoap2.serialization.SoapObject" when run Artist_Result.add(e.getMessage()); } handler.sendEmptyMessage(0); } private Handler handler = new Handler() { @Override public void handleMessage(Message msg) { ArrayAdapter<String> aa; aa = new ArrayAdapter<String>(Home.this, android.R.layout.simple_list_item_1, Artist_Result); lv.setAdapter(aa); try { if (Artist_Result.isEmpty()) { text1.setText("No Results"); } else { text1.setText("Complete"); myEditText.setText("Search Artist"); } } catch(Exception e) { text1.setText(e.getMessage()); } aa.notifyDataSetChanged(); pd.dismiss(); } }; }

    Read the article

  • SUPER CSV write bean to CSV.

    - by ButtersB
    Here is my class, public class FreebasePeopleResults { public String intendedSearch; public String weight; public Double heightMeters; public Integer age; public String type; public String parents; public String profession; public String alias; public String children; public String siblings; public String spouse; public String degree; public String institution; public String wikipediaId; public String guid; public String id; public String gender; public String name; public String ethnicity; public String articleText; public String dob; public String getWeight() { return weight; } public void setWeight(String weight) { this.weight = weight; } public Double getHeightMeters() { return heightMeters; } public void setHeightMeters(Double heightMeters) { this.heightMeters = heightMeters; } public String getParents() { return parents; } public void setParents(String parents) { this.parents = parents; } public Integer getAge() { return age; } public void setAge(Integer age) { this.age = age; } public String getProfession() { return profession; } public void setProfession(String profession) { this.profession = profession; } public String getAlias() { return alias; } public void setAlias(String alias) { this.alias = alias; } public String getChildren() { return children; } public void setChildren(String children) { this.children = children; } public String getSpouse() { return spouse; } public void setSpouse(String spouse) { this.spouse = spouse; } public String getDegree() { return degree; } public void setDegree(String degree) { this.degree = degree; } public String getInstitution() { return institution; } public void setInstitution(String institution) { this.institution = institution; } public String getWikipediaId() { return wikipediaId; } public void setWikipediaId(String wikipediaId) { this.wikipediaId = wikipediaId; } public String getGuid() { return guid; } public void setGuid(String guid) { this.guid = guid; } public String getId() { return id; } public void setId(String id) { this.id = id; } public String getGender() { return gender; } public void setGender(String gender) { this.gender = gender; } public String getName() { return name; } public void setName(String name) { this.name = name; } public String getEthnicity() { return ethnicity; } public void setEthnicity(String ethnicity) { this.ethnicity = ethnicity; } public String getArticleText() { return articleText; } public void setArticleText(String articleText) { this.articleText = articleText; } public String getDob() { return dob; } public void setDob(String dob) { this.dob = dob; } public String getType() { return type; } public void setType(String type) { this.type = type; } public String getSiblings() { return siblings; } public void setSiblings(String siblings) { this.siblings = siblings; } public String getIntendedSearch() { return intendedSearch; } public void setIntendedSearch(String intendedSearch) { this.intendedSearch = intendedSearch; } } Here is my CSV writer method import java.io.FileWriter; import java.io.IOException; import java.util.ArrayList; import org.supercsv.io.CsvBeanWriter; import org.supercsv.prefs.CsvPreference; public class CSVUtils { public static void writeCSVFromList(ArrayList<FreebasePeopleResults> people, boolean writeHeader) throws IOException{ //String[] header = new String []{"title","acronym","globalId","interfaceId","developer","description","publisher","genre","subGenre","platform","esrb","reviewScore","releaseDate","price","cheatArticleId"}; FileWriter file = new 
FileWriter("/brian/brian/Documents/people-freebase.csv", true); // write the partial data CsvBeanWriter writer = new CsvBeanWriter(file, CsvPreference.EXCEL_PREFERENCE); for(FreebasePeopleResults person:people){ writer.write(person); } writer.close(); // show output } } I keep getting output errors. Here is the error: There is no content to write for line 2 context: Line: 2 Column: 0 Raw line: null Now, I know it is not totally null, so I am confused.
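    A likely cause, going by how Super CSV's bean writer works: write(person) is called with no name mapping, so the writer has no columns to emit, which is exactly what the "There is no content to write" error complains about. Below is a minimal sketch of the writer method with an explicit name mapping passed to write() (and an optional header row). The column list and output path are illustrative assumptions, not the asker's full field set.

        import java.io.FileWriter;
        import java.io.IOException;
        import java.util.ArrayList;
        import org.supercsv.io.CsvBeanWriter;
        import org.supercsv.io.ICsvBeanWriter;
        import org.supercsv.prefs.CsvPreference;

        public class CSVUtils {
            // Name mapping: which bean properties become CSV columns (illustrative subset of the bean).
            private static final String[] NAME_MAPPING = { "name", "age", "dob", "gender", "profession" };

            public static void writeCSVFromList(ArrayList<FreebasePeopleResults> people, boolean writeHeader) throws IOException {
                ICsvBeanWriter writer = null;
                try {
                    writer = new CsvBeanWriter(new FileWriter("people-freebase.csv", true), CsvPreference.EXCEL_PREFERENCE);
                    if (writeHeader) {
                        writer.writeHeader(NAME_MAPPING); // optional header row, kept in sync with the columns
                    }
                    for (FreebasePeopleResults person : people) {
                        // Passing the name mapping gives the writer columns to emit;
                        // omitting it is what produces "There is no content to write".
                        writer.write(person, NAME_MAPPING);
                    }
                } finally {
                    if (writer != null) {
                        writer.close(); // also flushes the underlying FileWriter
                    }
                }
            }
        }

    Wrapping the writer in try/finally also guarantees it is flushed and closed even if one of the writes fails part-way through the list.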

    Read the article

  • MDI WinForm application and duplicate child form memory leak

    - by Steve
    This is a WinForms MDI application problem (.NET Framework 3.0), described in C#. Sorry it is a bit long, because I am trying to make things as clear as possible. I have an MDI application. At some point I found that one MDI child form is never released. There is a menu that creates the MDI child form and shows it. When the MDI child form is closed, it is supposed to be destroyed and the memory taken by it given back to .NET. But to my surprise, this is not true. All the MDI child form instances are kept in memory. This is obviously a "memory leak". Well, it is not a real leak in .NET; it is just that I think the closed form should be dead, but somehow there is at least one unknown reference from the outside world that still connects to the closed form. I read some articles on the Web. Some say that when the MDI child form is closing, I should unwire all the event handlers, otherwise some event handlers may keep my form alive. Some say that DataBindings should be cleaned up before the form closes, otherwise the DataBindings will add references to some global Hashtable and thus keep my form alive. My form contains quite a lot of things: many event handlers, many DataBindings, many BindingSources, and a few suspect controls, including a user control and a HelpProvider. I created a big method that unwires all the event handlers from all the relevant controls and clears all the DataBindings and DataSources. The HelpProvider and user controls are disposed of carefully. In the end, I found that I don't have to clear the DataBindings and DataSources. Event handlers are definitely causing the problem, and the MDI form structure also contributes something. During my experiments, I found that if you create an MDI child form, even if you close it, there will still be one instance in memory. The reference is from the PropertyStore of the main form. This means that unless the main form is closed (the application ends), there will always be one instance of the MDI child form in memory. The good news is that, no matter how many times you open and close the child form, there will be only one instance, not a big "leak". When it comes to event handlers, things become more tricky. I have to point out that all the event handlers on my form are anonymous event handlers. Here is an example: //On MDI child form's design code... Button btnSave = new Button(); btnSave.Click += new System.EventHandler(btnSave_Click); where btnSave_Click is also a method on the MDI child form. The above is always the case for various controls and various types of events. To me, this is a bi-directional circular reference: btnSave keeps a reference to the MDI child form via the event handler, and the MDI child form keeps a reference to the btnSave instance. To me again, such a bi-directional circular reference should not cause any problem for .NET's garbage collector. This means that I should not have to explicitly unwire the event when the form is being disposed: btnSave.Click -= btnSave_Click; But the truth is not so. Some event handlers are safe: ignoring them does not cause any duplicate instances. Some other event handlers will cause one instance to remain in memory (a similar effect to the MDI form structure, but this time caused by the hanging event handlers). Yet other event handlers will cause every instance that was opened to remain in memory. I am totally confused about the differences between these three types of event handlers. The controls are created in the same way and the events are attached in the same way. What is the difference? 
(Don't tell me it is the event handler methods that make the difference.) Does anyone have experience with this weird scenario and an answer for me? Thanks a lot. So now, to be safe, I will have to unwire all the event handlers when the form is being disposed. That will be a long list of similar code for each control. Is there a general way of removing events from controls recursively using reflection? What about the performance impact? That's the end of my story, and I am still in the middle of my problem. For any help, I thank you.

    Read the article

  • How to debug manifest errors?

    - by Rryk
    I am creating an application that depends on a third-party library, which in turn depends on MSVCP90D.dll (it was compiled with debug symbols). The application fails to start and shows an error message: I have found such a library in C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\amd64\Microsoft.VC90.DebugCRT and C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\Debug_NonRedist\x86\Microsoft.VC90.DebugCRT. As you can see, one of them is 64-bit, while the other is 32-bit. When I place the 64-bit one in the application's directory, the application silently crashes while loading (the log from the Visual Studio Output window is below). With the 32-bit one I get another error message: If I press Abort, the program shuts down; Retry breaks into a debug session in the crt0msg.c file. This is a system file and I have no idea how to debug it. If I press Ignore I get yet another error message: So the question is: how do I debug such errors? Please give me some links where I can read more about it, or point out what exactly I should do in such cases. I know this relates to manifest problems -- please give me a good resource where I can read about manifests, since what I have found has confused me even more. This is the log for the 64-bit version of the MSVCP90D.dll library: 'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.exe', Symbols loaded. 'chrome.exe': Loaded 'C:\Windows\SysWOW64\ntdll.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\kernel32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\KernelBase.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\user32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\gdi32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\lpk.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\usp10.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\msvcrt.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\advapi32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\sechost.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\rpcrt4.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\sspicli.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\cryptbase.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\shell32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\shlwapi.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\winmm.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\version.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\psapi.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\imm32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\msctf.dll', Symbols loaded (source information stripped). 
'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.dll', Symbols loaded. 'chrome.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\oleaut32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\comctl32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\oleacc.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\opengl32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\glu32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\ddraw.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\dciman32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\setupapi.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\cfgmgr32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\devobj.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\dwmapi.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'C:\Windows\SysWOW64\secur32.dll', Symbols loaded (source information stripped). 'chrome.exe': Loaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\rtsg2\bin\RTSG2.dll', Symbols loaded. 'chrome.exe': Unloaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\chromium-xml3d-rtsg2\chrome.dll' 'chrome.exe': Unloaded 'D:\Projects\Chromium\devenv\install\build-msvc-debug\rtsg2\bin\RTSG2.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\secur32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\opengl32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\ddraw.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\dwmapi.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\setupapi.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\devobj.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\cfgmgr32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\dciman32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\glu32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\oleacc.dll' 'chrome.exe': Unloaded 'C:\Windows\winsxs\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16385_none_421189da2b7fabfc\comctl32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\oleaut32.dll' 'chrome.exe': Unloaded 'C:\Windows\SysWOW64\ole32.dll' 'chrome.exe': Loaded 'C:\Windows\SysWOW64\ole32.dll', Symbols loaded (source information stripped). The program '[1152] chrome.exe: Native' has exited with code 9 (0x9).

    Read the article

  • "Expected initializer before '<' token" in header file

    - by Sarah
    I'm pretty new to programming and am generally confused by header files and includes. I would like help with an immediate compile problem and would appreciate general suggestions about cleaner, safer, slicker ways to write my code. I'm currently repackaging a lot of code that used to be in main() into a Simulation class. I'm getting a compile error with the header file for this class. I'm compiling with gcc version 4.2.1. // Simulation.h #ifndef SIMULATION_H #define SIMULATION_H #include <cstdlib> #include <iostream> #include <cmath> #include <string> #include <fstream> #include <set> #include <boost/multi_index_container.hpp> #include <boost/multi_index/hashed_index.hpp> #include <boost/multi_index/member.hpp> #include <boost/multi_index/ordered_index.hpp> #include <boost/multi_index/mem_fun.hpp> #include <boost/multi_index/composite_key.hpp> #include <boost/shared_ptr.hpp> #include <boost/tuple/tuple_comparison.hpp> #include <boost/tuple/tuple_io.hpp> #include "Parameters.h" #include "Host.h" #include "rng.h" #include "Event.h" #include "Rdraws.h" typedef multi_index_container< // line 33 - first error boost::shared_ptr< Host >, indexed_by< hashed_unique< const_mem_fun<Host,int,&Host::getID> >, // 0 - ID index ordered_non_unique< tag<age>,const_mem_fun<Host,int,&Host::getAgeInY> >, // 1 - Age index hashed_non_unique< tag<household>,const_mem_fun<Host,int,&Host::getHousehold> >, // 2 - Household index ordered_non_unique< // 3 - Eligible by age & household tag<aeh>, composite_key< Host, const_mem_fun<Host,int,&Host::getAgeInY>, const_mem_fun<Host,bool,&Host::isEligible>, const_mem_fun<Host,int,&Host::getHousehold> > >, ordered_non_unique< // 4 - Eligible by household (all single adults) tag<eh>, composite_key< Host, const_mem_fun<Host,bool,&Host::isEligible>, const_mem_fun<Host,int,&Host::getHousehold> > >, ordered_non_unique< // 5 - Household & age tag<ah>, composite_key< Host, const_mem_fun<Host,int,&Host::getHousehold>, const_mem_fun<Host,int,&Host::getAgeInY> > > > // end indexed_by > HostContainer; typedef std::set<int> HHSet; class Simulation { public: Simulation( int sid ); ~Simulation(); // MEMBER FUNCTION PROTOTYPES void runDemSim( void ); void runEpidSim( void ); void ageHost( int id ); int calcPartnerAge( int a ); void executeEvent( Event & te ); void killHost( int id ); void pairHost( int id ); void partner2Hosts( int id1, int id2 ); void fledgeHost( int id ); void birthHost( int id ); void calcSI( void ); double beta_ij_h( int ai, int aj, int s ); double beta_ij_nh( int ai, int aj, int s ); private: // SIMULATION OBJECTS double t; double outputStrobe; int idCtr; int hholdCtr; int simID; RNG rgen; HostContainer allHosts; // shared_ptr to Hosts - line 102 - second error HHSet allHouseholds; int numInfecteds[ INIT_NUM_AGE_CATS ][ INIT_NUM_STYPES ]; EventPQ currentEvents; // STREAM MANAGEMENT void writeOutput(); void initOutput(); void closeOutput(); std::ofstream ageDistStream; std::ofstream ageDistTStream; std::ofstream hhDistStream; std::ofstream hhDistTStream; std::string ageDistFile; std::string ageDistTFile; std::string hhDistFile; std::string hhDistTFile; }; #endif I'm hoping the other files aren't so relevant to this problem. When I compile with g++ -g -o -c a.out -I /Applications/boost_1_42_0/ Host.cpp Simulation.cpp rng.cpp main.cpp Rdraws.cpp I get Simulation.h:33: error: expected initializer before '<' token Simulation.h:102: error: 'HostContainer' does not name a type and then a bunch of other errors related to not recognizing the HostContainer. 
It seems like I have all the right Boost #includes for the HostContainer to be understood. What else could be going wrong? I would appreciate immediate suggestions, troubleshooting tips, and other advice about my code. My plan is to create a "HostContainer.h" file that includes the typedef and structs that define its tags, similar to what I'm doing in "Event.h" for the EventPQ container. I'm assuming this is legal and good form.

    Read the article

  • Why is my WCF RIA Services custom object deserializing with an extra list member?

    - by oasasaurus
    I have been developing a Silverlight WCF RIA Services application dealing with mock financial transactions. To more efficiently send summary data to the client without going overboard with serialized entities I have created a summary class that isn’t in my EDM, and figured out how to serialize and send it over the wire to the SL client using DataContract() and DataMember(). Everything seemed to be working out great, until I tried to bind controls to a list inside my custom object. The list seems to always get deserialized with an extra, almost empty entity in it that I don’t know how to get rid of. So, here are some of the pieces. First the relevant bits from the custom object class: <DataContract()> _ Public Class EconomicsSummary Public Sub New() RecentTransactions = New List(Of Transaction) TotalAccountHistory = New List(Of Transaction) End Sub Public Sub New(ByVal enUser As EntityUser) Me.UserId = enUser.UserId Me.UserName = enUser.UserName Me.Accounts = enUser.Accounts Me.Jobs = enUser.Jobs RecentTransactions = New List(Of Transaction) TotalAccountHistory = New List(Of Transaction) End Sub <DataMember()> _ <Key()> _ Public Property UserId As System.Guid <DataMember()> _ Public Property NumTransactions As Integer <DataMember()> _ <Include()> _ <Association("Summary_RecentTransactions", "UserId", "User_UserId")> _ Public Property RecentTransactions As List(Of Transaction) <DataMember()> _ <Include()> _ <Association("Summary_TotalAccountHistory", "UserId", "User_UserId")> _ Public Property TotalAccountHistory As List(Of Transaction) End Class Next, the relevant parts of the function called to return the object: Public Function GetEconomicsSummary(ByVal guidUserId As System.Guid) As EconomicsSummary Dim objOutput As New EconomicsSummary(enUser) For Each objTransaction As Transaction In (From t As Transaction In Me.ObjectContext.Transactions.Include("Account") Where t.Account.aspnet_User_UserId = guidUserId Select t Order By t.TransactionDate Descending Take 10) objTransaction.User_UserId = objOutput.UserId objOutput.RecentTransactions.Add(objTransaction) Next objOutput.NumTransactions = objOutput.RecentTransactions.Count … Return objOutput End Function Notice that I’m collecting the NumTransactions count before serialization. Should be 10 right? It is – BEFORE serialization. 
The DataGrid is bound to the data source as follows: <sdk:DataGrid AutoGenerateColumns="False" Height="100" MaxWidth="{Binding ElementName=aciSummary, Path=ActualWidth}" ItemsSource="{Binding Source={StaticResource EconomicsSummaryRecentTransactionsViewSource}, Mode=OneWay}" Name="gridRecentTransactions" RowDetailsVisibilityMode="VisibleWhenSelected" IsReadOnly="True"> <sdk:DataGrid.Columns> <sdk:DataGridTextColumn x:Name="TransactionDateColumn" Binding="{Binding Path=TransactionDate, StringFormat=\{0:d\}}" Header="Date" Width="SizeToHeader" /> <sdk:DataGridTextColumn x:Name="AccountNameColumn" Binding="{Binding Path=Account.Title}" Header="Account" Width="SizeToCells" /> <sdk:DataGridTextColumn x:Name="CurrencyAmountColumn" Binding="{Binding Path=CurrencyAmount, StringFormat=\{0:c\}}" Header="Amount" Width="SizeToHeader" /> <sdk:DataGridTextColumn x:Name="TitleColumn" Binding="{Binding Path=Title}" Header="Description" Width="SizeToCells" /> <sdk:DataGridTextColumn x:Name="ItemQuantityColumn" Binding="{Binding Path=ItemQuantity}" Header="Qty" Width="SizeToHeader" /> </sdk:DataGrid.Columns> </sdk:DataGrid> You might be wondering where the ItemsSource is coming from, that looks like this: <CollectionViewSource x:Key="EconomicsSummaryRecentTransactionsViewSource" Source="{Binding Path=DataView.RecentTransactions, ElementName=EconomicsSummaryDomainDataSource}" /> When I noticed that the DataGrid had the extra row I tried outputting some data after the data source finishes loading, as follows: Private Sub EconomicsSummaryDomainDataSource_LoadedData(ByVal sender As System.Object, ByVal e As System.Windows.Controls.LoadedDataEventArgs) Handles EconomicsSummaryDomainDataSource.LoadedData If e.HasError Then System.Windows.MessageBox.Show(e.Error.ToString, "Load Error", System.Windows.MessageBoxButton.OK) e.MarkErrorAsHandled() End If Dim objSummary As EconomicsSummary = CType(EconomicsSummaryDomainDataSource.Data(0), EconomicsSummary) Dim sb As New StringBuilder("") sb.AppendLine(String.Format("Num Transactions: {0} ({1})", objSummary.RecentTransactions.Count.ToString(), objSummary.NumTransactions.ToString())) For Each objTransaction As Transaction In objSummary.RecentTransactions sb.AppendLine(String.Format("Recent TransactionId {0} dated {1} CurrencyAmount {2} NewBalance {3}", objTransaction.TransactionId.ToString, objTransaction.TransactionDate.ToString("d"), objTransaction.CurrencyAmount.ToString("c"), objTransaction.NewBalance.ToString("c"))) Next txtDebug.Text = sb.ToString() End Sub Output from that looks like this: Num Transactions: 11 (10) Recent TransactionId 2283 dated 6/1/2010 CurrencyAmount $31.00 NewBalance $392.00 Recent TransactionId 2281 dated 5/31/2010 CurrencyAmount $33.00 NewBalance $361.00 Recent TransactionId 2279 dated 5/28/2010 CurrencyAmount $8.00 NewBalance $328.00 Recent TransactionId 2277 dated 5/26/2010 CurrencyAmount $22.00 NewBalance $320.00 Recent TransactionId 2275 dated 5/24/2010 CurrencyAmount $5.00 NewBalance $298.00 Recent TransactionId 2273 dated 5/21/2010 CurrencyAmount $19.00 NewBalance $293.00 Recent TransactionId 2271 dated 5/20/2010 CurrencyAmount $20.00 NewBalance $274.00 Recent TransactionId 2269 dated 5/19/2010 CurrencyAmount $48.00 NewBalance $254.00 Recent TransactionId 2267 dated 5/18/2010 CurrencyAmount $42.00 NewBalance $206.00 Recent TransactionId 2265 dated 5/14/2010 CurrencyAmount $5.00 NewBalance $164.00 Recent TransactionId 0 dated 6/1/2010 CurrencyAmount $0.00 NewBalance $361.00 So I have a few different questions: -First and foremost, where 
the devil is that extra Transaction entity coming from, and how do I get rid of it? Does it have anything to do with the other list of Transaction entities being serialized as part of the EconomicsSummary class (TotalAccountHistory)? Do I need to decorate the EconomicsSummary class members a little more, or differently? -Second, where are the peculiar values on that extra entity coming from? PRE-POSTING UPDATE 1: I did a little checking; it looks like that last entry is the first one in the TotalAccountHistory list. Do I need to do something with CollectionDataContract()? PRE-POSTING UPDATE 2: I fixed one bug in TotalAccountHistory: since the objects weren't coming from the database, their keys weren't unique. So I set the keys on the Transaction entities inside TotalAccountHistory to be unique, and guess what? Now, after deserialization, RecentTransactions contains all its original items plus every item in TotalAccountHistory. I'm pretty sure this has to do with the deserializer getting confused by two collections of the same type, but I don't yet know how to resolve it…

    Read the article

  • jQuery hover not working properly in browsers other than IE6

    - by Kranthi
    Hi All, we developed a navigation bar using jQuery 1.4.2. The functionality is to show submenus for the different menu items when the user hovers over them. It works perfectly in IE6, but we see some weird problems in other browsers. In Firefox, when the page first loads it works fine, but when we hit F5 the submenu won't appear on hover; to get the submenu we have to click on some other menu item. In Chrome it is the same, and in addition the submenu sometimes won't show up even when we click on a menu item and then hover. In Safari, nothing shows up most of the time, but after clicking 5-6 menu items the submenu is shown. When we see the loading text in Safari it shows the submenu, but the loading text doesn't appear on every click. We are very confused: is it the browser behavior, our code, or jQuery? Below is the snippet: Html: <ul> <li><a class="mainLinks" href="google.com">Support</a> <ul><li>Sublink1</li></ul> </ul> The HTML code is absolutely fine. Jquery: var timeout = null; var ie = (document.all) ? true : false; $(document).ready(function(){ var $mainLink = null; var $subLink = null; $(".mainLinks").each(function(){ if ($(this).hasClass("current")) { $(this).mouseout(function() { var $this = $(this); timeout = setTimeout(function() { $(".popUpNav", $this.parent()).css({ visibility : 'hidden' }); $('.popUpArrow').hide(); ieCompat('show'); }, 200); }); } else { $(this).hover(function() { reset(); ieCompat('hide'); // Saving this for later use in the popUpNav hover event $mainLink = $(this); $popUpNav = $(".popUpNav", $mainLink.parent()); // Default width is width of one column var popupWidth = $('.popUpNavSection').width() + 20; // Calculate popup width depending on the number of columns var numColumns = $popUpNav.find('.popUpNavSection').length; if (numColumns != 0) { popupWidth *= numColumns; } var elPos = $mainLink.position(); var leftOffset = 0; if (elPos.left + popupWidth > 950) { leftOffset = elPos.left + popupWidth - 948; } $popUpNav.css({ top : elPos.top + 31 + 'px', left : elPos.left - leftOffset + 'px', visibility : 'visible', width : popupWidth + 'px' }); $('.popUpArrow').css({ left : elPos.left + Math.round(($mainLink.width() / 2)) + 20 + 'px', top : '27px' }).show(); }, function() { var $this = $(this); timeout = setTimeout(function() { $(".popUpNav", $this.parent()).css({ visibility : 'hidden' }); $('.popUpArrow').hide() ieCompat('show'); }, 200); } ); } }); $(".subLinks").hover( function(e) { $subLink = $(this); var elPos = $subLink.position(); var popupWidth = $(".popUpNavLv2",$subLink.parent()).width(); var leftOffset = 0; ieCompat('hide'); $(".popUpNavLv2",$subLink.parent()).css({ top : elPos.top + 32 + 'px', left : elPos.left - leftOffset + 'px', visibility : 'visible' }); }, function() { var $this = $(this); timeout = setTimeout(function() { $(".popUpNavLv2", $this.parent()).css({ visibility : 'hidden' }); }, 200); ieCompat('show'); } ); $('.popUpNav').hover( function() { clearTimeout(timeout); $mainLink.addClass('current'); $(this).css('visibility', 'visible'); $('.popUpArrow').show(); }, function() { $mainLink.removeClass('current'); $(this).css('visibility', 'hidden'); $('.popUpArrow').hide(); ieCompat('show'); } ); $('.popUpNavLv2').hover( function() { clearTimeout(timeout); $(this).css('visibility', 'visible'); ieCompat('hide'); }, function() { ieCompat('show'); $(this).css('visibility', 'hidden'); } ); // If on mac, reduce left padding on the tabs if (/mac os x/.test(navigator.userAgent.toLowerCase())) { $('.mainLinks, .mainLinksHome').css('padding-left', '23px'); } }); Thanks a lot in advance for looking 
into it. Thanks | Kranthi

    Read the article

  • BlackBerry/J2ME - SAX parse collection of objects with attributes

    - by Changqi Guo
    I have a problem with using the SAX parser to parse a XML file. It is a complex XML file, it is like the following. <Objects> <Object no="1"> <field name="PID">ilives:87877</field> <field name="dc.coverage">Charlottetown</field> <field name="fgs.ownerId">fedoraAdmin</field> </Object> <Object no="2">...... I am confused how to get the names in each field, and how to store the information of each object. import java.util.Enumeration; import java.util.Hashtable; public class XMLObject { private Hashtable mFields = new Hashtable(); private int mN = -1; public int getN() { return mN; } public void setN(int n) { mN = n; } public String getStringField(String key) { return (String) mFields.get(key); } public void setStringField(String key, String value) { mFields.put(key, value); } public String getPID() { return getStringField("PID"); } public void setPID(String pid) { setStringField("PID", pid); } public String getDcCoverage() { return getStringField("dc.coverage"); } public void setDcCoverage(String dcCoverage) { setStringField("dc.coverage", dcCoverage); } public String getFgsOwnerId() { return getStringField("fgs.ownerId"); } public void setFgsOwnerId(String fgsOwnerId) { setStringField("fgs.ownerId", fgsOwnerId); } public String dccreator() { return getStringField("dc.creator"); } public void dccreator(String dccreator) { setStringField("dc.creator", dccreator); } public String getdcformat() { return getStringField("dc.format"); } public void setdcformat(String dcformat) { setStringField("dc.format", dcformat); } public String getdcidentifier() { return getStringField("dc.identifier"); } public void setdcidentifier(String dcidentifier) { setStringField("dc.identifier", dcidentifier); } public String getdclanguage() { return getStringField("dc.language"); } public void setdclanguage(String dclanguage) { setStringField("dc.language", dclanguage); } public String getdcpublisher() { return getStringField("dc.publisher"); } public void setdcpublisher(String dcpublisher) { setStringField("dc.publisher",dcpublisher); } public String getdcsubject() { return getStringField("dc.subject"); } public void setdcsubject(String dcsubject) { setStringField("dc.subject",dcsubject); } public String getdctitle() { return getStringField("dc.title"); } public void setdctitle(String dctitle) { setStringField("dc.title",dctitle); } public String getdctype() { return getStringField("dc.type"); } public void setdctype(String dctype) { setStringField("dc.type",dctype); } public String toString() { StringBuffer sb = new StringBuffer(); sb.append("N:"+mN+";"); Enumeration keys = mFields.keys(); while (keys.hasMoreElements()) { String key = (String) keys.nextElement(); sb.append(key+":"+mFields.get(key)+";"); } return sb.toString(); } } i used the same handler class you provided import java.io.*; import net.rim.device.api.system.Bitmap; import javax.xml.parsers.ParserConfigurationException; import javax.xml.parsers.SAXParser; import javax.xml.parsers.SAXParserFactory; import java.io.InputStream; import net.rim.device.api.ui.component.*; import net.rim.device.api.ui.container.MainScreen; import net.rim.device.api.xml.parsers.*; import org.w3c.dom.*; import org.xml.sax.*; import org.xml.sax.helpers.DefaultHandler; public class xmlparsermainscreen extends MainScreen{ private static String xmlres = "/xml/xml1.xml"; private RichTextField textOutputField; public xmlparsermainscreen() throws ParserConfigurationException, net.rim.device.api.xml.parsers.ParserConfigurationException, IOException { InputStream inputStream = 
getClass().getResourceAsStream(xmlres); ByteArrayOutputStream baos = new ByteArrayOutputStream(); byte[] buffer = new byte[10000]; int bytesRead = inputStream.read(buffer); while (bytesRead > 0) { baos.write(buffer, 0, bytesRead); bytesRead = inputStream.read(buffer); } baos.close(); String result=baos.toString(); ByteArrayInputStream bais = new ByteArrayInputStream(result.getBytes()); XMLObject[] xmlObjects = getXMLObjects(bais); for (int i = 0; i < xmlObjects.length; i++) { XMLObject o = xmlObjects[i]; textOutputField = new RichTextField(); add(textOutputField); textOutputField.setText(o.toString()); // add(new LabelField(o.toString())); } LabelField resultdis=new LabelField("resultdisplay"); add(resultdis); //textOutputField = new RichTextField(); //add(textOutputField); //textOutputField.setText(result); } static XMLObject[] getXMLObjects(InputStream is) throws ParserConfigurationException { XMLObjectHandler xmlObjectHandler = new XMLObjectHandler(); try { SAXParser parser = SAXParserFactory.newInstance() .newSAXParser(); parser.parse(is, xmlObjectHandler); } catch (ParserConfigurationException e) { e.printStackTrace(); } catch (SAXException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } return xmlObjectHandler.getXMLObjects(); } } import java.io.IOException; import javax.xml.parsers.ParserConfigurationException; import net.rim.device.api.ui.UiApplication; public class xmlparser extends UiApplication { private xmlparser() throws ParserConfigurationException, net.rim.device.api.xml.parsers.ParserConfigurationException, IOException { pushScreen( new xmlparsermainscreen() ); } public static void main( String[] args ) throws ParserConfigurationException, net.rim.device.api.xml.parsers.ParserConfigurationException, IOException { new xmlparser().enterEventDispatcher(); } }
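    The XMLObjectHandler referenced above is not shown in the question. As a rough illustration of that missing piece (an assumption, not the asker's actual class), a SAX handler for this format can read the name attribute in startElement, accumulate the element text in characters, and store the name/value pair on the current XMLObject in endElement:

        import java.util.Vector;
        import org.xml.sax.Attributes;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        public class XMLObjectHandler extends DefaultHandler {
            private Vector objects = new Vector();          // finished XMLObject instances
            private XMLObject current;                      // the <Object> currently being built
            private String fieldName;                       // "name" attribute of the current <field>
            private StringBuffer text = new StringBuffer(); // text content of the current <field>

            public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {
                if ("Object".equals(qName)) {
                    current = new XMLObject();
                    String no = attributes.getValue("no");
                    if (no != null) {
                        current.setN(Integer.parseInt(no));
                    }
                } else if ("field".equals(qName)) {
                    fieldName = attributes.getValue("name"); // e.g. "PID", "dc.coverage"
                    text.setLength(0);
                }
            }

            public void characters(char[] ch, int start, int length) {
                text.append(ch, start, length); // element text may arrive in several chunks
            }

            public void endElement(String uri, String localName, String qName) throws SAXException {
                if ("field".equals(qName) && current != null && fieldName != null) {
                    current.setStringField(fieldName, text.toString().trim());
                    fieldName = null;
                } else if ("Object".equals(qName) && current != null) {
                    objects.addElement(current);
                    current = null;
                }
            }

            public XMLObject[] getXMLObjects() {
                XMLObject[] result = new XMLObject[objects.size()];
                objects.copyInto(result);
                return result;
            }
        }

    Note that whether the element name arrives in qName or localName depends on how the SAXParserFactory is configured; if one of them comes through empty on the device, fall back to the other.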

    Read the article

< Previous Page | 204 205 206 207 208 209 210  | Next Page >