Search Results

Search found 1292 results on 52 pages for 'martin klier'.


  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random data generation via SQL Server and decided it was not a good solution -- SQL is not good at random on a per-row basis. Generating the random data -- 975k rows of it -- takes a minimal amount of time. It's in a List of custom objects. I need to take this random data and update many rows in the database with it. I tried updating the rows one at a time, which was very slow because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data. I.e. UPDATE t SET t.Address1=d.Address1 FROM Table1 t INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID. The database is very un-normalized, so this Acct data is sprinkled all over the place, and I've got no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts: USE TheDatabase Insert tmp_RandomizedData SELECT 1,'4392 EIGHTH AVE','','JENNIFER CARTER','BARBARA CARTER' UNION ALL SELECT 2,'2168 MAIN ST','HNGR F','DANIEL HERNANDEZ','SUSAN MARTIN' -- etc. another 98 times... -- FYI, this is not real data! I'm building this INSERT script in batches of 100. It's taking on average 175 ms to run each insert. Does this seem like a long time? It's going to take about 35 minutes to run the whole insert. The table doesn't have a primary key or any indexes. I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?
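    A common way to make this dramatically faster (my suggestion, not from the question) is to skip the scripted INSERTs entirely and stream the in-memory list into the staging table with SqlBulkCopy. The sketch below assumes a connection string and a hypothetical RandomRow type matching the five columns:

    using System.Data;
    using System.Data.SqlClient;

    // Sketch: bulk-load the in-memory random rows into tmp_RandomizedData.
    var table = new DataTable();
    table.Columns.Add("AcctId", typeof(int));
    table.Columns.Add("Address1", typeof(string));
    table.Columns.Add("Address2", typeof(string));
    table.Columns.Add("Person1", typeof(string));
    table.Columns.Add("Person2", typeof(string));
    foreach (var r in randomRows) // the List of custom objects built earlier (name is illustrative)
        table.Rows.Add(r.AcctId, r.Address1, r.Address2, r.Person1, r.Person2);

    using (var bulk = new SqlBulkCopy(connectionString))
    {
        bulk.DestinationTableName = "tmp_RandomizedData";
        bulk.BatchSize = 10000;
        bulk.WriteToServer(table);
    }

    Bulk copy typically loads hundreds of thousands of narrow rows in seconds rather than tens of minutes, and adding the primary key and indexes after the load, as planned, remains the right order.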

    Read the article

  • Patterns for non-layered applications

    - by Paul Stovell
    In Patterns of Enterprise Application Architecture, Martin Fowler writes: This book is thus about how you decompose an enterprise application into layers and how those layers work together. Most nontrivial enterprise applications use a layered architecture of some form, but in some situations other approaches, such as pipes and filters, are valuable. I don't go into those situations, focussing instead on the context of a layered architecture because it's the most widely useful. What patterns exist for building non-layered applications/parts of an application? Take a statistical modelling engine for a financial institution. There might be a layer for data access, but I expect that most of the code would be in a single layer. Would you still expect to see Gang of Four patterns in such a layer? How about a domain model? Would you use OO at all, or would it be purely functional? The quote mentions pipes and filters as alternate models to layers. I can easily imagine such an engine using pipes as a way to break down the data processing. What other patterns exist? Are there common patterns for areas like task scheduling, results aggregation, or work distribution? What are some alternatives to MapReduce?
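    To make the pipes-and-filters alternative concrete, here is a toy sketch (my own illustration, not from the question): each stage is a pure function composed over a lazy stream, with no layers involved.

    using System;
    using System.Linq;

    // Toy pipes-and-filters pipeline: data flows through independent stages.
    class PipelineDemo
    {
        static void Main()
        {
            var samples = new[] { 1.0, -2.0, 3.5, double.NaN, 4.2 };
            var sumOfSquares = samples
                .Where(x => !double.IsNaN(x))         // filter: drop bad readings
                .Select(x => x * x)                   // transform: square
                .Aggregate(0.0, (acc, x) => acc + x); // sink: aggregate
            Console.WriteLine(sumOfSquares);
        }
    }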

    Read the article

  • PHP: Aggregate Model Classes or Uber Model Classes?

    - by sunwukung
    In many of the discussions regarding the M in MVC (sidestepping ORM controversies for a moment), I commonly see Model classes described as object representations of table data (be that an Active Record, Table Gateway, Row Gateway or Domain Model/Mapper). Martin Fowler warns against the development of an anemic domain model, i.e. a class that is nothing more than a wrapper for CRUD functionality. I've been working on an MVC application for a couple of months now. The DBAL in the application I'm working on started out simple (on account of my understanding - oh, the benefits of hindsight), and is organised so that Controllers invoke Business Logic classes, which in turn access the database via DAO/Transaction Scripts pertinent to the task at hand. There are a few "Entity" classes that aggregate these DAO objects to provide a convenient CRUD wrapper, but also embody some of the "behaviour" of that Domain concept (for example, a user - since it's easy to isolate). Taking a look at some of the code, and thinking about refactoring some of it into a Rich Domain Model, it occurred to me that were I to try and wrap the CRUD routines and behaviour of, say, a Company into a single "Model" class, that would be a sizeable class. So, my question is this: do Models represent domain objects, business logic, service layers, all of the above combined? How do you go about defining the responsibilities for these components?

    Read the article

  • What guidelines should be followed when using an unstable/testing/stable branching scheme?

    - by Elliot
    My team is currently using feature branches while doing development. For each user story in our sprint, we create a branch and work it in isolation. Hence, according to Martin Fowler, we practice Continuous Building, not Continuous Integration. I am interested in promoting an unstable/testing/stable scheme, similar to that of Debian, so that code is promoted from unstable => testing => stable. Our definition of done, I'd recommend, is when unit tests pass (TDD always), minimal documentation is complete, automated functional tests pass, and the feature has been demo'd and accepted by the PO. Once accepted by the PO, the story will be merged into the testing branch. Our test developers spend most of their time in this branch, banging on the software and continuously running our automated tests. This scares me, however, because commits from another incomplete story may now make it into the testing branch. Perhaps I'm missing something, because this seems like an undesired consequence. So, if moving to a code promotion strategy to solve our problems with feature branches, what strategy/guidelines do you recommend? Thanks.

    Read the article

  • C# connect to domain SQL Server 2005 from non-domain machine

    - by user304582
    Hi, I asked a question a few days ago (http://stackoverflow.com/questions/2795723/access-to-sql-server-2005-from-a-non-domain-machine-using-windows-authentication) which got some interesting, but not usable, suggestions. I'd like to ask the question again, but make clear what my constraints are: I have a Windows domain within which a machine is running SQL Server 2005 and which is configured to support only Windows authentication. I would like to run a C# client application on a machine on the same network, but which is NOT on the domain, and access a database on the SQL Server 2005 instance. I CANNOT create or modify OS or SQL Server users on either machine, I CANNOT make any changes to permissions or impersonation, and I CANNOT make use of runas. I know that I can write Perl and Java applications that can connect to the SQL Server database using only these four parameters: server name, database name, username (in the form domain\user), and password. In C# I have tried various things around: string connectionString = "Data Source=server;Initial Catalog=database;User Id=domain\user;Password=password"; SqlConnection connection = new SqlConnection(connectionString); connection.Open(); and tried setting integrated security to true and false, but nothing seems to work. Is what I am trying to do simply impossible in C#? Thanks for any help, Martin

    Read the article

  • Display pdf file inline in Rails app

    - by Martas
    Hi, I have a pdf file attachment saved in the cloud. The file is attached using attachment_fu. All I do to display it in the view is:

    <%= image_tag @model.pdf_attachment.public_filename %>

    When I load the page with this code in the browser, it does what I want: it displays the attached pdf file. But only on Mac. On Windows, browsers will display a broken image placeholder. Chrome's Developer Tools report: "Resource interpreted as image but transferred with MIME type application/pdf." I also tried sending the file from the controller. In PdfAttachmentController:

    def send_pdf_attachment
      pdf_attachment = PdfAttachment.find params[:id]
      send_file pdf_attachment.public_filename,
                :type => pdf_attachment.content_type,
                :file_name => pdf_attachment.filename,
                :disposition => 'inline'
    end

    in routes.rb:

    map.send_pdf_attachment '/pdf_attachments/send_pdf_attachment/:id',
                            :controller => 'pdf_attachments',
                            :action => 'send_pdf_attachment'

    and in the view:

    <%= send_pdf_attachment_path @model.pdf_attachment %>

    or

    <%= image_tag( send_pdf_attachment_path @model.pdf_attachment ) %>

    And that doesn't display the file on Mac (I didn't try on Windows); it displays the path: pdf_attachments/send_pdf_attachment/35. So, my question is: what do I do to properly display a pdf file inline? Thanks, martin

    Read the article

  • How Does One Make Scala Control Abstraction in Repeat Until?

    - by peter_pilgrim
    Hi, I am Peter Pilgrim. I watched Martin Odersky create a control abstraction in Scala. However, I can not yet seem to repeat it inside IntelliJ IDEA 9. Is it the IDE?

    package demo

    class Control {

      def repeatLoop(body: => Unit) = new Until(body)

      class Until(body: => Unit) {
        def until(cond: => Boolean) {
          body
          val value: Boolean = cond
          println("value=" + value)
          if (value)
            repeatLoop(body).until(cond)
          // if (cond) until(cond)
        }
      }

      def doTest2(): Unit = {
        var y: Int = 1
        println("testing ... repeatUntil() control structure")
        repeatLoop {
          println("found y=" + y)
          y = y + 1
        } { until(y < 10) }
      }
    }

    The error message reads:

    Information: Compilation completed with 1 error and 0 warnings
    Information: 1 error
    Information: 0 warnings
    C:\Users\Peter\IdeaProjects\HelloWord\src\demo\Control.scala
    Error: line (57) error: Control.this.repeatLoop({
      scala.this.Predef.println("found y=".+(y));
      y = y.+(1)
    }) of type Control.this.Until does not take parameters
      repeatLoop {

    In the curried function the body can be thought to return an expression (the value of y+1); however, the declaration of the body parameter of repeatUntil clearly says this can be ignored, or not? What does the error mean?

    Read the article

  • Problems compiling C++ code using Cygwin

    - by user343403
    I am trying to compile some source code in Cygwin (on Windows 7) and get the following error when I run the make file:

    g++ -DHAVE_CONFIG_H -I. -I.. -I.. -Wall -Wextra -Werror -g -O2 -MT libcommon_a-Fcntl.o -MD -MP -MF .deps/libcommon_a-Fcntl.Tpo -c -o libcommon_a-Fcntl.o `test -f 'Fcntl.cpp' || echo './'`Fcntl.cpp
    Fcntl.cpp: In function 'int setCloexec(int)':
    Fcntl.cpp:8: error: 'F_GETFD' was not declared in this scope
    Fcntl.cpp:8: error: 'fcntl' was not declared in this scope
    Fcntl.cpp:11: error: 'FD_CLOEXEC' was not declared in this scope
    Fcntl.cpp:12: error: 'F_SETFD' was not declared in this scope
    make[4]: *** [libcommon_a-Fcntl.o] Error 1
    make[4]: Leaving directory `/abyss-1.1.2/Common'
    make[3]: *** [all-recursive] Error 1
    make[3]: Leaving directory `/abyss-1.1.2'
    make[2]: *** [all] Error 2
    make[2]: Leaving directory `/abyss-1.1.2'
    make[1]: *** [.build-conf] Error 2
    make[1]: Leaving directory `/cygdrive/c/Users/Martin/Documents/NetBeansProjects/abyss-1.1.2_1'
    make: *** [.build-impl] Error 2

    The problem file is:

    #include "Fcntl.h"
    #include <fcntl.h>

    /* Set the FD_CLOEXEC flag of the specified file descriptor. */
    int setCloexec(int fd)
    {
        int flags = fcntl(fd, F_GETFD, 0);
        if (flags == -1)
            return -1;
        flags |= FD_CLOEXEC;
        return fcntl(fd, F_SETFD, flags);
    }

    I don't understand what is going on: the file fcntl.h is available, and the variables that it says were not declared in this scope do not give an error when I compile the file on its own. Any help would be much appreciated. Many thanks.

    Read the article

  • MS hotfix delayed delivery.

    - by MOE37x3
    I just requested a hotfix from support.microsoft.com and put in my email address, but I haven't received the email yet. The splash page I got after I requested the hotfix said: Hotfix Confirmation We will send these hotfixes to the following e-mail address: (my correct email address) Usually, our hotfix e-mail is delivered to you within five minutes. However, sometimes unforeseen issues in e-mail delivery systems may cause delays. We will send the e-mail from the “[email protected]” e-mail account. If you use an e-mail filter or a SPAM blocker, we recommend that you add “[email protected]” or the “microsoft.com” domain to your safe senders list. (The safe senders list is also known as a whitelist or an approved senders list.) This will help prevent our e-mail from going into your junk e-mail folder or being automatically deleted. I'm sure that the email is not getting caught in a spam catcher. How long does it normally take to get one of these hotfixes? Am I waiting for some human to approve it, or something? Should I just give up and try to get the file I need some other way? (Update: Replaced "[email protected]" with "(my correct email address)" to resolve Martín Marconcini's ambiguity.)

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables: CREATE TABLE Contact ( ID varchar(15), NAME varchar(30) ); CREATE TABLE Address ( ID varchar(15), CONTACT_ID varchar(15), NAME varchar(50) ); I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far. The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since all is kept in the Identity Map in memory. Question: What is a good approach to address this? (Avoiding unnecessary db round trips.) Some ideas: Retrieve the last ID: I can do a call to the database to retrieve the last ID, like: SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact'; But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this then I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs) - all in the current transaction context.
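    A common alternative (my suggestion, not from the post) is to have the insert itself return the generated key, inside the unit of work's transaction; this avoids the race inherent in reading information_schema. A minimal sketch in SQL Server flavour - MySQL would use LAST_INSERT_ID() and PostgreSQL/Oracle a RETURNING clause; the method and names are illustrative:

    using System;
    using System.Data.SqlClient;

    static class KeyReturningInsert
    {
        // Sketch: insert the parent and read back its generated key
        // in one round trip, inside the current transaction.
        public static long InsertContact(
            SqlConnection conn, SqlTransaction tx, string name)
        {
            const string sql =
                "INSERT INTO Contact (NAME) VALUES (@name); " +
                "SELECT CAST(SCOPE_IDENTITY() AS bigint);";
            using (var cmd = new SqlCommand(sql, conn, tx))
            {
                cmd.Parameters.AddWithValue("@name", name);
                return (long)cmd.ExecuteScalar();
            }
        }
    }

    The unit of work can then flush parents first, stamp the returned keys onto the pending Address rows, and flush those second - still one transaction, with no round trips beyond the inserts themselves.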

    Read the article

  • Is Assert.Fail() considered bad practice?

    - by Mendelt
    I use Assert.Fail a lot when doing TDD. I'm usually working on one test at a time, but when I get ideas for things I want to implement later I quickly write an empty test where the name of the test method indicates what I want to implement, as a sort of todo-list. To make sure I don't forget, I put an Assert.Fail() in the body. When trying out xUnit.net I found they hadn't implemented Assert.Fail. Of course you can always Assert.IsTrue(false), but this doesn't communicate my intention as well. I got the impression Assert.Fail wasn't implemented on purpose. Is this considered bad practice? If so, why?

    @Martin Meredith: That's not exactly what I do. I do write a test first and then implement code to make it work. Usually I think of several tests at once. Or I think about a test to write when I'm working on something else. That's when I write an empty failing test to remember. By the time I get to writing the test I neatly work test-first.

    @Jimmeh: That looks like a good idea. Ignored tests don't fail, but they still show up in a separate list. Have to try that out.

    @Matt Howells: Great idea. NotImplementedException communicates intention better than Assert.Fail() in this case.

    @Mitch Wheat: That's what I was looking for. It seems it was left out to prevent it from being abused in yet another way, the way I abuse it.
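    For illustration, the NotImplementedException idea mentioned above can look like this in xUnit.net (a sketch; the test name is invented):

    using System;
    using Xunit;

    public class TodoTests
    {
        [Fact]
        public void Discounts_apply_to_bulk_orders() // placeholder from the todo-list
        {
            // Fails until implemented, and reports as "not implemented"
            // rather than as a bare Assert.True(false).
            throw new NotImplementedException();
        }
    }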

    Read the article

  • MemoryFailPoint fires too early in WinXP 64

    - by msedi
    Hello, I have created a volume class (called VoxelVolume) with self-organizing memory management, since the GC in C# didn't provide a good mechanism for managing the contents of the volume for mapping, unmapping and remapping. Although I could have used the mechanisms of virtual memory, the problem is that the files are often too large to fit into the page file, and I don't want to force users to increase the page file size. Currently this system is working quite well, and there is no lack of resources or OutOfMemoryExceptions, since the InsufficientMemoryException raised by the MemoryFailPoint works quite well. This was all tested on a 32-bit WinXP system with 2GB of main memory. Running the same mechanism on a 64-bit system with 32GB of main memory also works well, but when the application runs, the MemoryFailPoint suddenly throws an exception although 24GB of main memory are still free. Another point: once the MemoryFailPoint has fired, it fires every time, and there is no chance to get rid of it. From what I have read so far, there is a small object heap and a large object heap (SOH and LOH). But the GC only takes real care of the SOH, and I can free the SOH from unused objects by applying GC.Collect() and GC.WaitForPendingFinalizers(). The MemoryFailPoint is obviously the only way to get a little bit of control over the LOH, but since there is enough memory left on the system I see no reason why the MemoryFailPoint should fire. Is there any experience around here using MemoryFailPoint? Thank you for your help, Martin
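    For reference, the gating pattern under discussion looks roughly like this (a minimal sketch; the 512 MB figure is an assumed estimate for the next mapping):

    using System;
    using System.Runtime;

    class VolumeLoader
    {
        // Sketch: gate a large allocation behind a MemoryFailPoint so failure
        // surfaces as InsufficientMemoryException before the work begins.
        static void MapVolume()
        {
            try
            {
                using (new MemoryFailPoint(512)) // size in megabytes (assumed estimate)
                {
                    // ... allocate and fill the volume buffers here ...
                }
            }
            catch (InsufficientMemoryException)
            {
                // Unmap other volumes or degrade gracefully instead of
                // risking an OutOfMemoryException mid-operation.
            }
        }
    }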

    Read the article

  • Handling RSS Tags with NSXMLParser for iPhone

    - by MartinW
    I've found the following code for parsing through RSS, but it does not seem to allow for nested elements:

    - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
      namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
        NSLog(@"ended element: %@", elementName);
        if ([elementName isEqualToString:@"item"]) {
            // save values to an item, then store that item into the array...
            [item setObject:currentTitle forKey:@"title"];
            [item setObject:currentLink forKey:@"link"];
            [item setObject:currentSummary forKey:@"summary"];
            [item setObject:currentDate forKey:@"date"];
            [item setObject:currentImage forKey:@"media:thumbnail"];

    The RSS used is:

    <item>
      <title>Knife robberies and burglaries up</title>
      <description>The number of robberies carried out at knife-point has increased sharply and burglaries are also up, latest crime figures indicate</description>
      <link>http://news.bbc.co.uk/go/rss/-/1/hi/uk/7844455.stm</link>
      <guid isPermaLink="false">http://news.bbc.co.uk/1/hi/uk/7844455.stm</guid>
      <pubDate>Thu, 22 Jan 2009 13:02:03 GMT</pubDate>
      <category>UK</category>
      <media:thumbnail width="66" height="49" url="http://newsimg.bbc.co.uk/media/images/45400000/jpg/_45400861_policegeneric_pa.jpg"/>
    </item>

    I need to extract the "url" attribute from the "media:thumbnail" tag. Thanks, Martin

    Read the article

  • How To Read A Remote Text File

    - by XcodeDev
    Hi, I would like to read a remote text file called posts.txt on my website. An example of the contents of the posts.txt file would be this: <div style="width : 300px; position : relative"><font face="helvetica, geneva, sans serif" size="6"><b>2</b></font><font face="helvetica, geneva, sans serif" size="4"><i> scored by iSDK</i></font><br><img src="Bar.png" /></div><div style="width : 300px; position : relative"><font face="helvetica, geneva, sans serif" size="6"><b>2</b></font><font face="helvetica, geneva, sans serif" size="4"><i> scored by martin</i></font><br><img src="Bar.png" /></div> What I wanted to know is: how can I get the score and the "scored by" text from the .txt file? The score is (in this case) the <b>2</b>, and the "scored by" text in this case would be "scored by iSDK". Any code telling me how to do this is twice as helpful! Thanks in advance, XcodeDev

    Read the article

  • Rate my C# code (~300 SLOC) using GDI+/Backgroundworker

    - by sebastianlarsson
    Hi, I want to get some feedback on my code! Below is some background info. I am taking a pre-certification course in C# (Sweden, 15 ECTS). The focus of the course is theoretical, with only limited practical work. I don't find the assignments very hard at all, to tell you the truth, but since I only have very limited work experience as a developer (I have worked 15h/week at Ericsson since November) I think I would benefit from having the certificate (70-536 and probably more). I am currently reading Martin Fowler's "Refactoring: Improving the Design of Existing Code" and I tried to apply his techniques to my latest lab in the course. I have been on the lookout for a website built around the idea of providing feedback on code, but so far I have yet to discover any. Please take a look at my code and tell me what you think. It is only roughly 300 lines of code divided into a couple of classes. GDI+, BackgroundWorker and user controls are what the lab is about. I reckon you may have to spend as little as a couple of minutes looking at the solution. Link to solution: http://www.filefactory.com/file/b18h7d5/n/Lab4_Lab5_SebastianLarsson.zip Regards and thank you, Sebastian

    Read the article

  • Processor, OS : 32bit, 64 bit

    - by Sandbox
    I am new to programming and come from a non-CS background (no formal degree). I mostly program WinForms using C#. I am confused about 32-bit and 64-bit... I mean, I have heard about the 32-bit OS and the 32-bit processor, and that based on these a program has a maximum amount of memory. How does it affect the speed of a program? There are a lot more questions which keep coming to mind. I tried to go through some Computer Organization and Architecture books. But either I am too dumb to understand what is written in there, or the writers assume that the reader has some CS background. Can someone explain these things to me in plain simple English, or point me to something which does that? EDIT: I have read things like "In 32-bit mode, they can access up to 4GB memory; in 64-bit mode, they can access much much more"... I want to know the WHY behind all such things. BOUNTY: The answers below are really good... especially the one by Martin. But I am looking for a thorough explanation, in plain simple English.
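    Since the question mentions C#, here is a tiny illustration (mine, not from the question; the Environment properties require .NET 4 or later): the size of a pointer is what makes a process 32-bit or 64-bit, and it bounds how much memory the program can address.

    using System;

    class Bitness
    {
        static void Main()
        {
            // 4 bytes => 32-bit process (at most ~4GB of addresses);
            // 8 bytes => 64-bit process (vastly larger address space).
            Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);
            Console.WriteLine("64-bit OS: {0}", Environment.Is64BitOperatingSystem);
            Console.WriteLine("64-bit process: {0}", Environment.Is64BitProcess);
        }
    }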

    Read the article

  • Access to SQL Server 2005 from a non-domain machine using Windows authentication

    - by user304582
    Hi, I have a Windows domain within which a machine is running SQL Server 2005 and which is configured to support only Windows authentication. I would like to run a C# client application on a machine on the same network, but which is NOT on the domain, and access a database on the SQL Server 2005 instance. I thought that it would be a simple matter of doing something like this: string connectionString = "Data Source=server;Initial Catalog=database;User Id=domain\user;Password=password"; SqlConnection connection = new SqlConnection(connectionString); connection.Open(); However, this fails: the client-side error is: System.Data.SqlClient.SqlException: Login failed for user 'domain\user' and the server-side error is: Error 18456, Severity 14, State 5. I have tried various things, including setting integrated security to true and false, and \\ instead of \ in the User Id, but without success. In general, I know that it is possible to connect to the SQL Server 2005 instance from a non-domain machine (for example, I am working with a Linux-based application which happily does this), but I don't seem to be able to work out how to do it from a Windows machine. Help would be appreciated! Thanks, Martin
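    One approach that is known to work in this scenario (sketched here; details may vary by environment, and it requires the full .NET Framework) is to create a NewCredentials logon via the Win32 LogonUser API and impersonate it. The credentials are then used only for outbound network authentication, so NTLM presents domain\user to the server while the connection string uses integrated security:

    using System;
    using System.ComponentModel;
    using System.Data.SqlClient;
    using System.Runtime.InteropServices;
    using System.Security.Principal;

    class DomainSqlConnect
    {
        const int LOGON32_LOGON_NEW_CREDENTIALS = 9;
        const int LOGON32_PROVIDER_WINNT50 = 3;

        [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern bool LogonUser(string user, string domain, string password,
            int logonType, int logonProvider, out IntPtr token);

        static void Main()
        {
            IntPtr token;
            // NEW_CREDENTIALS: local identity is unchanged; the supplied
            // credentials are only used for outbound (network) authentication.
            if (!LogonUser("user", "domain", "password",
                    LOGON32_LOGON_NEW_CREDENTIALS, LOGON32_PROVIDER_WINNT50, out token))
                throw new Win32Exception();

            using (var identity = new WindowsIdentity(token))
            using (identity.Impersonate())
            using (var connection = new SqlConnection(
                "Data Source=server;Initial Catalog=database;Integrated Security=SSPI"))
            {
                connection.Open(); // authenticates as domain\user via NTLM
            }
        }
    }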

    Read the article

  • Emacs - nxhtml-mode - memory full

    - by mbutz
    Working with nxhtml-mode in Emacs, I have been having problems for a few weeks. While I am working, Emacs pauses unexpectedly, eventually showing the message "!MEM FULL!" in the mode line; obviously nxhtml-mode is filling up the memory until Emacs stops working. I am working with html, php and css files. I have no idea how I could debug this problem in a meaningful way. Also I seem to be the only one to have this problem, because googling did not deliver any answers to this question. I am using Emacs 23.2 on a Linux Mint 11 system. I can not find out the version of nxhtml; it says revision 829, downloaded from http://bazaar.launchpad.net/~nxhtml/nxhtml/main/revision/829. I set up a test scenario with a minimal dot-emacs just to test nxhtml-mode. It seemed to be alright, but it does not reflect my productive setup. It would probably take a week or so to gradually include everything I am used to using within Emacs (e.g. org-mode) while testing whether nxhtml-mode dislikes anything called in my dot-emacs file. Is there another way? Can I find out what causes the memory overload? Does anyone have similar problems using nxhtml-mode? Greetings, Martin

    Read the article

  • How to prepare for a telephone interview: ‘Develop an Interview Cheat Sheet’

    - by Maria Sandu
    At Oracle we often do telephone interviews at different stages of the process with candidates, due to the fact that we hire native speakers into other countries. On this blog we already have an article with tips and tricks for phone interviews that can help you during a telephone interview. To help you prepare even better, we would like to introduce you to the basics of developing a cheat sheet. The benefit of a telephone interview is that you will be sitting at home, at your table or desk, during the interview, and not in front of someone. So use this to your advantage. The Monster website has some useful and interesting tips and tricks for developing a cheat sheet. Carole Martin, who wrote this article, says that a cheat sheet will help you feel more prepared and confident when speaking to managers over the phone. It is important to keep in mind that you shouldn't memorise what's on the sheet or check it off during the interview. Only use your cheat sheet to remind you of key facts. Here are some suggestions to include on it:

    • Divide a piece of paper in two by drawing a line. On one side of the paper, write a list of the requirements as mentioned in the job description. On the other side, list your qualities that fulfil the requirements of the employer. This will help you in answering questions about why you are the best candidate for the job and how you fit the role.
    • Do research on the company, the industry sector and the competitors, so you will get a feeling for the company's business and can ask more in-depth questions.
    • Be prepared for the most used introduction question: "Tell me a bit about yourself". Prepare a 60-second personal statement or pitch in which you summarise who you are and what you can offer, so you will be able to sell yourself from the very beginning.
    • Write down a minimum of 5 good examples to answer behavioural interview questions ("Tell me about a time when..." or "Give me an example of a time..."). These questions are used by interviewers to see how you deal with situations similar to those you might encounter in the job. Interviewers use this type of question because past behaviour is scientifically proven to be the best predictor of future behaviour.
    • List five questions to ask the interviewer about the job, the company and the industry to help you get a good understanding of whether the role and company really fit your needs and wants. To get some inspiration, check this article on inc.com.
    • Find out how much you are worth on the job market and determine your needs based on your living expenses, especially when moving abroad.
    • Ask for permission from the people you plan to use as references. Also make sure you have your CV at hand and an overview of your grades.

    Feel free to comment on this article and let us know what your experience is with developing a cheat sheet for a telephone interview. Good luck with the preparation of your sheet.

    Read the article

  • The Incremental Architect's Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx

The functionality of programs is entered via Entry Points. So what we're talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases:

1. Analyzing the requirements to find the Entry Points and their signatures
2. Designing the functionality to be executed when those Entry Points get triggered
3. Implementing the functionality according to the design, aka coding

I presume you're familiar with phase 1 in some way. And I guess you're proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. "Designing functionality? What's that supposed to mean?" you might already have thought. Here's my definition: To design functionality (or functional design for short) means thinking about... well, functions. You find a solution for what's supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it's "design", not "coding".

And here is what functional design is not: it's not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It's equally basic. It's what code needs to do in order to deliver some functionality or quality. Logic is what does what needs to be done by software. Transformations are either done through expressions or API calls. And then there is alternative control flow depending on the result of some expression. Basically it's just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do).

But calling your own function is not logic. It's not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn't it. What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That's no small feat, however. Evolvable code can hardly be overestimated. That's why to me functional design is so important. It's at the core of software development.

To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It's just a theory with no experiments to prove it. But how to do functional design?

An example of functional design

Let's assume a program to de-duplicate strings.
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line

C:\>deduplicate "a, b, a, c, d, b, e, c, a"
a, b, c, d, e

...or by clicking on a GUI button. This leads to the Entry Point function getting called. It's the program's main function in case of the batch version, or a button click event handler in the GUI version. That's the physical Entry Point, so to speak. It's inevitable. What then happens is a three-step process:

1. Transform the input data from the user into a request.
2. Call the request handler.
3. Transform the output of the request handler into a tangible result for the user.

Or to phrase it a bit more generally:

1. Accept input.
2. Transform input into output.
3. Present output.

This does not mean any of these steps requires a lot of effort. Maybe it's just one line of code to accomplish it. Nevertheless it's a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it's still on a meta-level. The application to the domain at hand is easy, though:

1. Accept string list from command line
2. De-duplicate
3. Present de-duplicated strings on standard output

And this concrete list of processing steps can easily be transformed into code:

static void Main(string[] args)
{
    var input = Accept_string_list(args);
    var output = Deduplicate(input);
    Present_deduplicated_string_list(output);
}

Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you're not so sure how to go about with the de-duplication, for example. Then just implement what's easy right now, e.g.

private static string Accept_string_list(string[] args)
{
    return args[0];
}

private static void Present_deduplicated_string_list(string[] output)
{
    var line = string.Join(", ", output);
    Console.WriteLine(line);
}

Accept_string_list() contains logic in the form of an API call. Present_deduplicated_string_list() contains logic in the form of an expression and an API call. And then repeat the functional design for the remaining processing step. What's left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design, you're left with just functions. So which functions could make up the de-duplication? Here's a suggestion:

De-duplicate:
1. Parse the input string into a true list of strings.
2. Register each string in a dictionary/map/set. That way duplicates get cast away.
3. Transform the data structure into a list of unique strings.

Processing step 2 obviously was the core of the solution. That's where real creativity was needed. That's the core of the domain.
But now, after this refinement, the implementation of each step is easy again:

private static string[] Parse_string_list(string input)
{
    return input.Split(',')
                .Select(s => s.Trim())
                .ToArray();
}

private static Dictionary<string, object> Compile_unique_strings(string[] strings)
{
    return strings.Aggregate(
        new Dictionary<string, object>(),
        (agg, s) => { agg[s] = null; return agg; });
}

private static string[] Serialize_unique_strings(Dictionary<string, object> dict)
{
    return dict.Keys.ToArray();
}

With these three additional functions Main() now looks like this:

static void Main(string[] args)
{
    var input = Accept_string_list(args);
    var strings = Parse_string_list(input);
    var dict = Compile_unique_strings(strings);
    var output = Serialize_unique_strings(dict);
    Present_deduplicated_string_list(output);
}

I think that's very understandable code: just read it from top to bottom and you know how the solution to the problem works. It's a mirror image of the initial design:

1. Accept string list from command line
2. Parse the input string into a true list of strings.
3. Register each string in a dictionary/map/set. That way duplicates get cast away.
4. Transform the data structure into a list of unique strings.
5. Present de-duplicated strings on standard output

You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it's time for some theory. Because there is method to this madness ;-)

Introducing Flow Design

Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic; a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g.:

- Get data from user
- Output result to user
- Transform data
- Parse data
- Map result for output

Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.?

1. Get data from user
2. Parse data
3. Transform data
4. Map result for output
5. Output result to user

That's great for a start into functional design. It's better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you're supposed to implement, first do a functional design. Then, once you're confident you know the processing steps - which are pretty small - refine and code them using TDD. You'll see that's much, much easier - and leads to cleaner code right away.
For more information on this approach, which I call "Informed TDD", read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I'd say. It's more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their "dimensionality": text just has one dimension, it's sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It's not visible what's the input, output, and state of the processing steps. That's why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient.

Visualizing processes

The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it's clear what goes in, it's clear what comes out, and it's clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That's why I call this approach to functional design Flow Design. It's about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That's what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: "Programming is about detail". Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling them. To me that's also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API call). TDD unfortunately mixes both responsibilities. It's just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that's one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks, functional design processes can be depicted very easily. Here's the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an "action phrase" as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step.
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean "no data is flowing", but nevertheless a signal is sent. A name like "list" or "strings" in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like "String" or "Customer" on the other hand signifies a data type. If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It's like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What's the trigger? That's why in the above process even the first processing step has an input. If you like, view such 1D flows as pipelines. Data is flowing through them from left to right. But as you can see, it's not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings).

The Principle of Mutual Oblivion

A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don't know where their input is coming from (or even when it's gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don't know where their output is going. They just produce it in their own time, independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don't know their "deployment context". They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don't depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units. By building software in such a manner, functional design interestingly follows nature. Nature's building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell "controlling" a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is "attached to". Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell "attached to" it. Saying "the nerve cell is controlling the muscle cell" thus only makes sense when viewing both from the outside. "Control" is a concept of the whole, not of its parts. Control is created by wiring up parts in a certain way. Both cells are mutually oblivious.
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going and where it's coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it's clear: functional and quality requirements need to be fulfilled. So we as developers have to become "intelligent designers" of "software cells" which we put together to form a "software organism" which responds in satisfying ways to triggers from its environment. My bet is: if nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler "machines"? So my rule is: wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That's what Flow Design is about. In that way it's even universal, I'd say. Its notation can also be applied to biology. Never mind labeling the functional units with nouns. That's ok in Flow Design. You'll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 "software cells" (functional units). Both, though, follow the same basic principle.

Translating functional units into code

Moving from functional design to code is no rocket science. In fact it's straightforward. There are two simple rules:

1. Translate an input port to a function.
2. Translate an output port either to a return statement in that function or to a function pointer visible to that function.

The simplest translation of a functional unit is a function. That's what you saw in the above example. Functions are mutually oblivious. That's why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let's be clear about one thing: there is no dependency injection in nature. For all of an organism's complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called "stop words". In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value.
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]

void filter_stop_words(
    string word,
    Action<string> onNoStopWord)
{
    if (...check if not a stop word...)
        onNoStopWord(word);
}

If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you're right. Conceptually, though, it's not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly, continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly, the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keeping functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it's called an event. In C# that's even an explicit feature.

class Filter
{
    public void filter_stop_words(string word)
    {
        if (...check if not a stop word...)
            onNoStopWord(word);
    }

    public event Action<string> onNoStopWord;
}

When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it's packed together with others into classes. You'll see examples further down the Flow Design road.

Another example of 1D functional design

Let's see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much-cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:

string WordWrap(string text, int maxLineLength) {...}

That's not an Entry Point since we don't see an application with an environment and users. Nevertheless it's a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split in multiple parts, each fitting in a line.

Flow Design

Let's start by brainstorming the process to accomplish the feat of reformatting the text. What's needed?

- Words need to be assembled into lines
- Words need to be extracted from the input text
- The resulting lines need to be assembled into the output text
- Words too long to fit in a line need to be split

Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let's tackle "average words" (words not longer than a line). Here's the Flow Design for this increment: the first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That's good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It's not outside the brackets but inside.
That means it's not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It's an implementation detail. Does any processing step require further refinement? I don't think so. They all look pretty "atomic" to me. And if not... I can always backtrack and refine a process step using functional design later, once I've gained more insight into a sub-problem.

Implementation

The implementation is straightforward, as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it's easy to evolve, as you'll see.

Flow Design - Increment 2

So far only texts consisting of "average words" are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it's there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it's time to add it to deliver the full scope. Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It's easy to extend it - instead of changing well-tested code. How's that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just "phase in" the remaining processing step: since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the "detour". The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step.

Implementation - Increment 2

A trivial implementation checking the assumption that this works does not do anything to split long words. The input is just passed on. Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. Compare this to Robert C. Martin's solution.[4] How does that solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin's. At least it seems so. Because his solution does not cover all the "word wrap situations" the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then... is a difference in LOC that important as long as it's in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it's also clarity in design. But don't take my word for it. Try Flow Design on larger problems and compare for yourself.
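Since the code screenshots from the original post do not survive in this text version, here is a hedged reconstruction of what the wired-up word wrap flow might look like; the helper bodies are my own minimal sketches, not the author's originals:

using System;
using System.Collections.Generic;
using System.Linq;

// Reconstruction sketch: each design step becomes a function, and the
// long-word split is "phased in" between extraction and reformatting.
static class WordWrapper
{
    public static string WordWrap(string text, int maxLineLength)
    {
        var words = Extract_words(text);
        var fitting = Split_long_words(words, maxLineLength); // increment 2
        var lines = Reformat(fitting, maxLineLength);
        return Assemble_text(lines);
    }

    static string[] Extract_words(string text)
    {
        return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
    }

    static string[] Split_long_words(string[] words, int max)
    {
        return words.SelectMany(w => Chunk(w, max)).ToArray();
    }

    static IEnumerable<string> Chunk(string word, int max)
    {
        for (int i = 0; i < word.Length; i += max)
            yield return word.Substring(i, Math.Min(max, word.Length - i));
    }

    static string[] Reformat(string[] words, int max)
    {
        var lines = new List<string>();
        var line = "";
        foreach (var w in words)
        {
            if (line.Length == 0) line = w;
            else if (line.Length + 1 + w.Length <= max) line += " " + w;
            else { lines.Add(line); line = w; }
        }
        if (line.Length > 0) lines.Add(line);
        return lines.ToArray();
    }

    static string Assemble_text(string[] lines)
    {
        return string.Join(Environment.NewLine, lines);
    }
}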
What's the easier, more straightforward way to clean code? And keep in mind: you ain't seen all yet ;-) There's more to Flow Design than described in this chapter.

In closing

I hope I was able to give you an impression of functional design that makes you hungry for more. To me it's an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that's necessary read my blog article here. For now let me point out to you - if you haven't already noticed - that Flow Design is a general purpose declarative language. It's "programming by intention" (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps becoming more productive. All team members can participate in functional design. This goes beyond collective code ownership. We're talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.

PS: If you like what you read, consider getting my ebook "The Incremental Architect's Napkin". It's where I compile all the articles in this series for easier reading.

[1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also to me it seems the real world is full of state and side effects. So why give them such a bad image? That's why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don't put too much of it into a single processing step.

[2] Image taken from www.physioweb.org

[3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you're using a Functional Programming language it's of course a no brainer.

[4] Taken from his blog post "The Craftsman 62, The Dark Path".

    Read the article

  • XNA Notes 008

    - by George Clingerman
This week has been a rough one. I've been sick and then in some kind of slump for my afternoon coding sessions. It could be from the cold, could be I'm still tired from writing that Windows Phone 7 game development book (which is out now!), or it could just be that I'm tired of winter and want some sunshine. All I know is that even while I'm sick, the XNA world keeps going along at its whirlwind pace. Below are the things I caught in between my coughing fits.

Time Critical XNA News:
- The 2011 MVP Summit is almost here, so pass along your feelings and thoughts so the MVPs can take them and share them with the team in person: http://forums.create.msdn.com/forums/p/76317/464136.aspx#464136
- Dream Build Play - there's no new announcement yet, but you can't get much closer to the end of February than this! http://www.dreambuildplay.com/Main/Home.aspx

XNA Team:
- Dean Johnson from the XNA team shares an excellent way of handling Guide.IsTrialMode on WP7: http://blogs.msdn.com/b/dejohn/archive/2011/02/21/calling-guide-istrialmode-on-windows-phone-7.aspx
- Nick Gravelyn tries a new tactic in deciding if there's enough interest to develop a sequel or not. Don't YOU want Pixel Man 2 to come out? http://nickgravelyn.com/pixelman2/

XNA MVPs:
- Andy "The ZMan" Dunn finally shares what he's been secretly working on these past 4 months: http://twitter.com/#!/The_Zman/status/40590269392887808 http://www.youtube.com/watch?v=Rg8Z0ZdYbvg&feature=youtu.be
- Joel Martinez lets developers around NYC know they should be signing up for Game Hack Day: http://twitter.com/joelmartinez/statuses/41118590862102528 http://gamehackday.org/71fdk

XNA Developers:
- Michael McLaughlin shares an XNA RenderTarget2D sample: http://geekswithblogs.net/mikebmcl/archive/2011/02/18/xna-rendertarget2d-sample.aspx
- Martin Caine starts a new series on Deferred Rendering in XNA 4.0: http://twitter.com/#!/MartinCaine/status/39735221339291648 http://martincaine.com/xna/deferred_rendering_in_xna_4_introduction
- ElemenyCy posts about his fun time with the IntermediateSerializer: http://www.ubergamermonkey.com/xna/holy-bloated-xml-batman/
- Ben Kane releases a narrated dev diary video for Project Splice. Let him know if you'd like to see more! (I know I do!) http://twitter.com/#!/benkane/status/39846959498002432 http://www.youtube.com/watch?v=1EmziXZUo08&feature=youtu.be
- Jason Swearingen (of Novaleaf) posts part 1 of his survey of spatial partitioning solutions: http://altdevblogaday.org/2011/02/21/spatial-partitioning-part-1-survey-of-spatial-partitioning-solutions/
- Brian Lawson of Dark Flow Studios shares what he's been up to lately, with lots of pretty screenshots and hints of announcements from Microsoft: http://www.darkflowstudios.com/entry/short-and-sweet-part-1
- Luke Avery starts a new blog where he plans on making XNA tutorials for beginners (and he's got a few started already!): http://programmingwithovery.wordpress.com/

Xbox LIVE Indie Games (XBLIG):
- GameMarx Episode 10: http://www.gamemarx.com/video/the-show/24/ep-10-february-18-2010.aspx
- Minecraft clone FortressCraft coming to XBLIG: http://www.eurogamer.net/articles/2011-02-23-minecraft-clone-fortresscraft-hits-xblig
- ezMuze+ starts an IndieGoGo fundraiser campaign to help fund their second game and get it onto even more devices! http://www.indiegogo.com/ezmuze
- Gamergeddon XBLIG round-up: http://www.gamergeddon.com/2011/02/20/xbox-indie-game-round-up-february-20th/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter
- JForce Games loses their Ego: http://jforcegames.com/blog/index.php?itemid=121&catid=4

XNA Game Development:
- @BallerIndustry reminds all XNA developers that the maths are important ;) http://twitter.com/#!/BallerIndustry/status/39317618280243200 http://www.youtube.com/watch?v=MjV3XDFsjP4&feature=player_embedded#at=106
- @suhinini stumbles on an older but extremely useful post on XNA Content Pipeline debugging: http://twitter.com/#!/suhinini/status/39270189476352000 http://badcorporatelogo.wordpress.com/2010/10/31/xna-content-pipeline-debugging-4-0/
- XNA game development workshops at Singapore universities: http://innovativesingapore.com/2011/02/xna-game-development-workshops-at-singapore-universities/
- Indiefreaks announces that IGF v0.3 is out with Xbox 360 support, SunBurn 2.0.12, and it's now open source! http://twitter.com/#!/indiefreaks/status/39391953971982336
- @liortal53 announces a new series on properly designing a game: http://twitter.com/#!/liortal53/status/39466905081217024 http://liortalblog.wordpress.com/2011/02/20/hello-cosmos/
- Indies and XNA at CodeStock 2011: http://www.gamemarx.com/news/2011/02/20/indies-and-xna-at-codestock-2011.aspx
- Train Frontier Express posts about XNA content hotloading: http://trainfrontierexpress.blogspot.com/2011/02/xna-content-hotloading-overview.html
- Slyprid announces a new character editor in Transmute: http://twitter.com/#!/slyprid/status/40146992818696192 http://www.youtube.com/watch?v=OKhFAc78LDs&feature=youtu.be
- The XNA 2D from the ground up tutorial series: http://xna-uk.net/blogs/darkgenesis/archive/2011/02/23/recap-the-xna-2d-from-the-ground-up-tutorial-series.aspx
- Sgt.Conker posts a "Clingerman" (hey, that's me!) to stay relevant: http://www.sgtconker.com/2011/02/posting-a-clingerman-to-stay-relevant/

    Read the article

  • SQL SERVER – Merge Operations – Insert, Update, Delete in Single Execution

    - by pinaldave
This blog post is written in response to T-SQL Tuesday hosted by Jorge Segarra (aka SQLChicken). I have been very active using MERGE operations in my development. However, from my consultancy work and from friends I have found that these powerful operations are not utilized most of the time. Here is my attempt to bring the usefulness of the MERGE operation to the surface one more time.

MERGE is a feature that provides an efficient way to perform multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; with the MERGE statement, we can include the logic of all such data changes in one statement that updates a row when the data is matched and inserts it when the data is unmatched. One of the most important advantages of the MERGE statement is that the entire data set is read and processed only once. Previously, three different statements had to be written to process three different activities (INSERT, UPDATE, or DELETE); with the MERGE statement, all of those activities can be done in one pass of the database table.

I have written about these MERGE operations earlier in my blog post SQL SERVER – 2008 – Introduction to Merge Statement – One Statement for INSERT, UPDATE, DELETE. One of the readers asked how we know that this operator does everything in a single pass rather than being invoked multiple times. Let us run the same example I used earlier; I am listing it here again for convenience.

-- Let's create StudentDetails and StudentTotalMarks and insert some records.
USE tempdb
GO
CREATE TABLE StudentDetails
(
    StudentID INTEGER PRIMARY KEY,
    StudentName VARCHAR(15)
)
GO
INSERT INTO StudentDetails VALUES(1,'SMITH')
INSERT INTO StudentDetails VALUES(2,'ALLEN')
INSERT INTO StudentDetails VALUES(3,'JONES')
INSERT INTO StudentDetails VALUES(4,'MARTIN')
INSERT INTO StudentDetails VALUES(5,'JAMES')
GO
CREATE TABLE StudentTotalMarks
(
    StudentID INTEGER REFERENCES StudentDetails,
    StudentMarks INTEGER
)
GO
INSERT INTO StudentTotalMarks VALUES(1,230)
INSERT INTO StudentTotalMarks VALUES(2,255)
INSERT INTO StudentTotalMarks VALUES(3,200)
GO

-- Select from tables
SELECT * FROM StudentDetails
GO
SELECT * FROM StudentTotalMarks
GO

-- Merge statement
MERGE StudentTotalMarks AS stm
USING (SELECT StudentID, StudentName FROM StudentDetails) AS sd
    ON stm.StudentID = sd.StudentID
WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
WHEN NOT MATCHED THEN
    INSERT (StudentID, StudentMarks) VALUES (sd.StudentID, 25);
GO

-- Select from tables
SELECT * FROM StudentDetails
GO
SELECT * FROM StudentTotalMarks
GO

-- Clean up (drop the referencing table first)
DROP TABLE StudentTotalMarks
GO
DROP TABLE StudentDetails
GO

The Merge Join performs very well and the following result is obtained. Let us check the execution plan for the Merge operator, and evaluate the plan for the Table Merge operator only. We can clearly see that the Number of Executions property shows the value 1, which makes it quite clear that in a single pass the MERGE operation completes the INSERT, UPDATE, and DELETE operations.

I strongly suggest you use this operation in your development whenever possible. I have seen it implemented in many data warehousing applications.
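To see the single execution from application code as well, here is a small C# sketch of my own (not from the original post), using Microsoft.Data.SqlClient (System.Data.SqlClient on older frameworks) and an assumed connection string. ExecuteNonQuery runs the MERGE once, and its return value is the combined count of inserted, updated, and deleted rows:

using System;
using Microsoft.Data.SqlClient;

class MergeDemo
{
    static void Main()
    {
        // Hypothetical connection string; adjust for your environment.
        const string connectionString =
            "Server=.;Database=tempdb;Integrated Security=true;TrustServerCertificate=true";

        const string mergeSql = @"
MERGE StudentTotalMarks AS stm
USING (SELECT StudentID, StudentName FROM StudentDetails) AS sd
    ON stm.StudentID = sd.StudentID
WHEN MATCHED AND stm.StudentMarks > 250 THEN DELETE
WHEN MATCHED THEN UPDATE SET stm.StudentMarks = stm.StudentMarks + 25
WHEN NOT MATCHED THEN INSERT (StudentID, StudentMarks) VALUES (sd.StudentID, 25);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(mergeSql, connection))
        {
            connection.Open();
            // One statement, one execution: the return value covers all three
            // DML activities performed by the single MERGE pass.
            int rowsAffected = command.ExecuteNonQuery();
            Console.WriteLine($"MERGE affected {rowsAffected} row(s).");
        }
    }
}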
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Merge

    Read the article

  • Expression Studio 4 - without SketchFlow&hellip;

    - by mbcrump
is kinda like an explosion with no "Ka-Boom"… I was excited to hear the news yesterday at Microsoft TechEd that Expression Studio 4 had officially launched. MSDN subscribers could log in and download the full release. So I logged into my MSDN account and started downloading Expression Studio 4 Premium, thinking that I was only minutes away from trying out SketchFlow 4. To my dismay, I launched Blend 4 and noticed it did not say SketchFlow on the splash screen. So I went to New Project, and the template was not available. After some digging around on the net, I learned my Premium MSDN subscription did not include SketchFlow and I would need to purchase the Ultimate edition. Below is an excerpt directly from Microsoft:

Q: What products are included in Microsoft Expression Studio 4 Ultimate?
A: Expression Studio 4 Ultimate is comprised of 4 products: Expression Web 4, Microsoft Expression Blend® 4 + SketchFlow, Expression Encoder 4 Pro, and Expression Design 4. Expression Blend 4 includes SketchFlow in the Expression Studio 4 Ultimate product only.

Q: What products are included in Microsoft Expression Studio 4 Premium?
A: Expression Studio 4 Premium is comprised of 4 products: Expression Web 4, Microsoft Expression Blend 4, Expression Encoder 4, and Expression Design 4. Expression Studio 4 Premium is not available for retail purchase.

Q: What products are included in Microsoft Expression Studio 4 Web Professional?
A: Expression Studio 4 Web Professional is comprised of 3 products: Expression Web 4, Expression Encoder 4, and Expression Design 4.

As you can see, we got screwed on this deal, and plenty of people are complaining:

Kiran says (6.07.2010 at 5:07 PM): No SketchFlow for Expression Studio 4 Premium? What a bumper for Microsoft Partners!!

Martin says (6.07.2010 at 6:18 PM): Why does the Expression Professional Subscription not include upgrades and new releases of Expression Studio? Good question, hey. I bought my subscription 5 days ago thinking I would get what I purchased, but no Expression upgrades or new releases for me. What a waste of money. I think I am not the only long-term user of this software that feels disgruntled. Sorry John, just had to tell someone.

shaggygi says (6.07.2010 at 7:31 PM): SketchFlow NOT included in Studio 4? WTF! I repeat.... WT...Freaking.... F! This is totally unacceptable. My development team purchased VS 2010 Premium w/ MSDN with the impression from Adam Kinney, Scott Guthrie, etc. that this would be included in the Premium package or some sort of free upgrade. I understand this is a marketing thing, but come on! I believe, at the very least, this should have been explained in detail before this release. John Papa... as a rep, give feedback to the team... Please please and please.... tell the powers-that-be to fix this problem. Sorry for the rant. Besides this issue, I believe it is a very good product :) Thanks

Vaclav Elias says (6.08.2010 at 4:30 AM): Well, I am also not happy that SketchFlow is only for the chosen ones :-) It is a very nice product. Actually, kind of a foundation for web development, so they could really support any MSDN subscribers.. :-(

I am hoping that Microsoft will make this right for all of us with MSDN Premium subscriptions. In the meantime, you can check out the 5-day training series available here.

    Read the article

  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought to as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back up your files. Photo by camknows.

For some people, local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files:

Local Storage Pros:
- You are in control of your data
- Your files are portable and can go with you when needed if using external or flash drives
- Files are accessible without an internet connection
- You can easily add more storage capacity as needed (additional drives, etc.)

Local Storage Cons:
- You need to arrange room for your storage media (if you have multiple external drives, etc.)
- Possible hardware failure
- No access to your files if you forget to bring your storage media with you or it is too bulky to bring along
- Theft and/or loss of home with all contents due to circumstances like fire

If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen… unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure… expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files:

Cloud Storage Pros:
- No need to carry around flash or bulky external drives
- All of your files are accessible wherever there is an internet connection
- No need to deal with local storage media (or its upkeep)
- Your files are still safe if your home is broken into or other unfortunate circumstances occur

Cloud Storage Cons:
- Your files and data are not 100% under your control
- Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc)
- No access to your files if you do not have an internet connection
- The cloud storage provider may eventually shut down due to financial hardship or other unforeseen circumstances
- The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider

You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go.

Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers!

    Read the article

  • Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews

    - by t0x1n
I am a great believer in the Boy Scout Rule: "Always check a module in cleaner than when you checked it out. No matter who the original author was, what if we always made some effort, no matter how small, to improve the module. What would be the result? I think if we all followed that simple rule, we'd see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We'd also see teams caring for the system as a whole, rather than just individuals caring for their own small little part."

I am also a great believer in the related idea of Opportunistic Refactoring: "Although there are places for some scheduled refactoring efforts, I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs to be cleaned up - by whoever. What this means is that at any time someone sees some code that isn't as clear as it should be, they should take the opportunity to fix it right there and then - or at least within a few minutes."

Particularly note the following excerpt from the refactoring article: "I'm wary of any development practices that cause friction for opportunistic refactoring ... My sense is that most teams don't do enough refactoring, so it's important to pay attention to anything that is discouraging people from doing it. To help flush this out be aware of any time you feel discouraged from doing a small refactoring, one that you're sure will only take a minute or two. Any such barrier is a smell that should prompt a conversation. So make a note of the discouragement and bring it up with the team. At the very least it should be discussed during your next retrospective."

Where I work, there is one development practice that causes heavy friction: Code Review (CR). Whenever I change anything that's not in the scope of my "assignment", I'm rebuked by my reviewers for making the change harder to review. This is especially true when refactoring is involved, since it makes line-by-line diff comparison difficult. This approach is the standard here, which means opportunistic refactoring is seldom done, and only "planned" refactoring (which is usually too little, too late) takes place, if at all.

I claim that the benefits are worth it: three reviewers will work a little harder (to actually understand the code before and after, rather than looking only at the narrow scope of which lines changed - the review itself would be better due to that alone) so that the next 100 developers reading and maintaining the code will benefit. When I present this argument to my reviewers, they say they have no problem with my refactoring, as long as it's not in the same CR. However, I claim this is a myth:

(1) Most of the time, you only realize what and how you want to refactor when you're in the midst of your assignment. As Martin Fowler puts it: "As you add the functionality, you realize that some code you're adding contains some duplication with some existing code, so you need to refactor the existing code to clean things up... You may get something working, but realize that it would be better if the interaction with existing classes was changed. Take that opportunity to do that before you consider yourself done."

(2) Nobody is going to look favorably on you releasing "refactoring" CRs you were not supposed to do. A CR has a certain overhead, and your manager doesn't want you to "waste your time" on refactoring. When it's bundled with the change you're supposed to make, this issue is minimized.
The issue is exacerbated by ReSharper: each new file I add to the change (and I can't know in advance exactly which files will end up changed) is usually littered with errors and suggestions, most of which are spot-on and totally deserve fixing. The end result is that I see horrible code, and I just leave it there. Ironically, I feel that fixing such code will not only fail to improve my standing, but will actually lower it and paint me as the "unfocused" guy who wastes time fixing things nobody cares about instead of doing his job. I feel bad about it because I truly despise bad code and can't stand watching it, let alone call it from my methods! Any thoughts on how I can remedy this situation?

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52  | Next Page >