Search Results

Search found 810 results on 33 pages for 'phil jackson'.


  • How do I stop XNA/Visual Studio from rebuilding my content project every time I build?

    - by Phil Quinn
    My group and I are working on a game in XNA 4.0 with Visual Studio 2010/2012. The main solution has 6 projects: 2 XNA game projects (1 executable, 1 class library), 1 WPF executable for the level editor, 2 standard class libraries, and a content project. Originally, the editor and engine XNA game projects each had a content reference to a separate content project. Recently, I consolidated the content projects into one to simplify asset additions. Since pushing these changes to our git repo, certain members of my group have been experiencing weird build issues. Every time they run the project, they have to re-build all of the assets. This happens regardless of whether any changes were made, even if they run the project directly after building. I've taken a few steps to figure out why this is happening. Below is the MSBuild output set on Normal verbosity. The seemingly important part is in project 4, with the line "Rebuilding all content because build settings have changed".

        1>------ Build started: Project: Engine.Core, Configuration: Debug x86 ------
        1>Build started 11/29/2012 3:24:24 AM.
        1>ResolveAssemblyReferences:
        1> A TargetFramework profile exclusion list will be generated.
        1>EmbedXnaFrameworkRuntimeProfile:
        1>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        1>GenerateTargetFrameworkMonikerAttribute:
        1>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        1>CoreCompile:
        1>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        1>XnaWriteCacheFile:
        1>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files.
        1>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        1> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        1>_CopyAppConfigFile:
        1>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files.
        1>CopyFilesToOutputDirectory:
        1> Engine.Core -> <solution-dir>\src\Engine.Core\bin\x86\Debug\TimeSink.Engine.Core.dll
        1>
        1>Build succeeded.
        1>
        1>Time Elapsed 00:00:00.13
        2>------ Build started: Project: TimeSink.Entities, Configuration: Debug x86 ------
        2>Build started 11/29/2012 3:24:25 AM.
        2>ResolveAssemblyReferences:
        2> A TargetFramework profile exclusion list will be generated.
        2>EmbedXnaFrameworkRuntimeProfile:
        2>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        2>GenerateTargetFrameworkMonikerAttribute:
        2>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        2>CoreCompile:
        2>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        2>XnaWriteCacheFile:
        2>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files.
        2>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        2> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        2>CopyFilesToOutputDirectory:
        2> TimeSink.Entities -> <solution-dir>\src\TimeSink.Entities\bin\x86\Debug\TimeSink.Entities.dll
        2>
        2>Build succeeded.
        2>
        2>Time Elapsed 00:00:00.11
        3>------ Build started: Project: Editor (Editor\Editor), Configuration: Debug x86 ------
        4>------ Build started: Project: Engine.Game, Configuration: Debug x86 ------
        3>Build started 11/29/2012 3:24:25 AM.
        3>CoreCompile:
        3> All content is already up to date
        3>ResolveAssemblyReferences:
        3> A TargetFramework profile exclusion list will be generated.
        3>EmbedXnaFrameworkRuntimeProfile:
        3>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        3>GenerateTargetFrameworkMonikerAttribute:
        3>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        3>CoreCompile:
        3>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        3>XnaWriteCacheFile:
        3>Skipping target "XnaWriteCacheFile" because all output files are up-to-date with respect to the input files.
        3>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        3> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        3>_CopyOutOfDateNestedContentItemsToOutputDirectory:
        3>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files.
        3>CopyFilesToOutputDirectory:
        3> Editor -> <solution-dir>\src\Editor\Editor\bin\x86\Debug\Editor.dll
        3>
        3>Build succeeded.
        3>
        3>Time Elapsed 00:00:00.39
        4>Build started 11/29/2012 3:24:25 AM.
        4>CoreCompile:
        4> Rebuilding all content because build settings have changed
        4> Building Textures\circle.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb
        4> Importing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter
        4> Processing Textures\circle.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor
        4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\circle.xnb
        4> Building Textures\giroux.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb
        4> Importing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter
        4> Processing Textures\giroux.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor
        4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\giroux.xnb
        4> Building Textures\Body_Neutral.png -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb
        4> Importing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.TextureImporter
        4> Processing Textures\Body_Neutral.png with Microsoft.Xna.Framework.Content.Pipeline.Processors.TextureProcessor
        4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\Textures\Body_Neutral.xnb
        4> Building font.spritefont -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb
        4> Importing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.FontDescriptionImporter
        4> Processing font.spritefont with Microsoft.Xna.Framework.Content.Pipeline.Processors.FontDescriptionProcessor
        4> Compiling <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Content\font.xnb
        4>ResolveAssemblyReferences:
        4> A TargetFramework profile exclusion list will be generated.
        4>EmbedXnaFrameworkRuntimeProfile:
        4>Skipping target "EmbedXnaFrameworkRuntimeProfile" because all output files are up-to-date with respect to the input files.
        4>GenerateTargetFrameworkMonikerAttribute:
        4>Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output files are up-to-date with respect to the input files.
        4>CoreCompile:
        4>Skipping target "CoreCompile" because all output files are up-to-date with respect to the input files.
        4>_CopyOutOfDateSourceItemsToOutputDirectoryAlways:
        4> Copying file from "<solution-dir>\src\Engine.Core\DialoguePrototypeTestDB.s3db" to "bin\x86\Debug\DialoguePrototypeTestDB.s3db".
        4>_CopyOutOfDateNestedContentItemsToOutputDirectory:
        4>Skipping target "_CopyOutOfDateNestedContentItemsToOutputDirectory" because all output files are up-to-date with respect to the input files.
        4>_CopyAppConfigFile:
        4>Skipping target "_CopyAppConfigFile" because all output files are up-to-date with respect to the input files.
        4>CopyFilesToOutputDirectory:
        4> Engine.Game -> <solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Engine.Game.exe
        4>IncrementalClean:
        4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\circle.xnb".
        4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\giroux.xnb".
        4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\Body_Neutral.xnb".
        4> Deleting file "<solution-dir>\src\Engine.Game\Engine.Game\bin\x86\Debug\font.xnb".
        4>
        4>Build succeeded.
        4>
        4>Time Elapsed 00:00:01.72
        ========== Build: 4 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========

    I can't think of how build settings could change between consecutive executions. Like I said, this only happens for half our group. One member is on a 32-bit Windows 7 Professional Boot Camp partition on a Mac. Everyone else, including those who don't have the issue, is running straight 64-bit Windows 7 Professional. Both groups have tried using VS 2010 and VS 2012. Any insight would be greatly appreciated. I can post more details upon request if this isn't thorough enough.
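    If a more detailed log would help, I can capture one at diagnostic verbosity too; something like the following (the solution filename here is just a placeholder) makes MSBuild list the inputs and outputs each target considered, which should show exactly which "build setting" the content pipeline thinks has changed between two consecutive runs:

        msbuild TimeSink.sln /t:Build /p:Configuration=Debug /p:Platform=x86 /v:diag > msbuild-diag.log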

    Read the article

  • How to manage a multiplayer asynchronous environment in a game

    - by Phil
    I'm working on a game where players can set up villages, which can contain defending units. Any of these units (each on its own tile) can be set to "campaign", which means it is no longer defending but can now be used to attack other villages. Each unit on a tile can have up to 100 health. So far so good. Oh, and it's all asynchronous, so even though the server will be aware that your village is being attacked, you won't be until the attack is over.

    The issue I'm struggling with is the following situation. Say a unit on a tile is being attacked by a player from another village. The other player sees your village and is attacking your units. You don't know this is happening, though, so you set your unit to campaign and off you go to attack another village, with the very unit that is actually being attacked by this other player. The other player stops attacking your village and leaves your unit with, say, a health of 1, which is then saved to the server. Meanwhile you are attacking another village with this same unit, and now you discover that even though it started off with 100 health, it mysteriously only has 1... Solutions? Ideas? (I've sketched one server-side idea below.)

    Edit: The simplest solutions are often the best. I referred to Clash of Clans below; well, after a bit more digging it seems that in CoC you can only attack players that are offline! Ha, that almost solves the problem. I say almost because there's still the situation where a player's village could be in the process of being attacked when they come back online; I still need to address that.

    Edit 2: A solution to the "what happens when a player is attacking your village and you come online" issue could be that the attacking player just gets kicked out of the village at that point and keeps whatever they had won up to then. It's a bit of a fudge, but it might work.
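    One server-side idea I've been toying with (just a sketch, and all the names are made up): treat the server as the single authority over unit state, version every unit, and flag it while a battle involving it is being resolved, so orders issued against stale state get rejected.

        // Hedged sketch: optimistic concurrency on a unit's server-side state.
        public class UnitState {
            public int UnitId;
            public int Health;     // authoritative value; only the server changes it
            public int Version;    // bumped on every server-side change
            public bool InCombat;  // true while a battle involving this unit is resolving
        }

        public static class UnitOrders {
            // A campaign order carries the version of the unit the client last saw.
            public static bool TryStartCampaign(UnitState unit, int clientSeenVersion) {
                if (unit.InCombat) return false;                     // being attacked right now
                if (unit.Version != clientSeenVersion) return false; // client acted on stale data
                unit.Version++;                                      // the order itself is a change
                return true;
            }
        }

    The client would then treat a refusal as "re-sync this unit before using it", which at least stops a 100-health ghost of the unit marching off to war.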

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows.

    What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics used in determining the best way of executing a SQL query might be, and they most certainly are complex, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.

    A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because the timing granularity isn’t actually at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without the OPTION (RECOMPILE) hint, it is doing well.

    What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Optimizer with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion.

    What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect the query optimizer to throw up its hands and produce a bad execution plan, as if it were a table variable. After all, it knows nothing about the statistics in one of the tables, so how could it do any better? Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows since we know the bigger picture now.)

    Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table: the table variable is faster up to 8,000 rows, and then there is not much in it up to 100,000 rows. Past the 8,000-row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different one, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null.

    The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 ms. If both Table Variables use a unique constraint, then the query takes 36 ms. If neither table contains a Primary Key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variable and the other is a temporary table, it took 113 ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 ms. If both tables are temporary tables and both have primary keys, it took 46 ms.

    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you’ve done even better, down to half that time. Here are the text execution plans.

    So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis. If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well, slow queries, unless you give the query the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin’s Skew. In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it: the following test showed that once the table had 54 sequential rows in it, then it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We’ve all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions. Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required; they require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
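    If you want to reproduce the flavour of the first experiment, here is a minimal sketch along the lines described above. The table names and row counts are mine rather than the original harness, and your timings will vary; the point is only the presence or absence of the OPTION (RECOMPILE) hint on the join between two table-variable heaps.

        DECLARE @First TABLE (RandomNumber INT NOT NULL)
        DECLARE @Second TABLE (RandomNumber INT NOT NULL)

        INSERT INTO @First (RandomNumber)
          SELECT TOP 40000 ABS(CHECKSUM(NEWID())) % 100000
            FROM sys.all_columns a CROSS JOIN sys.all_columns b

        INSERT INTO @Second (RandomNumber)
          SELECT TOP 40000 ABS(CHECKSUM(NEWID())) % 100000
            FROM sys.all_columns a CROSS JOIN sys.all_columns b

        DECLARE @Start DATETIME
        SELECT @Start = GETDATE()

        SELECT COUNT(*)                      --how many matches between the two heaps?
          FROM @First f
            INNER JOIN @Second s ON f.RandomNumber = s.RandomNumber
          OPTION (RECOMPILE)                 --comment this out to watch the plan go bad

        SELECT DATEDIFF(ms, @Start, GETDATE()) AS [Elapsed ms]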

    Read the article

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing. When a single database supports more than one application, the problem gets more interesting.

    I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be at a compatible ‘version’. Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’. For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free. If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten. A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping database and application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana.

    I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database. There is no good technical reason for needing that sort of access that I’ve ever come across. Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: this is the application-interface. If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application-interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles.

    The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module.

    If you have a stable defined application-interface with the database (yes, one for each application, usually), you need to keep the external definitions of the components of this interface in version control, linked with the application source, and carefully track and negotiate any changes between database developers and application developers. Essentially, the application development team owns the interface definition, and the onus is on the database developers to implement it and maintain it, in conformance. Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.
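    To make the idea concrete, here is a minimal sketch of what such an application-interface might look like in SQL Server. All the object names are invented for illustration; the point is that the application's role can see only the interface schema, never the base tables.

        CREATE SCHEMA CustomerInterface
        GO
        --a view standing in for direct reads of the base table
        CREATE VIEW CustomerInterface.CurrentCustomers
        AS
          SELECT CustomerID, CustomerName, Region
            FROM dbo.Customer
            WHERE IsDeleted = 0
        GO
        --a procedure standing in for direct writes
        CREATE PROCEDURE CustomerInterface.AddCustomer
          @CustomerName NVARCHAR(100),
          @Region NVARCHAR(50)
        AS
          INSERT INTO dbo.Customer (CustomerName, Region)
            VALUES (@CustomerName, @Region)
        GO
        --the application's role gets the interface, and nothing else
        GRANT SELECT, EXECUTE ON SCHEMA::CustomerInterface TO ApplicationRole

    The database developers are then free to refactor dbo.Customer however they like, so long as the view and the procedure keep their contract.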
    If the application-interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not on an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application.

    Normally, if all goes well, a database with a well-designed application-interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface and which, hopefully, don’t need to be altered very often. If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions, and the database developers need only implement the new interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests, which are, of course, ‘written to’ the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application-interface is never broken. Testing is easy, since your automated scripts to test the interface do not need to change.

    The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application-interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests.

    Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn’t often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding an official walking with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors. There is no natural process to ‘version’ databases; each development project will have to define a system for maintaining the version level.

    A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments look to IT management like work. They see activity, and it looks good. Work means progress. Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.

    Read the article

  • Microsoft Dev Centre accounts

    - by Phil Murray
    Looks like Microsoft is running a special offer of 95% off the yearly subscription for the Phone Dev Centre (I didn't say anything about desperate). What I was wondering is: do you need a separate account to publish to the Windows Phone app centre and the Windows App Centre? Also, I've heard some horror stories about the time it takes to get an application published on the Windows Phone marketplace; does anyone have any experience with this?

    Windows Phone Dev Centre
    Windows App Dev Centre

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You’ve just delivered a database application that seems to be working fine in production, and you just run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn’t kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has, so far, detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That’s why I’m interested in SQL code smells. SQL code smells aren’t necessarily bad practices, but just show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle.

    SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time, this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless, but we’re concerned about the occasional time it isn’t. Let’s give an example: string truncation. Let’s give another, even more frightening one: rounding errors on assignment to a number of different precision. Each requires a blog-post to explain in detail and I’m not now going to try. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren’t the same datatype, especially if you are relying on implicit conversion to work its magic. For details of the problem and the consequences, see here: SR0014: Data loss might occur when casting from {Type1} to {Type2}. For any experienced database developer, this is a more frightening read than a vampire story.

    This is why one of the SQL code smells that makes me edgy, in my own or other peoples’ code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things has gone wrong: either sloppy naming, or mixed datatypes. Sure, it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters, or the precision of a number. That is why a little check like the one I’m going to show you is excellent for tidying up your code before you check it back into source control!

    1/ Checking Parameters only

    If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks to see, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine). Even this little check can occasionally be scarily revealing.

        ;WITH userParameter AS
         ( SELECT
             c.NAME AS ParameterName,
             OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
             t.name + ' '
               + CASE --we may have to put in the length
                   WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
                     THEN '('
                       + CASE WHEN c.max_length = -1 THEN 'MAX'
                           ELSE CONVERT(VARCHAR(4),
                                  CASE WHEN t.name IN ('nchar', 'nvarchar')
                                    THEN c.max_length / 2 ELSE c.max_length
                                  END)
                         END + ')'
                   WHEN t.name IN ('decimal', 'numeric')
                     THEN '(' + CONVERT(VARCHAR(4), c.precision)
                           + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
                   ELSE ''
                 END --we've done with putting in the length
               + CASE WHEN XML_collection_ID <> 0
                   THEN --deal with object schema names
                     '(' + CASE WHEN is_XML_Document = 1
                             THEN 'DOCUMENT '
                             ELSE 'CONTENT '
                           END
                         + COALESCE(
                           (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                              FROM sys.xml_schema_collections sc
                                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                              WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
                   ELSE ''
                 END AS [DataType]
           FROM sys.parameters c
             INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
           WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
             AND parameter_id > 0)
        SELECT CONVERT(CHAR(80), objectName + '.' + ParameterName), DataType
          FROM UserParameter
          WHERE ParameterName IN
            (SELECT ParameterName FROM UserParameter
              GROUP BY ParameterName
              HAVING MIN(Datatype) <> MAX(DataType))
          ORDER BY ParameterName

    So, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long; or, even worse, a function that should be a CHAR(1) seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can’t decide whether it is a VARCHAR(2000) or a VARCHAR(MAX).

    2/ Columns and Parameters

    Actually, once we’ve cleared up the mess we’ve made of our parameter-naming in the database we’re inspecting, we’re going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match. Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn’t consistent for a datatype). We’ll have to leave them out for this check. Voila! A slight modification of the first routine.

        ;WITH userObject AS
         ( SELECT
             Name AS DataName, --the actual name of the parameter or column ('@' removed)
             --and the qualified object name of the routine
             OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,
             --now the harder bit: the definition of the datatype.
             TypeName + ' '
               + CASE --we may have to put in the length, e.g. CHAR (10)
                   WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')
                     THEN '('
                       + CASE WHEN MaxLength = -1 THEN 'MAX'
                           ELSE CONVERT(VARCHAR(4),
                                  CASE WHEN TypeName IN ('nchar', 'nvarchar')
                                    THEN MaxLength / 2 ELSE MaxLength
                                  END)
                         END + ')'
                   WHEN TypeName IN ('decimal', 'numeric') --a BCD number!
                     THEN '(' + CONVERT(VARCHAR(4), Precision)
                           + ',' + CONVERT(VARCHAR(4), Scale) + ')'
                   ELSE ''
                 END --we've done with putting in the length
               + CASE WHEN XML_collection_ID <> 0 --tush tush. XML
                   THEN --deal with object schema names
                     '(' + CASE WHEN is_XML_Document = 1
                             THEN 'DOCUMENT '
                             ELSE 'CONTENT '
                           END
                         + COALESCE(
                           (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.Name)
                              FROM sys.xml_schema_collections sc
                                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                              WHERE sc.xml_collection_ID = XML_collection_ID), 'NULL') + ')'
                   ELSE ''
                 END AS [DataType],
             DataObjectType
           FROM
             (SELECT t.name AS TypeName, REPLACE(c.name, '@', '') AS Name,
                     c.max_length AS MaxLength, c.precision AS [Precision],
                     c.scale AS [Scale], c.[Object_id] AS ObjectID, XML_collection_ID,
                     is_XML_Document, 'P' AS DataObjectType
                FROM sys.parameters c
                  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
                    AND parameter_id > 0
              UNION ALL
              SELECT t.name AS TypeName, c.name AS Name, c.max_length AS MaxLength,
                     c.precision AS [Precision], c.scale AS [Scale],
                     c.[Object_id] AS ObjectID, XML_collection_ID, is_XML_Document,
                     'C' AS DataObjectType
                FROM sys.columns c
                  INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
                WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
             ) f)
        SELECT CONVERT(CHAR(80), objectName + '.'
                 + CASE WHEN DataObjectType = 'P' THEN '@' ELSE '' END + DataName), DataType
          FROM UserObject
          WHERE DataName IN
            (SELECT DataName FROM UserObject
              GROUP BY DataName
              HAVING MIN(Datatype) <> MAX(DataType))
          ORDER BY DataName

    Hmm. I can tell you I found quite a few minor issues with the various databases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a VARCHAR(10) in the Customer table. Hmm, odd. Why is a city fifty characters long in that view? The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you’ll spot other mistakes. There are no bugs, but just mess.

    We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is for a slightly more in-depth look at table columns. You’ll notice that we’ve deliberately removed the indication of whether a column is persisted, or is an identity column, because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can quite help in some circumstances), then uncomment them!

        ;WITH userColumns AS
         ( SELECT
             c.NAME AS columnName,
             OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
             REPLACE(t.name + ' '
               + CASE WHEN is_computed = 1
                   THEN ' AS ' + --do DDL for a computed column
                     (SELECT definition FROM sys.computed_columns cc
                       WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)
                   --we may have to put in the length
                   WHEN t.Name IN ('char', 'varchar', 'nchar', 'nvarchar')
                     THEN '('
                       + CASE WHEN c.Max_Length = -1 THEN 'MAX'
                           ELSE CONVERT(VARCHAR(4),
                                  CASE WHEN t.Name IN ('nchar', 'nvarchar')
                                    THEN c.Max_Length / 2 ELSE c.Max_Length
                                  END)
                         END + ')'
                   WHEN t.name IN ('decimal', 'numeric')
                     THEN '(' + CONVERT(VARCHAR(4), c.precision)
                           + ',' + CONVERT(VARCHAR(4), c.Scale) + ')'
                   ELSE ''
                 END
               + CASE WHEN c.is_rowguidcol = 1 THEN ' ROWGUIDCOL' ELSE '' END
               + CASE WHEN XML_collection_ID <> 0
                   THEN --deal with object schema names
                     '(' + CASE WHEN is_XML_Document = 1
                             THEN 'DOCUMENT '
                             ELSE 'CONTENT '
                           END
                         + COALESCE(
                           (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                              FROM sys.xml_schema_collections sc
                                INNER JOIN Sys.Schemas ss ON sc.schema_ID = ss.schema_ID
                              WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
                   ELSE ''
                 END
               + CASE WHEN is_identity = 1
                   THEN CASE WHEN OBJECTPROPERTY(object_id, 'IsUserTable') = 1
                               AND COLUMNPROPERTY(object_id, c.name, 'IsIDNotForRepl') = 0
                               AND OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                          THEN ''
                          ELSE ' NOT FOR REPLICATION '
                        END
                   ELSE ''
                 END
               + CASE WHEN c.is_nullable = 0 THEN ' NOT NULL' ELSE ' NULL' END
               + CASE WHEN c.default_object_id <> 0
                   THEN ' DEFAULT ' + object_Definition(c.default_object_id)
                   ELSE ''
                 END
               + CASE WHEN c.collation_name IS NULL THEN ''
                   WHEN c.collation_name <>
                     (SELECT collation_name FROM sys.databases
                       WHERE name = DB_NAME()) COLLATE Latin1_General_CI_AS
                     THEN COALESCE(' COLLATE ' + c.collation_name, '')
                   ELSE ''
                 END, '  ', ' ') AS [DataType]
           FROM sys.columns c
             INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
           WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')
        SELECT CONVERT(CHAR(80), objectName + '.' + columnName), DataType
          FROM UserColumns
          WHERE columnName IN
            (SELECT columnName FROM UserColumns
              GROUP BY columnName
              HAVING MIN(Datatype) <> MAX(DataType))
          ORDER BY columnName

    If you take a look down the results against AdventureWorks, you’ll see once again that there are things to investigate: mostly, in the illustration, discrepancies between null and non-null datatypes. So, I hear you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you’ll find it there. Sadly, these temporary variables are not stored in the metadata, so we’ll have to find a more subtle way of flushing these out, and that will, I’m afraid, have to wait!
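    (If you can’t wait, a crude starting point, and it is no more than that, is to grep the routine definitions themselves for DECLARE statements; sys.sql_modules stores the source of each routine, so something like this at least tells you where to look:

        SELECT OBJECT_SCHEMA_NAME(object_id) + '.' + OBJECT_NAME(object_id) AS [Routine]
          FROM sys.sql_modules
          WHERE definition LIKE '%DECLARE @%'

    Matching up the datatypes from the text is, of course, the hard part that will have to wait.)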

    Read the article

  • Evolution of an Application: how to manage and improve core engine?

    - by Phil Carter
    The web application I work on has been live for a year now, but it's time for it to evolve, and one of the ways in which it is evolving is into a multi-brand application: in this case, several different companies using the application, with different templates/content and some slight business-logic changes between them.

    The problem I'm facing is implementing a best practice across the site where there are differences in business logic for each brand. These will mostly be very superficial, such as using an alternative mailing-list provider or capturing some extra data in a form. I don't want to have if (brand === x) { ... } else { ... } all over the site, especially as most of what needs to be changed can be handled by extending the existing class. I've thought of several methods that could be used to instantiate the correct class, but I'm just not sure which is going to be best, especially as some seem to lead to duplication of more code than should be necessary. Here's what I've considered:

    1) Use a static loader, similar to Zend_Loader, which takes the class being requested, has knowledge of the brand, and can then return the correct object: $class = App_Loader::getObject('User', $brand);

    2) Factory classes. We use these in the application already for Products, but we could utilise them here also to provide a transparent interface to the class (there's a sketch of this option below).

    3) Routing the page request to a specific brand controller. This, however, seems like it would duplicate a lot of code/logic.

    Is there a pattern or something else I should be considering to solve this problem? And how do you manage a growing project that has multiple custom instances in production?

    Update: This is a PHP application, so the decisions on which class to load are made per request. There could be upwards of 100+ different 'brands' running.
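    Here is a rough sketch of what I mean by option 2, in case it helps; the naming convention is hypothetical, and the fallback is the important part, so that only the brands that genuinely differ need a subclass:

        <?php
        // Brand-aware factory: looks for a brand-specific subclass first,
        // and falls back to the default implementation when there isn't one.
        class BrandFactory
        {
            public static function create($class, $brand)
            {
                $brandClass = $class . '_' . ucfirst($brand);  // e.g. User_Acme
                if (class_exists($brandClass)) {
                    return new $brandClass();
                }
                return new $class();
            }
        }

        $user = BrandFactory::create('User', $brand);

    With 100+ brands, the appeal is that a new brand costs nothing until it actually needs different behaviour.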

    Read the article

  • Restricting joystick within a radius of center

    - by Phil
    I'm using Unity3D iOS and am using the example joysticks that came with one of the packages. They work fine, but the area the joystick moves in is a rectangle, which is unintuitive for my type of game. I can figure out how to check whether the distance between the center and the current point is too far, but I can't figure out how to constrain it to a certain distance without interrupting the finger tracking. Here's the relevant code:

        using UnityEngine;
        using System.Collections;

        public class Boundary
        {
            public Vector2 min = Vector2.zero;
            public Vector2 max = Vector2.zero;
        }

        public class Joystick : MonoBehaviour
        {
            static private Joystick[] joysticks;           // A static collection of all joysticks
            static private bool enumeratedJoysticks = false;
            static private float tapTimeDelta = 0.3f;      // Time allowed between taps

            public bool touchPad;                          // Is this a TouchPad?
            public Rect touchZone;
            public Vector2 deadZone = Vector2.zero;        // Control when position is output
            public bool normalize = false;                 // Normalize output after the dead-zone?
            public Vector2 position;                       // [-1, 1] in x,y
            public int tapCount;                           // Current tap count

            private int lastFingerId = -1;                 // Finger last used for this joystick
            private float tapTimeWindow;                   // How much time there is left for a tap to occur
            private Vector2 fingerDownPos;
            private float fingerDownTime;
            private float firstDeltaTime = 0.5f;

            private GUITexture gui;                        // Joystick graphic
            private Rect defaultRect;                      // Default position / extents of the joystick graphic
            private Boundary guiBoundary = new Boundary(); // Boundary for joystick graphic
            public Vector2 guiTouchOffset;                 // Offset to apply to touch input
            private Vector2 guiCenter;                     // Center of joystick

            private Vector3 tmpv3;
            private Rect tmprect;
            private Color tmpclr;

            public float allowedDistance;

            public enum JoystickType { movement, rotation }
            public JoystickType joystickType;

            public void Start()
            {
                // Cache this component at startup instead of looking up every frame
                gui = (GUITexture)GetComponent(typeof(GUITexture));

                // Store the default rect for the gui, so we can snap back to it
                defaultRect = gui.pixelInset;

                if (touchPad)
                {
                    // If a texture has been assigned, then use the rect from the gui as our touchZone
                    if (gui.texture)
                        touchZone = gui.pixelInset;
                }
                else
                {
                    // This is an offset for touch input to match with the top left
                    // corner of the GUI
                    guiTouchOffset.x = defaultRect.width * 0.5f;
                    guiTouchOffset.y = defaultRect.height * 0.5f;

                    // Cache the center of the GUI, since it doesn't change
                    guiCenter.x = defaultRect.x + guiTouchOffset.x;
                    guiCenter.y = defaultRect.y + guiTouchOffset.y;

                    // Let's build the GUI boundary, so we can clamp joystick movement
                    guiBoundary.min.x = defaultRect.x - guiTouchOffset.x;
                    guiBoundary.max.x = defaultRect.x + guiTouchOffset.x;
                    guiBoundary.min.y = defaultRect.y - guiTouchOffset.y;
                    guiBoundary.max.y = defaultRect.y + guiTouchOffset.y;
                }
            }

            public void Disable()
            {
                gameObject.active = false;
                enumeratedJoysticks = false;
            }

            public void ResetJoystick()
            {
                if (joystickType != JoystickType.rotation)
                {
                    // Don't do anything if turret mode
                    // Release the finger control and set the joystick back to the default position
                    gui.pixelInset = defaultRect;
                    lastFingerId = -1;
                    position = Vector2.zero;
                    fingerDownPos = Vector2.zero;
                    if (touchPad)
                    {
                        tmpclr = gui.color;
                        tmpclr.a = 0.025f;
                        gui.color = tmpclr;
                    }
                }
                else
                {
                    //gui.pixelInset = defaultRect;
                    lastFingerId = -1;
                    position = position;
                    fingerDownPos = fingerDownPos;
                    if (touchPad)
                    {
                        tmpclr = gui.color;
                        tmpclr.a = 0.025f;
                        gui.color = tmpclr;
                    }
                }
            }

            public bool IsFingerDown()
            {
                return (lastFingerId != -1);
            }

            public void LatchedFinger(int fingerId)
            {
                // If another joystick has latched this finger, then we must release it
                if (lastFingerId == fingerId)
                    ResetJoystick();
            }

            public void Update()
            {
                if (!enumeratedJoysticks)
                {
                    // Collect all joysticks in the game, so we can relay finger latching messages
                    joysticks = (Joystick[])FindObjectsOfType(typeof(Joystick));
                    enumeratedJoysticks = true;
                }

                // Check if distance is over the allowed amount
                // Get centerPosition
                // Get current position
                // Get distance
                // If over, don't allow

                int count = iPhoneInput.touchCount;

                // Adjust the tap time window while it still available
                if (tapTimeWindow > 0)
                    tapTimeWindow -= Time.deltaTime;
                else
                    tapCount = 0;

                if (count == 0)
                    ResetJoystick();
                else
                {
                    for (int i = 0; i < count; i++)
                    {
                        iPhoneTouch touch = iPhoneInput.GetTouch(i);
                        Vector2 guiTouchPos = touch.position - guiTouchOffset;
                        bool shouldLatchFinger = false;

                        if (touchPad)
                        {
                            if (touchZone.Contains(touch.position))
                                shouldLatchFinger = true;
                        }
                        else if (gui.HitTest(touch.position))
                        {
                            shouldLatchFinger = true;
                        }

                        // Latch the finger if this is a new touch
                        if (shouldLatchFinger && (lastFingerId == -1 || lastFingerId != touch.fingerId))
                        {
                            if (touchPad)
                            {
                                tmpclr = gui.color;
                                tmpclr.a = 0.15f;
                                gui.color = tmpclr;
                                lastFingerId = touch.fingerId;
                                fingerDownPos = touch.position;
                                fingerDownTime = Time.time;
                            }

                            lastFingerId = touch.fingerId;

                            // Accumulate taps if it is within the time window
                            if (tapTimeWindow > 0)
                            {
                                tapCount++;
                                print("tap" + tapCount.ToString());
                            }
                            else
                            {
                                tapCount = 1;
                                print("tap" + tapCount.ToString());
                                // Tell gameobject that player has tapped turret joystick
                                if (joystickType == JoystickType.rotation)
                                {
                                    //TODO: Call!
                                }
                                tapTimeWindow = tapTimeDelta;
                            }

                            // Tell other joysticks we've latched this finger
                            foreach (Joystick j in joysticks)
                            {
                                if (j != this)
                                    j.LatchedFinger(touch.fingerId);
                            }
                        }

                        if (lastFingerId == touch.fingerId)
                        {
                            // Override the tap count with what the iPhone SDK reports if it is greater
                            // This is a workaround, since the iPhone SDK does not currently track taps
                            // for multiple touches
                            if (touch.tapCount > tapCount)
                                tapCount = touch.tapCount;

                            if (touchPad)
                            {
                                // For a touchpad, let's just set the position directly based on distance from initial touchdown
                                position.x = Mathf.Clamp((touch.position.x - fingerDownPos.x) / (touchZone.width / 2), -1, 1);
                                position.y = Mathf.Clamp((touch.position.y - fingerDownPos.y) / (touchZone.height / 2), -1, 1);
                            }
                            else
                            {
                                // Change the location of the joystick graphic to match where the touch is
                                tmprect = gui.pixelInset;
                                tmprect.x = Mathf.Clamp(guiTouchPos.x, guiBoundary.min.x, guiBoundary.max.x);
                                tmprect.y = Mathf.Clamp(guiTouchPos.y, guiBoundary.min.y, guiBoundary.max.y);

                                // Check distance
                                float distance = Vector2.Distance(new Vector2(defaultRect.x, defaultRect.y), new Vector2(tmprect.x, tmprect.y));
                                float angle = Vector2.Angle(new Vector2(defaultRect.x, defaultRect.y), new Vector2(tmprect.x, tmprect.y));

                                if (distance < allowedDistance)
                                {
                                    // Ok
                                    gui.pixelInset = tmprect;
                                }
                                else
                                {
                                    // This is where I don't know what to do...
                                }
                            }

                            if (touch.phase == iPhoneTouchPhase.Ended || touch.phase == iPhoneTouchPhase.Canceled)
                                ResetJoystick();
                        }
                    }
                }

                if (!touchPad)
                {
                    // Get a value between -1 and 1 based on the joystick graphic location
                    position.x = (gui.pixelInset.x + guiTouchOffset.x - guiCenter.x) / guiTouchOffset.x;
                    position.y = (gui.pixelInset.y + guiTouchOffset.y - guiCenter.y) / guiTouchOffset.y;
                }

                // Adjust for dead zone
                float absoluteX = Mathf.Abs(position.x);
                float absoluteY = Mathf.Abs(position.y);

                if (absoluteX < deadZone.x)
                {
                    // Report the joystick as being at the center if it is within the dead zone
                    position.x = 0;
                }
                else if (normalize)
                {
                    // Rescale the output after taking the dead zone into account
                    position.x = Mathf.Sign(position.x) * (absoluteX - deadZone.x) / (1 - deadZone.x);
                }

                if (absoluteY < deadZone.y)
                {
                    // Report the joystick as being at the center if it is within the dead zone
                    position.y = 0;
                }
                else if (normalize)
                {
                    // Rescale the output after taking the dead zone into account
                    position.y = Mathf.Sign(position.y) * (absoluteY - deadZone.y) / (1 - deadZone.y);
                }
            }
        }

    So the later portion of the code handles the updated position of the joystick thumb. This is where I'd like it to track the finger position in whatever direction it is still allowed to move (for example, if the finger is too far up and slightly to the +X, I'd like the joystick to be as close in X and Y as allowed within the radius). Thanks for reading!
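    Edit: One thing I'm considering for that else branch (a sketch I haven't verified yet, though Vector2.ClampMagnitude is a standard Unity call): rather than rejecting the position, clamp the thumb's offset from the joystick centre to the allowed radius, so it keeps tracking the finger's direction:

        // Instead of doing nothing when the distance check fails, clamp the
        // thumb's offset from the joystick centre to the allowed radius.
        Vector2 centre = new Vector2(defaultRect.x, defaultRect.y);
        Vector2 offset = new Vector2(tmprect.x, tmprect.y) - centre;
        offset = Vector2.ClampMagnitude(offset, allowedDistance);
        tmprect.x = centre.x + offset.x;
        tmprect.y = centre.y + offset.y;
        gui.pixelInset = tmprect;

    That should leave the thumb on the rim of the circle, in the finger's direction, which is the "as close in X and Y as allowed" behaviour I described.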

    Read the article

  • Normalisation and 'Anima notitia copia' (Soul of the Database)

    - by Phil Factor
    (A Guest Editorial for Simple-Talk) The other day, I was staring at the sys.syslanguages table in SQL Server with slightly-raised eyebrows. I’d just been reading Chris Date’s interesting book ‘SQL and Relational Theory’. He’d made the point that you’re not necessarily doing relational database operations by using a SQL Database product. The same general point was recently made by Dino Esposito about ASP.NET MVC: the use of ASP.NET MVC doesn’t guarantee you a good application design; it merely makes it possible to test it. The way I’d describe the sentiment in both cases is ‘you can hit someone over the head with a frying-pan but you can’t call it cooking’. SQL enables you to create relational databases. However, even if it smells bad, it is no crime to do hideously un-relational things with a SQL Database, just so long as it’s necessary and you can tell the difference; not only that, but also only if you’re aware of the risks and implications. Naturally, I’ve never knowingly created a database that Codd would have frowned at, but around the edges are interfaces and data feeds I’ve written that have caused hissy fits amongst the Normalisation fundamentalists.

    Part of the problem, for those who agonise about such things, is the misinterpretation of Atomicity. An atomic value is one for which, in the strange virtual universe you are creating in your database, you don’t have any interest in any of its component parts. If you aren’t interested in the electrons, neutrinos, muons, or taus, then an atom is ..er.. atomic. In the same way, if you are passed a JSON string or XML, and required to store it in a database, then all you need to do is to ask yourself, in your role as Anima notitia copia (Soul of the Database), ‘have I any interest in the contents of this item of information?’. If the answer is ‘No!’, or ‘nequequam!’, then it is an atomic value, however complex it may be. After all, you would never have the urge to store the pixels of images individually, under the misguided idea that these are the atomic values, would you? I would, of course, ask the ‘Anima notitia copia’ rather than the application developers, since there may be more than one application, and the application developers may be designing the application in the absence of full domain knowledge (or ‘by the seat of the pants’, as the technical term used to be). If, on the other hand, the answer is ‘sure, and we want to index the XML column’, then we may be in for some heavy XML-shredding sessions to get to store the ‘atomic’ values and ensure future harmony as the application develops.

    I went back to looking at the sys.syslanguages table. It has a months column with the months in a delimited list: January,February,March,April,May,June,July,August,September,October,November,December. This is an ordered list. Wicked? I seem to remember that this value, like shortmonths and days, is treated as a ‘thing’: it is merely passed off to an external C++ routine in order to format a date in a particular language, and never accessed directly within the database. As far as the database is concerned, it is an atomic value. There is more to normalisation than meets the eye.
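    If you want to stare at the same column yourself, it only takes a line; sys.syslanguages is a compatibility view, so this should work on any reasonably recent version of SQL Server:

        SELECT name, months, shortmonths, days
          FROM sys.syslanguages
          WHERE name = 'us_english'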

    Read the article

  • A Slice of Raspberry Pi

    - by Phil Factor
    Guest editorial for the ITPro/SysAdmin newsletter

    The Raspberry Pi Foundation has done a superb design job on their new $35 network-enabled Linux computer. This tiny machine, incorporating an ARM processor on a Broadcom BCM2835 multimedia chip, aims to put the fun back into learning computing. The public response has been overwhelmingly positive.

    Note that aim: "…to put the fun back". Education in Information Technology is in dire straits. It always has been, but seems to have deteriorated further still, even in the face of improved provision of equipment.

    In many countries, the government controls the curriculum. It predicted a shortage in office-based IT skills, and so geared the ICT curriculum toward mind-numbing training in word-processing and spreadsheet skills. Instead, the shortage has turned out to be in people with an engineering mindset, who can solve problems with whatever technologies are available and learn new techniques quickly, in a rapidly-changing field. In retrospect, the assumption that specific training was required, rather than an education, was an idiotic response to the arrival of mainstream information technology. As a result, ICT became a disaster area, which discouraged a generation of youngsters from a career in IT, and thereby led directly to the shortage of people with the skills that are required to exploit the potential of Information Technology.

    Raspberry Pi aims to reverse the trend. This is a rig that is geared to fast graphics in high resolution. It is no toy. It should be a superb games machine. However, the use of Fedora, Debian, or Arch Linux ARM shows the more serious educational intent behind the Foundation's work. It looks like it will even do some office work too!

    So, get hold of any power supply that provides a 5VDC source at the required 700mA; an old BlackBerry charger will do or, alternatively, it will run off four AA cells. You'll need a USB hub to support the mouse and keyboard, and maybe a hard drive. You'll want a DVI monitor (with audio out) or TV (sound and video). You'll also need to be able to cope with wired Ethernet 10/100, if you want networking. With this lot assembled, stick the paraphernalia on the back of the HDTV with Blu-Tack, get a nice keyboard, and you have a classy Linux-based home computer. The major cost is in the TV and the keyboard. If you're not already writing software for this platform then maybe, at a time when some countries are talking of orders in the millions, you should consider it.

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details, and I now knew for certain the start point, the end, and how it should be achieved. Instantly, the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold in my head the details of all the columns and variables, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code, with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing their code and having to rewrite it from scratch. If you’ve never accidentally lost your code, it is worth doing it deliberately once, for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work, rather than casting it dramatically into the bin, or setting a light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to rewrite the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant, universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be, as we knew what forms it had to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
    The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines (we switched to a Helvetica font to look more ‘Bauhaus’). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences of complete rewrites. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the same mistake with Arago and Quattro Pro, and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible; the rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by a young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite, buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if he charges the machine-guns determinedly enough, all will be well. Grim experience brings out the humility in any experienced programmer. I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought; one needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster-recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again. There was a time when one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine, with long experience in the software industry, has long harboured the idea of ‘source-control wet-work’, where one hires a malicious hacker in some wild eastern country to break into one’s own source-control system and destroy all trace of the source of an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source-control system that, on running all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?
    Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project, because we are far too prone to cling to our existing source code.

    Read the article

  • Getting away from a customized Magento 1.4 installation - Magento 1.6, OpenCart, or others?

    - by Phil
    I'm dealing with a Magento 1.4.0.0 Community Edition installation with various undocumented changes to the core (mostly integration with an ERP system), an outdated Sweet Tooth Points & Rewards module and some custom payment providers. It also doubles as a mediocre blogging/CMS system. It has one store for each of 3 different languages, with about 40 product categories for a few hundred products. [rant] With no prior experience with any PHP e-commerce systems, I find it very difficult to work with. I attempted to install Magento 1.4.0.0 on my local WAMP dev machine; it installs fine, but the main page and search do not show any products, no matter what I do in the backend admin panel. I don't know what's wrong with it, and whatever information I've googled is either too old or too new for Magento 1.4. Later I was given FTP access to the testing server, on which neither my manager nor I have permission to install XDebug, as apparently it runs on the same machine as the production server (yikes). Trying to learn how Magento works is torture. I spent a week trying to add some fields to the Onepage Checkout before giving up and going to work on something else. The template system, just like the rest of Magento, is a bloated mishmash of overcomplicated directory structures, weird config XML files and EAV databases. I went into 6 different models and several content blocks in the backend just to change what the front page looks like. With little helpful, clear documentation (unlike CodeIgniter), and various breaking changes between minor point revisions that make it hard to find useful information, Magento 1.4 is a developer killer. [/rant] The client is planning to redesign the site and has decided it might as well move on from this unsustainable, hacky, upgrade-unfriendly, developer-unfriendly mess. Magento 1.4 is starting to show its age and, with Magento 1.7 coming soon, the client is considering upgrading to Magento 1.6 or 1.7 if it has improved since 1.4. The customizations done to the current Magento 1.4 installation would have to be redone, and a new license for the Sweet Tooth Points & Rewards module bought. The client is also open to other e-commerce systems. I've looked at OpenCart, and it seems quite developer-friendly, with a fairly simple structure. I found some complaints regarding its performance when the shop has thousands of categories or products, but this is not an issue with the current number of products my client has. It seems to be solid ground for easy customization, to bring the rewards system and ERP integration over. What should the client upgrade to in this case?

    Read the article

  • TypeScript or JavaScript for noob web developer [closed]

    - by Phil Murray
    Following the recent release by Microsoft of TypeScript, I was wondering if this is something that should be considered by an experienced WinForms and XAML developer looking to get into more web development. From reviewing a number of sites and videos online, it appears that the type system in TypeScript makes more sense to me, as a thick-client developer, than the dynamic type system in JavaScript. I understand that TypeScript compiles down to JavaScript, but it appears that the learning curve is shallower due to the current tooling provided by Microsoft. What are your thoughts?

    Read the article

  • Navigating Libgdx Menu with arrow keys or controller

    - by Phil Royer
    I'm attempting to make my menu navigable with the arrow keys or via the d-pad on a controller. So far I've had no luck. The question is: can someone walk me through how to make my current menu, or any libgdx menu, keyboard-accessible? I'm a bit noobish with some stuff and I come from a JavaScript background. Here's an example of what I'm trying to do: http://dl.dropboxusercontent.com/u/39448/webgl/qb/qb.html For a simple menu that you can just add a few buttons to, and that runs out of the box, use this: http://www.sadafnoor.com/blog/how-to-create-simple-menu-in-libgdx/ Or you can use my code, but I use a lot of custom styles. Here's an example of my code:

    import aurelienribon.tweenengine.Timeline;
    import aurelienribon.tweenengine.Tween;
    import aurelienribon.tweenengine.TweenManager;
    import com.badlogic.gdx.Game;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.Screen;
    import com.badlogic.gdx.graphics.GL20;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.g2d.Sprite;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;
    import com.badlogic.gdx.graphics.g2d.TextureAtlas;
    import com.badlogic.gdx.math.Vector2;
    import com.badlogic.gdx.scenes.scene2d.Actor;
    import com.badlogic.gdx.scenes.scene2d.InputEvent;
    import com.badlogic.gdx.scenes.scene2d.InputListener;
    import com.badlogic.gdx.scenes.scene2d.Stage;
    import com.badlogic.gdx.scenes.scene2d.ui.Skin;
    import com.badlogic.gdx.scenes.scene2d.ui.Table;
    import com.badlogic.gdx.scenes.scene2d.ui.TextButton;
    import com.badlogic.gdx.scenes.scene2d.utils.Align;
    import com.badlogic.gdx.scenes.scene2d.utils.ClickListener;
    import com.project.game.tween.ActorAccessor;

    public class MainMenu implements Screen {

        private SpriteBatch batch;
        private Sprite menuBG;
        private Texture menuBackgroundTexture; // declared here; the original snippet assigned it without a declaration
        private Stage stage;
        private TextureAtlas atlas;
        private Skin skin;
        private Table table;
        private TweenManager tweenManager;

        @Override
        public void render(float delta) {
            Gdx.gl.glClearColor(0, 0, 0, 1);
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            batch.begin();
            menuBG.draw(batch);
            batch.end();
            //table.debug();
            stage.act(delta);
            stage.draw();
            //Table.drawDebug(stage);
            tweenManager.update(delta);
        }

        @Override
        public void resize(int width, int height) {
            menuBG.setSize(width, height);
            stage.setViewport(width, height, false);
            table.invalidateHierarchy();
        }

        @Override
        public void resume() {
        }

        @Override
        public void show() {
            stage = new Stage();
            Gdx.input.setInputProcessor(stage);
            batch = new SpriteBatch();
            atlas = new TextureAtlas("ui/atlas.pack");
            skin = new Skin(Gdx.files.internal("ui/menuSkin.json"), atlas);
            table = new Table(skin);
            table.setBounds(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

            // Set background texture
            menuBackgroundTexture = new Texture("images/mainMenuBackground.png");
            menuBG = new Sprite(menuBackgroundTexture);
            menuBG.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

            // Create main menu buttons
            // Button Play
            TextButton buttonPlay = new TextButton("START", skin, "inactive");
            buttonPlay.addListener(new ClickListener() {
                @Override
                public void clicked(InputEvent event, float x, float y) {
                    ((Game) Gdx.app.getApplicationListener()).setScreen(new LevelMenu());
                }
            });
            buttonPlay.addListener(new InputListener() {
                public boolean keyDown(InputEvent event, int keycode) {
                    System.out.println("down");
                    return true;
                }
            });
            buttonPlay.padBottom(12);
            buttonPlay.padLeft(20);
            buttonPlay.getLabel().setAlignment(Align.left);

            // Button Extras
            TextButton buttonExtras = new TextButton("EXTRAS", skin, "inactive");
            buttonExtras.addListener(new ClickListener() {
                @Override
                public void clicked(InputEvent event, float x, float y) {
                    ((Game) Gdx.app.getApplicationListener()).setScreen(new ExtrasMenu());
                }
            });
            buttonExtras.padBottom(12);
            buttonExtras.padLeft(20);
            buttonExtras.getLabel().setAlignment(Align.left);

            // Button Credits
            TextButton buttonCredits = new TextButton("CREDITS", skin, "inactive");
            buttonCredits.addListener(new ClickListener() {
                @Override
                public void clicked(InputEvent event, float x, float y) {
                    ((Game) Gdx.app.getApplicationListener()).setScreen(new Credits());
                }
            });
            buttonCredits.padBottom(12);
            buttonCredits.padLeft(20);
            buttonCredits.getLabel().setAlignment(Align.left);

            // Button Settings
            TextButton buttonSettings = new TextButton("SETTINGS", skin, "inactive");
            buttonSettings.addListener(new ClickListener() {
                @Override
                public void clicked(InputEvent event, float x, float y) {
                    ((Game) Gdx.app.getApplicationListener()).setScreen(new Settings());
                }
            });
            buttonSettings.padBottom(12);
            buttonSettings.padLeft(20);
            buttonSettings.getLabel().setAlignment(Align.left);

            // Button Exit
            TextButton buttonExit = new TextButton("EXIT", skin, "inactive");
            buttonExit.addListener(new ClickListener() {
                @Override
                public void clicked(InputEvent event, float x, float y) {
                    Gdx.app.exit();
                }
            });
            buttonExit.padBottom(12);
            buttonExit.padLeft(20);
            buttonExit.getLabel().setAlignment(Align.left);

            // Adding heading-buttons to the cue
            table.add().width(190);
            table.add().width((table.getWidth() / 10) * 3);
            table.add().width((table.getWidth() / 10) * 5).height(140).spaceBottom(50);
            table.add().width(190).row();
            table.add().width(190);
            table.add(buttonPlay).spaceBottom(20).width(460).height(110);
            table.add().row();
            table.add().width(190);
            table.add(buttonExtras).spaceBottom(20).width(460).height(110);
            table.add().row();
            table.add().width(190);
            table.add(buttonCredits).spaceBottom(20).width(460).height(110);
            table.add().row();
            table.add().width(190);
            table.add(buttonSettings).spaceBottom(20).width(460).height(110);
            table.add().row();
            table.add().width(190);
            table.add(buttonExit).width(460).height(110);
            table.add().row();
            stage.addActor(table);

            // Animation settings
            tweenManager = new TweenManager();
            Tween.registerAccessor(Actor.class, new ActorAccessor());

            // Heading and buttons fade in
            Timeline.createSequence().beginSequence()
                .push(Tween.set(buttonPlay, ActorAccessor.ALPHA).target(0))
                .push(Tween.set(buttonExtras, ActorAccessor.ALPHA).target(0))
                .push(Tween.set(buttonCredits, ActorAccessor.ALPHA).target(0))
                .push(Tween.set(buttonSettings, ActorAccessor.ALPHA).target(0))
                .push(Tween.set(buttonExit, ActorAccessor.ALPHA).target(0))
                .push(Tween.to(buttonPlay, ActorAccessor.ALPHA, .5f).target(1))
                .push(Tween.to(buttonExtras, ActorAccessor.ALPHA, .5f).target(1))
                .push(Tween.to(buttonCredits, ActorAccessor.ALPHA, .5f).target(1))
                .push(Tween.to(buttonSettings, ActorAccessor.ALPHA, .5f).target(1))
                .push(Tween.to(buttonExit, ActorAccessor.ALPHA, .5f).target(1))
                .end().start(tweenManager);
            tweenManager.update(Gdx.graphics.getDeltaTime());
        }

        public static Vector2 getStageLocation(Actor actor) {
            return actor.localToStageCoordinates(new Vector2(0, 0));
        }

        @Override
        public void dispose() {
            stage.dispose();
            atlas.dispose();
            skin.dispose();
            menuBG.getTexture().dispose();
        }

        @Override
        public void hide() {
            dispose();
        }

        @Override
        public void pause() {
        }
    }

    Read the article

  • The Presentation Isn't Over Until It's Over

    - by Phil Factor
    The senior corporate dignitaries settled into their seats, looking important in a blue-suited sort of way. The lights dimmed as I strode out in front to give my presentation. I had ten vital minutes to make my pitch. I was about to dazzle the top management of a large software company who were considering the purchase of my software product. I would present them with a dazzling synthesis of diagrams and graphs, followed by a live demonstration of my software, projected from my laptop. My preparation had been meticulous. It had to be: a year’s hard work was at stake, so I’d prepared it to perfection. I stood up and took them all in, with a gaze of sublime confidence. Then the laptop expired. There are several possible alternative plans of action when this happens:
        A. Stare at the smoking laptop vacuously, flapping one’s mouth slowly up and down
        B. Stand frozen like a statue, locked in indecision between fright and flight
        C. Run out of the room, weeping
        D. Pretend that this was all planned
        E. Abandon the presentation in favour of a stilted and tedious dissertation about the software
        F. Shake your fist at the sky, and curse the sense of humour of your preferred deity
    I started off, for a few seconds, on plan B, normally referred to as the ‘rabbit in the headlamps of the car’ technique. Suddenly, a little voice inside my head spoke the famous inane words of Yogi Berra: ‘The game isn’t over until it’s over.’ ‘Too right’, I thought. What to do? I ran through the alternatives A-F inclusive in my mind, but none appealed to me. I was completely unprepared for this. Longevity has since taught me more than I wanted to know about the wacky sense of humour of fate, and nowadays I would take two laptops. I hadn’t, but decided to do the presentation anyway, as planned. I started out ignoring the dead laptop and pretending, instead, that it was still working. The audience looked startled. They were expecting plan B to be succeeded by plan C, I suspect. They weren’t used to denial on this scale. After my introductory talk, which didn’t require any visuals, I came to the diagram that described the application I’d written. I’d taken ages over it and it was hot stuff. Well, it would have been, had it been projected onto the screen. It wasn’t. Before I describe what happened then, I must explain that I have thespian tendencies. My triumph as Professor Higgins in My Fair Lady at the local operatic society is now long forgotten, but I remember, at the time of my finest performance, the moment when, glancing up over the vast audience of moist-eyed faces during the poignant scene between Eliza and Higgins at the end, I realised that I had a talent that could one day be harnessed for commercial use. I just talked about the diagram as if it were there, throwing in some extra description. The audience nodded helpfully when I’d done enough. Emboldened, I began a sort of mime (well, more of a ballet) to represent each slide as I came to it. Heaven knows I’d done my preparation and, in my mind’s eye, I could see every detail, but I had to somehow project the reality of that vision to the audience, much the same way that any actor playing Macbeth must do with the ghost of Banquo. My desperation gave me a manic energy. If you’ve ever demonstrated a Windows application entirely by mime, gesture and florid description, you’ll understand the scale of the challenge, but then I had nothing to lose.
    With a brief sentence of description here and there, and arms flailing whilst outlining the size and shape of graphs and diagrams, I used the many tricks of mime, gesture and body-language learned from playing Captain Hook, or the Sheriff of Nottingham, in pantomime. I set out determinedly on my desperate venture. There wasn’t time to do anything but focus on the challenge of the task: the world around me narrowed down to ten faces and my presentation: ten souls who had to be hypnotised into seeing a Windows application, one that was slick, well organised and functional. I don’t remember the details. Eight minutes of my life are gone completely. I was a thespian berserker. I know, however, that I followed the basic plan of building the presentation in a carefully controlled crescendo until the dazzling finale where the results were displayed on-screen. ‘And here you see the results, neatly formatted and grouped carefully to enhance the significance of the figures, together with running trend-graphs!’ I mimed an animated window opening, and looked up, in my first pause, to gaze defiantly at the audience. It was a sight I’ll never forget. Ten pairs of eyes were gazing in rapt attention at the imaginary window, and several pairs of eyes were glancing at the imaginary graphs and figures. I hadn’t had an audience like that since my starring role in Beauty and the Beast. At that moment, I realised that my desperate ploy might work. I sat down, slightly winded, when my ten minutes were up. For the first and last time in my life, the audience of a ‘PowerPoint’ presentation burst into spontaneous applause. ‘Any questions?’ ‘Yes. Have you got an agent?’ Yes, in case you’re wondering, I got the deal. They bought the software product from me there and then. However, it was a life-changing experience for me, and I have never again trusted technology as part of a presentation. Even if things can’t go wrong, they’ll go wrong, and they’ll kill the flow of what you’re presenting. If you can’t do something without the techno-props, then you shouldn’t do it. The greatest lesson of all is that great presentations require preparation and ‘stage presence’ rather than fancy graphics. Graphics are a great supporting aid, but they should never dominate to the point that you’re lost without them.

    Read the article

  • Adding complexity to remove duplicate code

    - by Phil
    I have several classes that all inherit from a generic base class. The base class contains a collection of several objects of type T. Each child class needs to be able to calculate interpolated values from the collection of objects, but since the child classes use different types, the calculation varies a tiny bit from class to class. So far I have copy/pasted my code from class to class and made minor modifications to each. But now I am trying to remove the duplicated code and replace it with one generic interpolation method in my base class. However, that is proving to be very difficult, and all the solutions I have thought of seem way too complex. I am starting to think the DRY principle does not apply as much in this kind of situation, but that sounds like blasphemy. How much complexity is too much when trying to remove code duplication? EDIT: The best solution I can come up with goes something like this:

    Base class:

        protected T GetInterpolated(int frame)
        {
            var index = SortedFrames.BinarySearch(frame);
            if (index >= 0)
                return Data[index];

            index = ~index;
            if (index == 0)
                return Data[index];
            if (index >= Data.Count)
                return Data[Data.Count - 1];

            return GetInterpolatedItem(frame, Data[index - 1], Data[index]);
        }

        protected abstract T GetInterpolatedItem(int frame, T lower, T upper);

    Child class A:

        public IGpsCoordinate GetInterpolatedCoord(int frame)
        {
            ReadData();
            return GetInterpolated(frame);
        }

        protected override IGpsCoordinate GetInterpolatedItem(int frame, IGpsCoordinate lower, IGpsCoordinate upper)
        {
            double ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
            var x = GetInterpolatedValue(lower.X, upper.X, ratio);
            var y = GetInterpolatedValue(lower.Y, upper.Y, ratio);
            var z = GetInterpolatedValue(lower.Z, upper.Z, ratio);
            return new GpsCoordinate(frame, x, y, z);
        }

    Child class B:

        public double GetMph(int frame)
        {
            ReadData();
            return GetInterpolated(frame).MilesPerHour;
        }

        protected override ISpeed GetInterpolatedItem(int frame, ISpeed lower, ISpeed upper)
        {
            var ratio = GetInterpolationRatio(frame, lower.Frame, upper.Frame);
            var mph = GetInterpolatedValue(lower.MilesPerHour, upper.MilesPerHour, ratio);
            return new Speed(frame, mph);
        }
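
    For what it's worth, call sites stay simple under this design, with all of the search and boundary handling hidden in the base class. A hypothetical usage sketch (GpsTrack and SpeedTrack are invented names for the two child classes):

        // Hypothetical child-class names and constructor, for illustration only.
        var gps = new GpsTrack("drive.log");
        var speed = new SpeedTrack("drive.log");

        // Each subclass exposes a typed lookup, while the binary search,
        // boundary handling and ratio arithmetic live in the generic base.
        IGpsCoordinate position = gps.GetInterpolatedCoord(1500);
        double mph = speed.GetMph(1500);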

    Read the article

  • Strange JavaScript error when using Kongregate's API

    - by Phil
    In the hope of finding a fellow Unity3D developer also aiming for the Kongregate contest: I've implemented the Kongregate API and can see that the game receives a call with my username and presents it in-game. I'm using Application.ExternalCall("kongregate.stats.submit", type, amount); where type is a string ("Best Score") and amount is an int (1000 or something). This is the error I'm getting: You are trying to call recursively into the Flash Player which is not allowed. In most cases the JavaScript setTimeout function, can be used as a workaround. callASFunction:function(a,b){if(FABrid...tion, can be used as a workaround."); I'm wondering: has anyone else had this error, or am I somehow doing something stupid? Thanks!
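
    The workaround the error message hints at is to defer the Kongregate call with setTimeout, so the browser isn't re-entering the Flash player while it is still inside the ExternalCall. A sketch of one way to do that from Unity, via Application.ExternalEval (the class name and method are hypothetical; only kongregate.stats.submit comes from the question):

        using UnityEngine;

        public class KongregateStats : MonoBehaviour
        {
            // Wrap the stats call in setTimeout(..., 0) so it runs after the
            // current Flash->JS call has returned, avoiding the recursive call.
            public void SubmitStat(string type, int amount)
            {
                string js = string.Format(
                    "setTimeout(function() {{ kongregate.stats.submit('{0}', {1}); }}, 0);",
                    type, amount);
                Application.ExternalEval(js);
            }
        }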

    Read the article

  • If I use my own normal values, should I turn off winding order culling?

    - by Phil
    I've discovered that I managed to program a series of boxes with indexed vertices in such a way that every other triangle (half of each face) has a backwards winding order. As a result, XNA is culling half of them. However, my Vertex objects contain normal data that I have explicitly set, and I am going to implement my own backface culling shortly to reduce the size of the VertexBuffer. Should I turn off winding-order culling and manage it myself, or should I make sure the winding order is consistent and let XNA handle it?
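
    In case it helps frame the decision: in XNA 4.0, winding-order culling is controlled by the rasterizer state, so switching it off (and back on) is a one-line state change. A minimal sketch; whether to leave it off is exactly the trade-off the question raises:

        // Render both triangle windings; XNA 4.0 state objects are immutable,
        // so assign one of the prebuilt states rather than mutating the current one.
        GraphicsDevice.RasterizerState = RasterizerState.CullNone;

        // ...draw the boxes here...

        // Restore the default, which culls counter-clockwise back faces.
        GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise;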

    Read the article

  • Weird rotation problem

    - by Phil
    I'm creating a simple tank game. No matter what I do, the turret keeps facing the target with its side. I just can't figure out how to turn it 90 degrees in Y, once, so that it faces the target correctly. I've checked the pivot in Maya, and it doesn't matter how I change it. This is the code I use to calculate how to face the target:

        void LookAt() {
            var forwardA = transform.forward;
            var forwardB = (toLookAt.transform.position - transform.position);

            var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
            var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
            var angleDiff = Mathf.DeltaAngle(angleA, angleB);
            //print(angleDiff.ToString());

            if (angleDiff > 20) {
                // Rotate to
                transform.Rotate(new Vector3(0, (-turretSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            } else if (angleDiff < 20) {
                transform.Rotate(new Vector3(0, (turretSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            } else {
            }
        }

    I'm using Unity3D and would appreciate any help I can get! Thanks!
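
    A common way out of this, assuming the turret mesh was exported with its barrel along a side axis rather than +Z, is to fold a constant 90-degree yaw offset into the computed rotation instead of fighting the pivot in Maya. A sketch (the offset's sign, the field names and the turn speed are assumptions about the setup):

        using UnityEngine;

        public class TurretAim : MonoBehaviour
        {
            public Transform target;
            public float turnSpeed = 90f; // degrees per second; an assumed value

            void Update()
            {
                Vector3 toTarget = target.position - transform.position;
                toTarget.y = 0f; // keep the turret level

                // LookRotation points the +Z axis at the target; if the mesh's
                // barrel actually lies along +X, compensate with a fixed -90 yaw
                // (flip the sign if the turret ends up facing the other side).
                Quaternion aim = Quaternion.LookRotation(toTarget)
                                 * Quaternion.Euler(0f, -90f, 0f);

                transform.rotation = Quaternion.RotateTowards(
                    transform.rotation, aim, turnSpeed * Time.deltaTime);
            }
        }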

    Read the article

  • Learn Many Languages

    - by Phil Factor
    Around twenty-five years ago, I was trying to solve the problem of recruiting suitable developers for a large business. I visited the local university (it was a technical college then). My mission was to remind them that we were a large local employer of technical people and to suggest that, as they were in the business of educating young people for a career in IT, we should work together. I anticipated a harmonious chat where we could suggest to them the idea of mentioning our name to some of their graduates. It didn’t go well. The academic staff displayed a degree of revulsion towards the whole topic of IT in the world of commerce that surprised me; tweed met charcoal-grey, trainers met black shoes. However, their antipathy to commerce was something we could have worked around, since few of their graduates were destined for a career as university lecturers. They asked me what sort of language skills we needed. I tried ducking the invidious task of naming computer languages, since I wanted recruits who were quick to adapt and learn, with a broad understanding of IT, including development methodologies, technologies, and data. However, they pressed the point, and I ended up saying that we needed a good working knowledge of C and BASIC, though FORTRAN and COBOL were, at the time, still useful. There was a ghastly silence. It was as if I’d recommended the beliefs and practices of the Bogomils of Bulgaria to a gathering of Cardinals. They stared at me severely, like owls, until the head of department broke the silence, informing me in clipped tones that they taught only Modula 2. Now, I wouldn’t blame you if at this point you hurriedly had to look up ‘Modula 2’ on Wikipedia. Based largely on Pascal, it was a specialist language for embedded systems, but I’ve never come across it in a commercial business application. Nevertheless, it was an excellent teaching language, since it taught modules, scope control, multiprogramming and the advantages of encapsulating a set of related subprograms and data structures. As long as the course also taught how to transfer these skills to other, more useful languages, it was not necessarily a problem. I said as much, but they gleefully retorted that the biggest local employer, a defence contractor specialising in radar and military technology, used nothing but Modula 2. ‘Why teach any other programming language when they will be using Modula 2 for all their working lives?’ said a complacent lecturer. On hearing this, I made my excuses and left. There could be no meeting of minds. They were providing training in a specific computer language, not an education in IT. Twenty years later, I once more worked nearby and regularly passed the long-deserted ‘brownfield’ site of the erstwhile largest local employer; the end of the Cold War had led to lean times for defence contractors. A digger was about to clear the rubble of the long-demolished factory, along with the accompanying growth of buddleia and thistles, in order to lay the infrastructure for ‘affordable housing’. Modula 2 was a distant memory. Either those employees had short working lives, or they’d retrained in other languages. The University, by contrast, was thriving, but I wondered if their erstwhile graduates had ever cursed the narrow specialisation of their training in IT as they struggled with the unexpected variety of their subsequent careers.

    Read the article

  • I want to host clients' websites, but not their email. What's the easiest way to handle this?

    - by Phil
    My company lets non-technical users build their own niche industry websites on our server, which we host. They can currently point their nameservers at their registrar to us, which ends up with them no longer having access to their email if they've already set it up through said registrar. We don't want to interfere with their existing email, nor do we want to get into the business of setting up email for them through our service. Thus, having them point A records/CNAMEs to us would work, but is this too complex for a non-techie user? We thought of having them point nameservers to us but pointing the MX records back to them, but this is also beyond their scope. Is there an easy way to 'point records' while leaving email in its initial state? Any other ideas/feedback?
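
    For illustration, the A-record approach leaves mail untouched because web and mail records are independent of each other; a hypothetical BIND-style zone fragment (every name and address below is a placeholder) might look like this:

        ; Hypothetical zone fragment -- all names and addresses are placeholders.
        www.example.com.   IN  A      203.0.113.10                  ; web traffic comes to the hosting server
        example.com.       IN  A      203.0.113.10                  ; the bare domain, too
        example.com.       IN  MX 10  mail.registrar-host.example.  ; mail keeps flowing to the registrar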

    Read the article

  • Sitting Pretty

    - by Phil Factor
    Guest Editorial for the Simple-Talk IT Pro newsletter: 'DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed.' I dislike the term 'avatar' when used to describe a portrait photograph. An avatar, in the sense of a picture, is merely the depiction of one's role-play alter ego, often a ridiculous Bronze-Age deity. However, professional image is important. The choice and creation of online photos has an effect on the way your message is received, and it is important to get that right. It is fine to use that photo of you after ten lagers on holiday in an Ibiza nightclub, but what works on Facebook looks hilarious on LinkedIn. My splendid photograph that I use online was done by a professional photographer at great expense, and I've never had the slightest twinge of regret when I remember how much I paid for it. It is me, but a more pensive and dignified edition, oozing trust and wisdom. One gasps at the magical skill with which a professional photographer can, without digital manipulation, make the best of a derisory noggin (ed: slang for a head). Even if he had offered to depict me as a semi-naked, muscle-bound, sword-wielding hero, I'd have demurred. No, any professional person needs a carefully cultivated image that looks right. I'd never thought of using that profile shot, though I couldn't help noticing the photographer flinch slightly when he first caught sight of my face. There is a problem with using an avatar: the use of a single image doesn't express the appropriate emotion. At the moment, it is weird to see someone with a laughing portrait writing something solemn. A neutral cast to the face, somewhat like a passport photo, is probably the best compromise. Actually, the same is true of a working life in IT. One of the first skills I learned was not to laugh at managers but, instead, to develop a facial expression that promoted a sense of keenness, energy and respect. Every profession has its own preferred facial cast. A neighbour of mine has the natural gift of a face that displays barely repressed grief. Though he is characteristically cheerful, he earns a remarkable income as a pallbearer. DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed. With an appropriate avatar, you could do away with a lot of the need for 'smilies' to give clues as to the meaning of what you've written on forums and blogs. If you had a set of avatars showing the full gamut of human emotions expressible in writing (rage, fear, reproach, joy, ebullience, apprehension, exasperation, dissembling, irony, pathos, euphoria, remorse and so on), it would be quite a drop-down list on forums but, given the vast prairies of space on the average hard drive, who cares? It would cut down on the number of spats in forums, just as long as one picks the right avatar. As an unreconstructed geek, I find it hard to admit to the value of image in the workplace, but it is true. Just as we use professionals to tidy up and order our CVs and job applications, we should employ experts to enhance our professional image. After all, you don't perform surgery or dentistry on yourself, do you?

    Read the article

  • No boot record when installing 13.04 from USB on WinXP

    - by Phil Leslie
    I'm replacing WinXP with 13.04 on an older PC, using wubi.exe on a USB stick. I had no problem changing the BIOS on another system that was a bit newer, but when I change the settings on the older PC to boot from USB, I get a DOS message saying "Searching for boot record... Not found" and am asked to try again. I don't have the ability to boot from a live CD, so is there a reason why I can boot from USB on a newer computer but can't on an older one? Both have options to boot from USB, but the older one can find no boot record. The system was built on 12/13/01 by American Megatrends. Since I don't have enough "reputation points" to post the screenshot image, you can see it at http://img.photobucket.com/albums/v633/boonevillephil/1029131748-00.jpg.

    Read the article

  • Best way to cleanly display 15-20 YouTube videos in a webpage?

    - by Phil
    I am currently building a website for a client who has about 20 or so videos that he wants put on a video gallery page, and I was wondering if you could give me some advice on how to go about this. I have found yoxview, and possibly lightbox... but I don't know if that popup window is really good for video browsing. Also, should/could this be asked on the Webmasters site instead of Stack Overflow?

    Read the article
