Search Results

Search found 1484 results on 60 pages for 'practical'.


  • PostgreSQL triggers and passing parameters

    - by iandouglas
This is a multi-part question. I have a table similar to this:

    CREATE TABLE sales_data (
        Company character(50),
        Contract character(50),
        top_revenue_sum integer,
        top_revenue_sales integer,
        last_sale timestamp
    );

I'd like to create a trigger for new inserts into this table, something like this:

    CREATE OR REPLACE FUNCTION add_contract() RETURNS VOID
    DECLARE
        myCompany character(50),
        myContract character(50),
    BEGIN
        myCompany = TG_ARGV[0];
        myContract = TG_ARGV[1];
        IF (TG_OP = 'INSERT') THEN
            EXECUTE 'CREATE TABLE salesdata_' || $myCompany || '_' || $myContract || ' (
                sale_amount integer,
                updated TIMESTAMP not null,
                some_data varchar(32),
                country varchar(2) );'
            EXECUTE 'CREATE TRIGGER update_sales_data
                BEFORE INSERT OR DELETE ON salesdata_' || $myCompany || '_' || $myContract || '
                FOR EACH ROW EXECUTE update_sales_data( ' || $myCompany || ',' || $myContract || ', revenue);';
        END IF;
    END;
    $add_contract$ LANGUAGE plpgsql;

    CREATE TRIGGER add_contract AFTER INSERT ON sales_data
        FOR EACH ROW EXECUTE add_contract();

Basically, every time I insert a new row into sales_data, I want to generate a new table whose name is defined as something like "salesdata_Company_Contract". So my first question is: how can I pass the Company and Contract data to the trigger so they can be passed on to the add_contract() stored procedure?

From my stored procedure, you'll see that I also want to update the original sales_data table whenever new data is inserted into the salesdata_Company_Contract table. That trigger will do something like this:

    CREATE OR REPLACE FUNCTION update_sales_data() RETURNS trigger AS $update_sales_data$
    DECLARE
        myCompany character(50) NOT NULL,
        myContract character(50) NOT NULL,
        myRevenue integer NOT NULL
    BEGIN
        myCompany = TG_ARGV[0];
        myContract = TG_ARGV[1];
        myRevenue = TG_ARGV[2];
        IF (TG_OP = 'INSERT') THEN
            UPDATE sales_data
               SET top_revenue_sales = top_revenue_sales + 1,
                   top_revenue_sum = top_revenue_sum + $myRevenue,
                   updated = now()
             WHERE Company = $myCompany AND Contract = $myContract;
        ELSIF (TG_OP = 'DELETE') THEN
            UPDATE sales_data
               SET top_revenue_sales = top_revenue_sales - 1,
                   top_revenue_sum = top_revenue_sum - $myRevenue,
                   updated = now()
             WHERE Company = $myCompany AND Contract = $myContract;
        END IF;
    END;
    $update_sales_data$ LANGUAGE plpgsql;

This will, of course, require that I pass several parameters around within these stored procedures and triggers, and I'm not sure (a) if this is even possible, (b) whether it is practical, or (c) whether it is best practice, or if we should just put this logic into our other software instead of asking the database to do this work for us.

To keep our table sizes down, as we'll have hundreds of thousands of transactions per day, we've decided to partition our data using the Company and Contract strings as part of the table names themselves, so the tables all stay very small in size; file I/O is faster for us that way, and we felt we'd get better performance. Thanks for any thoughts or direction.

My thinking, now that I've written all of this out, is that maybe we need to write stored procedures where we pass our insert data as parameters and call them from our other software: have one stored procedure do the insert into sales_data and then create the other table, and have a second stored procedure insert new data into the salesdata_Company_Contract tables, where the table name is passed to the stored proc as a parameter, and again have that stored proc do the insert and then update the main sales_data table afterward. What approach would you take?
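[Editor's note: For reference, a minimal sketch of one way the first trigger could work. It is not from the original question. In PL/pgSQL, a row trigger on sales_data sees the inserted row as NEW, so Company and Contract don't need to be passed in as arguments at all; static arguments for the per-partition trigger can still be baked in via the CREATE TRIGGER statement (they arrive in TG_ARGV). The format() function with %I/%L quoting assumes PostgreSQL 9.1 or later; table and column names are taken from the question.]

    -- Hedged sketch: derive the partition name from NEW instead of TG_ARGV.
    CREATE OR REPLACE FUNCTION add_contract() RETURNS trigger AS $add_contract$
    DECLARE
        part_name text := 'salesdata_' || trim(NEW.Company) || '_' || trim(NEW.Contract);
    BEGIN
        -- %I applies identifier quoting, which guards the generated DDL.
        EXECUTE format('CREATE TABLE %I (
                            sale_amount integer,
                            updated timestamp NOT NULL,
                            some_data varchar(32),
                            country varchar(2))', part_name);
        -- Static arguments for the per-partition trigger go in TG_ARGV
        -- via the CREATE TRIGGER statement itself (%L applies literal quoting).
        EXECUTE format('CREATE TRIGGER update_sales_data
                            BEFORE INSERT OR DELETE ON %I
                            FOR EACH ROW EXECUTE PROCEDURE update_sales_data(%L, %L)',
                       part_name, trim(NEW.Company), trim(NEW.Contract));
        RETURN NEW;
    END;
    $add_contract$ LANGUAGE plpgsql;

    CREATE TRIGGER add_contract AFTER INSERT ON sales_data
        FOR EACH ROW EXECUTE PROCEDURE add_contract();

In the companion update_sales_data() trigger, company and contract would then be read from TG_ARGV[0] and TG_ARGV[1], while the revenue amount would come from that trigger's own NEW (or OLD, on DELETE) row rather than a third argument.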


  • Explicit method tables in C# instead of OO - good? bad?

    - by FunctorSalad
Hi! I hope the title doesn't sound too subjective; I absolutely do not mean to start a debate on OO in general. I'd merely like to discuss the basic pros and cons of different ways of solving the following sort of problem.

Let's take this minimal example: you want to express an abstract datatype T with functions that may take T as input, output, or both:

    f1 : Takes a T, returns an int
    f2 : Takes a string, returns a T
    f3 : Takes a T and a double, returns another T

I'd like to avoid downcasting and any other dynamic typing. I'd also like to avoid mutation whenever possible.

1: Abstract-class-based attempt

    abstract class T {
        abstract int f1();

        // We can't have abstract constructors, so the best we can do, as I see it, is:
        abstract void f2(string s);
        // The convention would be that you'd replace calls to the original f2 by invocation
        // of the nullary constructor of the implementing type, followed by invocation of f2.
        // f2 would need to have side-effects to be of any use.

        // f3 is a problem too:
        abstract T f3(double d);
        // This doesn't express that the return value is of the *same* type as the object
        // whose method is invoked; it just expresses that the return value is *some* T.
    }

2: Parametric polymorphism and an auxiliary class (all implementing classes of TImpl will be singleton classes):

    abstract class TImpl<T> {
        abstract int f1(T t);
        abstract T f2(string s);
        abstract T f3(T t, double d);
    }

We no longer express that some concrete type actually implements our original spec -- an implementation is simply a type Foo for which we happen to have an instance of TImpl. This doesn't seem to be a problem: if you want a function that works on arbitrary implementations, you just do something like:

    // Say we want to return a Bar given an arbitrary implementation of our abstract type
    Bar bar<T>(TImpl<T> ti, T t);

At this point, one might as well skip inheritance and singletons altogether and use a

3: First-class function table

    class /* or struct, even */ TDict<T> {
        readonly Func<T, int> f1;
        readonly Func<string, T> f2;
        readonly Func<T, double, T> f3;

        TDict( ... ) { this.f1 = f1; this.f2 = f2; this.f3 = f3; }
    }

    Bar bar<T>(TDict<T> td, T t);

Though I don't see much practical difference between #2 and #3.

Example implementation:

    class MyT { /* raw data structure goes here; this class needn't have any methods */ }

    // It doesn't matter where we put the following; it could be a static method of MyT,
    // or live in some static class collecting dictionaries
    static readonly TDict<MyT> MyTDict = new TDict<MyT>(
        (t) => /* body of f1 goes here */,
        // f2
        (s) => /* body of f2 goes here */,
        // f3
        (t, d) => /* body of f3 goes here */
    );

Thoughts? #3 is unidiomatic, but it seems rather safe and clean. One question is whether there are any performance concerns with it. I don't usually need dynamic dispatch, and I'd prefer that these function bodies get statically inlined in places where the concrete implementing type is known statically. Is #2 better in that regard?
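[Editor's note: As a rough, self-contained illustration of option #3 in use, here is a hedged sketch. The names Demo, Bar, and IntT are invented for the example and are not from the question, and the delegate fields are made public so they can be called from outside the class, which the question's fragment leaves implicit.]

    using System;

    // Option #3: a first-class method table for the abstract type T.
    public sealed class TDict<T>
    {
        public readonly Func<T, int> F1;
        public readonly Func<string, T> F2;
        public readonly Func<T, double, T> F3;

        public TDict(Func<T, int> f1, Func<string, T> f2, Func<T, double, T> f3)
        {
            F1 = f1; F2 = f2; F3 = f3;
        }
    }

    public static class Demo
    {
        // A generic consumer: works for *any* implementation of the spec.
        public static int Bar<T>(TDict<T> td, string s, double d)
        {
            T t = td.F2(s);      // string -> T
            T t2 = td.F3(t, d);  // T * double -> T (same T, by construction)
            return td.F1(t2);    // T -> int
        }

        // One concrete implementation: T = int, chosen purely for brevity.
        private static readonly TDict<int> IntT = new TDict<int>(
            t => t,                 // f1
            s => int.Parse(s),      // f2
            (t, d) => t + (int)d);  // f3

        public static void Main()
        {
            Console.WriteLine(Bar(IntT, "40", 2.0)); // prints 42
        }
    }

Note that calls go through delegate fields, so the JIT generally cannot inline them the way it can devirtualize a sealed override; that bears directly on the inlining question at the end of the post.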


  • What's the best-practice way to update an Adapter's underlying data?

    - by skyler
I'm running into an IllegalStateException when updating a List that backs an Adapter (it might be an ArrayAdapter or an extension of BaseAdapter, I don't remember). I don't have the exact text of the exception at the moment, but it says something to the effect of the List's content having changed without the Adapter having been notified of the change.

This List may be updated from a thread other than the UI (main) thread. After I update this list (adding an item), I call notifyDataSetChanged. The issue seems to be that the Adapter, or the ListView attached to the Adapter, attempts to update itself before this method is invoked. When this happens, the IllegalStateException is thrown. If I set the ListView's visibility to GONE before the update, then back to VISIBLE, no error occurs, but this isn't always practical.

I read somewhere that you cannot modify the underlying List from another thread. This would seem to limit an MVC pattern, because with this particular List I want to add items from different threads. I assumed that as long as I called notifyDataSetChanged() I'd be safe -- that the Adapter didn't revisit the underlying List until this method was invoked -- but this doesn't seem to be the case.

I suppose what I'm asking is: can it be safe to update the underlying List from threads other than the UI thread? Additionally, if I want to modify the data within an Adapter, should I modify the underlying List or the Adapter itself (via its add(), etc. methods)? Modifying the data through the Adapter seems wrong.

I came across a thread on another site from someone who seems to be having a similar problem to mine: http://osdir.com/ml/Android-Developers/2010-04/msg01199.html (this is from where I grabbed the Visibility.GONE and .VISIBLE idea).

To give you a better idea of my particular problem, I'll describe a bit of how my List, Adapter, etc. are set up. I have an object named Queue that contains a LinkedList. Queue extends Observable, and when things are added to its internal list through its methods, I call setChanged() and notifyObservers(). This Queue object can have items added or removed from any number of threads. I have a single "queue view" Activity that contains an Adapter. This Activity, in its onCreate() method, registers an Observer listener on my Queue object, and in the Observer's update() method I call notifyDataSetChanged() on the Adapter.

I added a lot of log output and determined that when this IllegalStateException occurs, my Observer callback is never invoked. So it's as if the Adapter notices the List's change before the Observable has a chance to notify its Observers and call my method telling the Adapter that the contents have changed.

So I suppose what I'm asking is: is this a good way to rig up an Adapter? Is this a problem because I'm updating the Adapter's contents from a thread other than the UI thread? If that is the case, I may have a solution in mind (give the Queue object a Handler to the UI thread when it's created, and make all List modifications through that Handler), but this seems improper. I realize that this is a very open-ended post, but I'm a bit lost on this and would appreciate any comments on what I've written.
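[Editor's note: Since the question already sketches a possible fix (funnel all List modifications through a Handler bound to the UI thread), here is a minimal, hypothetical sketch of that approach. UiSafeQueue and its methods are invented names for illustration; Handler, Looper, and BaseAdapter are the standard Android APIs. It assumes the usual Android rule that an Adapter's backing data may only be mutated on the UI thread, with notifyDataSetChanged() called in the same UI-thread message so the ListView never observes a stale count.]

    import android.os.Handler;
    import android.os.Looper;
    import android.widget.BaseAdapter;

    import java.util.LinkedList;

    // Hypothetical queue: any thread may call add(), but the backing list is
    // only ever touched on the UI thread, immediately followed by the
    // notifyDataSetChanged() call.
    public class UiSafeQueue<T> {
        private final LinkedList<T> items = new LinkedList<T>();  // read by the Adapter
        private final Handler uiHandler = new Handler(Looper.getMainLooper());
        private final BaseAdapter adapter;

        public UiSafeQueue(BaseAdapter adapter) {
            this.adapter = adapter;
        }

        /** Safe to call from any worker thread. */
        public void add(final T item) {
            uiHandler.post(new Runnable() {
                @Override
                public void run() {
                    items.add(item);                 // mutate on the UI thread only
                    adapter.notifyDataSetChanged();  // notify in the same message
                }
            });
        }

        /** Must only be called from the UI thread (e.g., by the Adapter's getItem/getCount). */
        public T get(int position) { return items.get(position); }
        public int size() { return items.size(); }
    }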


  • What language/framework (technology) to use for website (flash games portal)

    - by cripox
Hello, I know there are a lot of similar questions on the net, but because I am a newbie in web development I didn't find a solution to my specific problem. I am planning on creating a flash games portal from scratch. There is a big chance that there will be big traffic from the beginning (millions of pageviews). I want to reduce the server costs as much as possible, but at the same time not be tied to an expensive contract, as there is a chance that the project will not be as successful as I hope, and in that case the money would be very little.

The question is: what technology should I use? I don't know any web dev technology yet, so it doesn't matter which one I will have to learn. My web dev experience is a little PHP 8 years ago; since then I have programmed in C++/Java, in game and mobile development. I like Java and C syntax and those languages very much, and I tend to dislike dynamic typing or non-robust scripting (like PHP), but I can get along if those are the best choices. The candidates are now:

    - Grails (my best bet for now)
    - Ruby on Rails
    - CakePHP
    - Other technologies (Google App Engine, Python/Django, etc.)

I was considering at first using pure C and compiling the web app on the server, just to squeeze more from the servers, but I soon understood that this is overkill. Next my eyes landed on Ruby, as there is a lot of buzz about its ease of use. Then I discovered Grails and looked at Java, because it is said to be "faster". But I don't know what "faster" really means for my needs, so here comes the first question:

1) What will be my biggest consumption on the server, other than bandwidth, for a lot of flash content requests? Is it memory? I heard that Java needs a lot of memory, but is faster. Is it CPU? I am planning to take some daily VPS.NET nodes at first, to see if there is demand; if the "spike" is permanent, I will move to a dedicated server (serverloft.com has some good offers), and otherwise remain with fewer nodes. I was also considering developing on Google App Engine -- cheap or free hosting to use at first, so I can test my assumptions, and also very easy to use (no need for sysadmin work) -- but the costs become high with heavier use (3 million games played per month, times some MB each). And the issue with Google is that it locks me into that technology. My other concern is scalability (not only for traffic/users, but in terms of adding functionality).

My plan is to release a functional site in just 4 weeks (just a basic frontend and some quick basic backend, so I am able to modify some things and add games manually), and then to build it up and add more things to it. I am planning to take a slightly different approach than other portals, so I need to write it from scratch (an off-the-shelf script will not do).

2) Will Grails take much more resources than RoR or PHP, server-wise? I heard that building on a Java stack is hardware-expensive and overkill unless you're building a banking application. My application will not be very complex (I hope, and I will try to keep it that way), but it will have a lot of traffic. I also took into account using a CDN for files, but the cheapest CDN I found was 5c/GB (vps.net), while the cost per GB on serverloft (http://www.serverloft.com/dedizierte-server/server-details.php?products=4) is only 1.79 cents/GB and comes with the other resources as well. I am new to this domain (web).
I have been learning the ropes and searching the web for about half a year, but I don't have any real practical experience, so I know that I must have some naive ideas and other blind spots I don't yet know about. Please give me any advice you want regarding anything, not just the specific questions asked. And thank you so much for such a great community!


  • SQL SERVER – Securing TRUNCATE Permissions in SQL Server

    - by pinaldave
Download the script of this article from here. On December 11, 2010, Vinod Kumar, a Databases & BI technology evangelist from Microsoft Corporation, graced Ahmedabad by spending some time with the community during the Community Tech Days (CTD) event. As he was running through a few demos, Vinod asked the audience one of the most fundamental and common interview questions -- "What is the difference between a DELETE and a TRUNCATE?"

Ahmedabad SQL Server User Group expert Nakul Vachhrajani has come up with an excellent solution to the same. I must congratulate Nakul for this excellent solution, and as an encouragement to User Group members, I am publishing his article here.

Nakul Vachhrajani is a Software Specialist and systems development professional with Patni Computer Systems Limited. He has functional experience spanning legacy code deprecation, system design, documentation, development, implementation, testing, maintenance and support of complex systems, providing business intelligence solutions, database administration, performance tuning, optimization, product management, release engineering, process definition and implementation. He has a comprehensive grasp of database administration, development and implementation with MS SQL Server and C, C++, Visual C++/C#. He has about 6 years of total experience in information technology. Nakul is a member of the Ahmedabad and Gandhinagar SQL Server User Groups, and contributes to the community by actively participating in multiple forums and websites like SQLAuthority.com, BeyondRelational.com, SQLServerCentral.com and many others. Please note: The opinions expressed herein are Nakul's own personal opinions and do not represent his employer's view in any way.

All data everywhere on Earth goes through a series of four distinct operations, identified by the words CREATE, READ, UPDATE and DELETE -- or simply, CRUD. In Microsoft SQL Server terms, the process goes like this: INSERT, SELECT, UPDATE and DELETE/TRUNCATE. Quite a few interesting responses were received and evaluated live during the session. To summarize them, the most important similarity that came out was that both DELETE and TRUNCATE participate in transactions. The major differences (not all) that came out of the exercise were:

DELETE:
- DELETE supports a WHERE clause
- DELETE removes rows from a table, row-by-row
- Because DELETE moves row-by-row, it acquires a row-level lock
- Depending upon the recovery model of the database, DELETE is a fully-logged operation
- Because DELETE moves row-by-row, it can fire off triggers

TRUNCATE:
- TRUNCATE does not support a WHERE clause
- TRUNCATE works by directly deallocating the individual data pages of a table
- TRUNCATE directly acquires a table-level lock. (Because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
- TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database
- Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)

Finally, Vinod popped the big homework question that must be critically analyzed: "We know that we can restrict a DELETE operation to a particular user, but how can we restrict the TRUNCATE operation to a particular user?"

After returning home and having a nice cup of coffee, I noticed that my gray cells immediately started to work. Below is the result of my research. As is always said, the devil is in the details.
Upon looking at the Permissions section for the TRUNCATE statement in Books Online, the following jumps right out: "The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause."

Now, what does this mean? Unlike DELETE, one cannot directly assign permissions to a user or set of users allowing or revoking TRUNCATE rights. However, there is a way to circumvent this. It is important to recall that in Microsoft SQL Server, database engine security surrounds the concept of a "securable", which is any object like a table, stored procedure, trigger, etc. Rights are assigned to a principal on a securable. Refer to the image below (taken from SQL Server Books Online).

SETTING UP THE ENVIRONMENT - (01A_Truncate Table Permissions.sql; script provided at the end of the article)

By the end of this demo, one user will be able to do all the CRUD operations except TRUNCATE, and the other will only be able to execute the TRUNCATE. All you will need for this test is any edition of SQL Server 2008. (With minor changes, these scripts can be made to work with SQL 2005.) We begin by creating the following:
1. A test database
2. Two database roles, with associated logins and users
3. A test table in the test database, populated with some data. I am using row constructors, which are new to SQL 2008.

Creating the modules that will be used to enforce permissions:
1. We have already created one of the modules that we will be assigning permissions to. That module is the table: TruncatePermissionsTest
2. We will now create two stored procedures; one is for the DELETE operation and the other for the TRUNCATE operation. Please note that for all practical purposes, the end result is the same -- all data from the table TruncatePermissionsTest is removed.

Assigning the permissions

Now comes the most important part of the demonstration -- assigning permissions. A permissions matrix can be worked out as shown in the SECURITY MATRIX comment in the script below. To apply the security rights, we use the GRANT and DENY clauses. That's it! We are now ready for our big test!

THE TEST (01B_Truncate Table Test Queries.sql; script provided at the end of the article)

I will now need two separate SSMS connections, one with the login AllowedTruncate and the other with the login RestrictedTruncate. Running the test is simple; all that's required is to run through the script 01B_Truncate Table Test Queries.sql. What I will demonstrate here via screenshots is the behavior of SQL Server when logged in as the AllowedTruncate user. There are a few other combinations than what are highlighted here. I will leave it to the reader to explore the behavior of the RestrictedTruncate user and these additional scenarios, as a form of self-study.
1. Testing SELECT permissions
2. Testing TRUNCATE permissions (Remember, "deny by default"?)
3. Trying to circumvent security by attempting to TRUNCATE the table using the stored procedure

Hence, we have now proved that a user can indeed be assigned permissions specifically for TRUNCATE. I also hope that the above has sparked curiosity towards putting some security around the potentially "destructive" operations of DELETE and TRUNCATE. I would like to wish each and every one of the readers a very happy and secure time with Microsoft SQL Server.

(Please find below the scripts -- 01A_Truncate Table Permissions.sql and 01B_Truncate Table Test Queries.sql -- that have been used in this demonstration. Please note that these scripts contain purely test-level code only. These scripts must not, at any cost, be used in the reader's production environments.)

01A_Truncate Table Permissions.sql

/* *****************************************************************************************************************
Developed By          : Nakul Vachhrajani
Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user
How to Use            : 1. Run through, step-by-step, the sequence up to Step 08 to create a test database
                        2. Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two
                           different SSMS windows, one where you have logged in as 'RestrictedTruncate', and the
                           other as 'AllowedTruncate'
                        3. Come back to "Truncate Table Permissions.sql"
                        4. Execute Step 10 to clean up!
Modifications         : December 13, 2010 - NAV - Updated to add a security matrix and improve code readability
                                                  when applying security
                        December 12, 2010 - NAV - Created
***************************************************************************************************************** */

-- Step 01: Create a new test database
CREATE DATABASE TruncateTestDB
GO

USE TruncateTestDB
GO

-- Step 02: Add roles and users to demonstrate the security of the Truncate operation
-- 2a. Create the new roles
CREATE ROLE AllowedTruncateRole;
GO
CREATE ROLE RestrictedTruncateRole;
GO

-- 2b. Create new logins
CREATE LOGIN AllowedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
GO
CREATE LOGIN RestrictedTruncate WITH PASSWORD = 'truncate@2010', CHECK_POLICY = ON
GO

-- 2c. Create new users using the roles and logins created above
CREATE USER TruncateUser FOR LOGIN AllowedTruncate WITH DEFAULT_SCHEMA = dbo
GO
CREATE USER NoTruncateUser FOR LOGIN RestrictedTruncate WITH DEFAULT_SCHEMA = dbo
GO
-- 2d. Add the newly created users to the newly created roles
sp_addrolemember 'AllowedTruncateRole','TruncateUser'
GO
sp_addrolemember 'RestrictedTruncateRole','NoTruncateUser'
GO

-- Step 03: Change over to the test database
USE TruncateTestDB
GO

-- Step 04: Create a test table within the test database
CREATE TABLE TruncatePermissionsTest (Id INT IDENTITY(1,1), Name NVARCHAR(50))
GO

-- Step 05: Populate the required data
INSERT INTO TruncatePermissionsTest VALUES (N'Delhi'), (N'Mumbai'), (N'Ahmedabad')
GO

-- Step 06: Encapsulate the DELETE within another module
CREATE PROCEDURE proc_DeleteMyTable
WITH EXECUTE AS SELF
AS
DELETE FROM TruncateTestDB..TruncatePermissionsTest
GO

-- Step 07: Encapsulate the TRUNCATE within another module
CREATE PROCEDURE proc_TruncateMyTable
WITH EXECUTE AS SELF
AS
TRUNCATE TABLE TruncateTestDB..TruncatePermissionsTest
GO

-- Step 08: Apply security
/* *****************************SECURITY MATRIX***************************************
===================================================================================
Object                  | Permissions |                  Login
                        |             | RestrictedTruncate   | AllowedTruncate
                        |             | User: NoTruncateUser | User: TruncateUser
===================================================================================
TruncatePermissionsTest | SELECT,     |       GRANT          |      (Default)
                        | INSERT,     |                      |
                        | UPDATE,     |                      |
                        | DELETE      |                      |
------------------------+-------------+----------------------+---------------------
TruncatePermissionsTest | ALTER       |       DENY           |      (Default)
------------------------+-------------+----------------------+---------------------
proc_DeleteMyTable      | EXECUTE     |       GRANT          |       DENY
------------------------+-------------+----------------------+---------------------
proc_TruncateMyTable    | EXECUTE     |       DENY           |       GRANT
------------------------+-------------+----------------------+---------------------
*****************************SECURITY MATRIX*************************************** */

/* Table: TruncatePermissionsTest */
GRANT SELECT, INSERT, UPDATE, DELETE ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
GO
DENY ALTER ON TruncateTestDB..TruncatePermissionsTest TO NoTruncateUser
GO

/* Procedure: proc_DeleteMyTable */
GRANT EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO NoTruncateUser
GO
DENY EXECUTE ON TruncateTestDB..proc_DeleteMyTable TO TruncateUser
GO

/* Procedure: proc_TruncateMyTable */
DENY EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO NoTruncateUser
GO
GRANT EXECUTE ON TruncateTestDB..proc_TruncateMyTable TO TruncateUser
GO

-- Step 09: Test
-- Switch over to "Truncate Table Test Queries.sql" and execute it step-by-step in two different SSMS windows:
--    1. one where you have logged in as 'RestrictedTruncate', and
--    2. the other as 'AllowedTruncate'

-- Step 10: Cleanup
sp_droprolemember 'AllowedTruncateRole','TruncateUser'
GO
sp_droprolemember 'RestrictedTruncateRole','NoTruncateUser'
GO
DROP USER TruncateUser
GO
DROP USER NoTruncateUser
GO
DROP LOGIN AllowedTruncate
GO
DROP LOGIN RestrictedTruncate
GO
DROP ROLE AllowedTruncateRole
GO
DROP ROLE RestrictedTruncateRole
GO
USE MASTER
GO
DROP DATABASE TruncateTestDB
GO

01B_Truncate Table Test Queries.sql

/* *****************************************************************************************************************
Developed By          : Nakul Vachhrajani
Functionality         : This demo is focused on how to allow only TRUNCATE permissions to a particular user
How to Use            : 1. Switch over to this from "Truncate Table Permissions.sql", Step #09
                        2. Execute this step-by-step in two different SSMS windows
                           a. One where you have logged in as 'RestrictedTruncate', and
                           b. The other as 'AllowedTruncate'
                        3. Return to "Truncate Table Permissions.sql"
                        4. Execute Step 10 to clean up!
Modifications         : December 12, 2010 - NAV - Created
***************************************************************************************************************** */

-- Step 09A: Switch to the test database
USE TruncateTestDB
GO

-- Step 09B: Ensure that we have valid data
SELECT * FROM TruncatePermissionsTest
GO
-- (Expected: The following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 1
-- The SELECT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

-- Step 09C: Attempt to truncate data from the table without using the stored procedure
TRUNCATE TABLE TruncatePermissionsTest
GO
-- (Expected: The following error will occur)
-- Msg 1088, Level 16, State 7, Line 2
-- Cannot find the object "TruncatePermissionsTest" because it does not exist or you do not have permissions.

-- Step 09D: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'London'), (N'Paris'), (N'Berlin')
GO
-- (Expected: The following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 1
-- The INSERT permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.

-- Step 09E: Attempt to truncate data from the table using the stored procedure
EXEC proc_TruncateMyTable
GO
-- (Expected: Will execute successfully as the 'AllowedTruncate' user; will error out as under with 'RestrictedTruncate')
-- Msg 229, Level 14, State 5, Procedure proc_TruncateMyTable, Line 1
-- The EXECUTE permission was denied on the object 'proc_TruncateMyTable', database 'TruncateTestDB', schema 'dbo'.

-- Step 09F: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'Madrid'), (N'Rome'), (N'Athens')
GO

-- Step 09G: Attempt to delete data from the table without using the stored procedure
DELETE FROM TruncatePermissionsTest
GO
-- (Expected: The following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Line 2
-- The DELETE permission was denied on the object 'TruncatePermissionsTest', database 'TruncateTestDB', schema 'dbo'.
-- Step 09H: Regenerate test data
INSERT INTO TruncatePermissionsTest VALUES (N'Spain'), (N'Italy'), (N'Greece')
GO

-- Step 09I: Attempt to delete data from the table using the stored procedure
EXEC proc_DeleteMyTable
GO
-- (Expected: The following error will occur if logged in as "AllowedTruncate")
-- Msg 229, Level 14, State 5, Procedure proc_DeleteMyTable, Line 1
-- The EXECUTE permission was denied on the object 'proc_DeleteMyTable', database 'TruncateTestDB', schema 'dbo'.

-- Step 09J: Close this SSMS window and return to "Truncate Table Permissions.sql"

Thank you, Nakul, for taking up the challenge and proving that the Ahmedabad and Gandhinagar SQL Server User Groups have the talent to solve difficult problems.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Best Practices, Pinal Dave, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • CodePlex Daily Summary for Thursday, March 01, 2012

CodePlex Daily Summary for Thursday, March 01, 2012

Popular Releases

• Metodología General Ajustada - MGA: 01.09.08: John's changes (translated from Spanish): Changes in the MDI: enabled the Print menu and icon. Temporarily disabled the Help menu and the Import and Export options of the Projects menu. Integration with Crystal Reports code. Try-Catch validations when generating the reports, customization of the forms' styles and buttons, and validation of the report-type selection. Creation of an installer with ALL the changes, and creation of the folders associated with the RPTs.

• WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.01: What's new: Added a new plugin, "Ventrian News Articles Link Selector", to select an article link from the News Article module (this plugin is not visible by default in your toolbar; you need to manually add 'newsarticleslinks' to your toolbar set) http://www.watchersnet.de/Portals/0/screenshots/dnn/CKEditorNewsArticlesLinks.png File-Browser: Added paging to the files list. You can define the page size in the options (default value: 20) http://www.watchersnet.de/Portals/0/screenshots/dnn/CKEdito...

• MyRouter (Virtual WiFi Router): MyRouter 1.0 (Beta): A friendlier user interface. A logger file to catch exceptions, so you may send it to us to improve and fix any bugs that may occur. A feedback form, because we always love hearing what you guys think of MyRouter. A check-for-update menu item so you can stay up to date with the latest changes. A Facebook fan page so you may spread the word and share MyRouter with friends and family. And many other exciting features we're sure you're going to love!

• WPF Sound Visualization Library: WPF SVL 0.3 (Source, Binaries, Examples, Help): Version 0.3 of WPFSVL. This includes three new controls: an equalizer, a digital clock, and a time editor.

• Thai Flood Watch: Thai Flood Watch - Source: Non-commercial use only. This project is supported by the Department of Computer Science, Khon Kaen University, Thailand.

• ZXing.Net: ZXing.Net 0.4.0.0: Sync with rev. 2196 of the Java version. Important fix for RGBLuminanceSource when generating barcode bitmaps. Windows Phone demo client (only tested with the emulator, because I don't have a Windows Phone). Barcode generation support for the Windows Forms demo client. Webcam support for the Windows Forms demo client.

• Orchard Project: Orchard 1.4: Please read our release notes for Orchard 1.4: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-Notes

• .NET Assembly Information: Assembly Information 2.1.0.1: Fixed the issue in which AnyCPU binaries were shown as 32-bit. Added support to show the errors in case some DLLs failed to load.

• FluentData - Micro ORM with a fluent API that makes it simple to query a database: FluentData version 1.2: New features: QueryValues method; added support for automapping to enumerations (both int and string are supported). Fixed 2 reported issues.

• NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.15: 3.6.0.15, 28-Feb-2012. Fix: "The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state." Work item 10435: http://netsqlazman.codeplex.com/workitem/10435 Fix: Made StorageCache thread safe. Thanks to tangrl. Fix: Members property of SqlAzManApplicationGroup is not functioning. Thanks to tangrl. Work item 10267: http://netsqlazman.codeplex.com/workitem/10267 Fix: Indexers are making database calls.
Thanks to t...

• SCCM Client Actions Tool: Client Actions Tool v1.1: SCCM Client Actions Tool v1.1 is the latest version. It comes with the following changes since the last version: Added a stop button to stop the ongoing process. Added action "Query update status". Added option "saveOnlineComputers" in config.ini to enable saving the list of online computers from the last session. Default value for "LatestClientVersion" set to SP2 R3 (4.00.6487.2157). Wuauserv service manual startup mode is considered healthy on Windows 7. Errors are now suppressed in checkReleases...

• Kinect PowerPoint Control: Kinect PowerPoint Control v1.1: Updated for Kinect SDK 1.0.

• SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8: API updates: SOLID Extract method for archives (7Zip and RAR). The ExtractAllEntries method on Archive classes will extract archives as a streaming file; this can offer better 7Zip extraction performance if any of the entries are solid. The IsSolid method on 7Zip archives will return true if any are solid. Removed: IExtractionListener was removed in favor of events; the unit tests show an example. Bug fixes: PPMd passes tests, plus other fixes (thanks, Pavel). Zip used to always write a post descri...

• Social Network Importer for NodeXL: SocialNetImporter (v.1.3): This new version includes: Download new networks for Facebook fan pages. New options for downloading more posts. Bug fixes. To use the new graph data provider, do the following: Unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e. "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns"). Open the NodeXL template, and you can access the new importer from the "Import" menu.

• ASP.NET REST Services Framework: Release 1.1 - Standard version: Beginning from v1.1, the REST-services Framework is compatible with the ASP.NET Routing model as well as with the CRUD (Create, Read, Update, and Delete) principle. These two are often important when building REST API functionality within your application. It also includes the ability to apply Filters to a class to target all WebRest methods, as well as some performance enhancements. The new version includes a Metadata Explorer, providing the ability to explore the existing services, which becomes essential as the number ...

• SQL Live Monitor: SQL Live Monitor 1.31: A quick fix to make this version work with SQL 2012. Version 2 already has 2012 working, but I am still developing the UI in version 2, so this is just an interim fix to allow users to monitor SQL 2012.

• Content Slider Module for DotNetNuke: 01.02.00: This release has the following updates and new features: Feature: One-click enabling of pager setting. Feature: Cache sliders for performance. Feature: Configurable cache setting. Enhancement: Transitions can be selected. Bug: Secure folder images not viewable. Bug: Sliders disappear on postback. Bug: Remote images cause error. Bug: Deleted images cause error. System requirements: DotNetNuke v06.00.00 or newer; .NET Framework v3.5 SP1 or newer; SQL Server 2005 or newer.

• Image Resizer for Windows: Image Resizer 3 Preview 3: Here is yet another iteration toward what will eventually become Image Resizer 3. This release is stable. However, I'm calling it a preview since there are still many features I'd like to add before calling it complete. Updated on February 28 to fix an issue with installing on multi-user machines. As usual, here is my progress report.
Done: Preview 3: Fix: 3206, 3076, 3077, 5688. Fix: 7420. Fix: 7527. Fix: 7576, 7612. Preview 2: 6308, 6309. Fix: 7339. Fix: 7357. Preview 1: UI...

• Finestra Virtual Desktops: 2.5.4500: This is a bug fix release for version 2.5. It fixes several things and adds a couple of minor features. See the 2.5 release notes for more information on the major new features in that version. Important: if Finestra crashes on startup for you, you must install the Visual C++ 2010 runtime from http://www.microsoft.com/download/en/details.aspx?id=5555. Fixes a bug with window animations not refreshing the screen on XP and with DWM off. Fixes a bug with crashing on XP due to a bug in t...

• Media Companion: MC 3.432b Release: General: Now remembers window location. Catching a few more exceptions when an image is blank. TV: A couple of UI tweaks. Movies: Fixed the actor name displaying HTML. Fixed a crash when using Save files as "movie.nfo", "movie.tbn", & "fanart.jpg". New CSV template for the HTML output function. Added <createdate> tag for HTML output. A couple of UI tweaks. Known issues: Multi-episodes are not handled correctly in MC. The created nfo is valid, but they are not displayed in MC correctly & saving the...

New Projects

• abac: abac cn website

• AION Launcher: Simple AION launcher... just edit the background image of your choosing inside the code, and other things such as the links for the buttons and the IP address and port of the server.

• AXTFSTool: Dynamics AX tool that connects to your project's TFS and lists the objects your colleagues have changed. Written in C#, still under development and improvement. Useful for team leaders, deployment managers, etc.

• cookieTopo: Topo map viewer

• CrmFetchKit.js: Simple library that allows the execution of fetchxml queries via JavaScript for Dynamics CRM 2011 (using the new WCF endpoints). Like the CrmRestKit, this framework uses the promise/A capacities of jQuery. The code and the idea for this framework are based on the CrmServiceToolkit (http://crmtoolkit.codeplex.com/) developed by Daniel Cai.

• cy univerX engine

• DNSAPI.NET: A common API for managing DNS servers on Windows. This project is based on the work I started back in 2002, when I needed to create a web front-end for Windows' DNS server using the .NET framework. The plan is to expand on the project and include support for the BIND server on Windows too.

• ego.net: ego.net

• fdTFS: Team Foundation Server source control plugin for FlashDevelop.

• GeoWPS: GeoWPS is an implementation of the OGC WPS. It will be developed in C#.

• IThink: A new project.

• King Garden: Boy King's .NET practical projects.

• King Garret: Boy King's .NET learning projects.

• LottoCheck: Follow Lotto

• Not-Terraria: This is a game like Terraria, but NOT Terraria.

• Password Protector: Password Protector

• SharePoint 2010 BlobCache Manager: Manage your web application's blobcache settings directly in the central administration.

• SharePoint 2010 SilverLight Multiple File Uploader: SharePoint 2010 Silverlight Multiple File Uploader for document libraries with metadata.

• Sharepoint Tool Collection: I want to integrate various utilities of SharePoint in one place, for easy working of the user or developer. Ex. 1: a utility which takes some params and a CSV file and uploads hundreds of items to a SharePoint list easily. Ex. 2: a utility to upload documents into a library. Etc.

• SQLCLR Cmd Exec Framework Example: For users of MS SQL Server, xp_cmdshell is a utility that we usually want to have disabled. However, there are still cases where calling a command line is needed.
This project provides a framework/example for making command line calls. It is not meant as an xp_cmdshell replacement but as a workaround.

• Symmetric Designs Python 3.2: Symmetric Designs for Python 3.2 helps graphical artists design and develop their own designs freestyle. It uses the pygame module for Python 3.2. It can also be analysed in order to get a grasp of graphics programming in Python.

• Terminsoft open CLR libraries: Terminsoft open CLR libraries. The first is Terminsoft.Intervals, intended for modeling sets of intervals with elements for which the comparison operation is defined. The second is Terminsoft.Syntax, intended for text parsing and transformation, built upon regular expressions.

• Thai Flood Watch: Thai Flood Watch provides useful, up-to-date, visual access to the major canals in Bangkok, Thailand, using data from the department of drainage and sewerage. Easily monitor river and canal flow information in the Bangkok area, right from your hand.

• TheNerd: Sample video game source code. Using Sunburn.

• Unity.WebAPI: A library that allows simple integration of Microsoft's Unity IoC container with ASP.NET's WebAPI. This project includes a bespoke DependencyResolver that creates a child container per HTTP request and disposes of all registered IDisposable instances at the end of the request.

• Wholemy.RemoteTouch: The project is a remote touch-sensitive keyboard with a customizable interface, which allows you to supplement control of another computer regardless of wires. For example, if you have a not-so-fast Tablet PC (the client) and a fast desktop computer (the server) connected over the network.

• WindowPlace: WindowPlace makes it possible to save window positions and sizes to a profile. Switching between profiles will effortlessly move and resize your windows. Helps improve productivity, especially for multi-monitor systems. Developed in C# using WPF and a few Windows API calls in the background.

• WP Error Manager (Devv.Core.WPErrorManager): Library to log, handle and report errors in Windows Phone 7 apps. Fully customizable and extremely easy to implement. Works with any WP7 app. Tested with the emulator, Nokia Lumia 800 and Samsung Focus Flash.

• WPMatic: Windows Phone 7 app to manage Homematic (eQ-3) devices. The app is like the Homematic Central Configuration Unit (CCU), in German.

• www.Nabaza.com Freeware and Ebooks: www.Nabaza.com freeware and ebooks by William R. Nabaza.

• Zap: Zap is a lightweight .NET communication framework. It is designed for programs running in a local area network. Zap provides a code generation tool that enables users to call remote methods and add/remove event listeners on remote objects, while hiding the lower-level details.


  • Metro: Introduction to CSS 3 Grid Layout

    - by Stephen.Walther
The purpose of this blog post is to provide you with a quick introduction to the new W3C CSS 3 Grid Layout standard. You can use CSS Grid Layout in Metro style applications written with JavaScript to lay out the content of an HTML page. CSS Grid Layout provides you with all of the benefits of using HTML tables for layout without requiring you to actually use any HTML table elements.

Doing Page Layouts without Tables

Back in the 1990's, if you wanted to create a fancy website, then you would use HTML tables for layout. For example, if you wanted to create a standard three-column page layout then you would create an HTML table with three columns like this:

    <table height="100%">
      <tr>
        <td valign="top" width="300px" bgcolor="red">
          Left Column, Left Column, Left Column, Left Column, Left Column,
          Left Column, Left Column, Left Column, Left Column
        </td>
        <td valign="top" bgcolor="green">
          Middle Column, Middle Column, Middle Column, Middle Column, Middle Column,
          Middle Column, Middle Column, Middle Column, Middle Column
        </td>
        <td valign="top" width="300px" bgcolor="blue">
          Right Column, Right Column, Right Column, Right Column, Right Column,
          Right Column, Right Column, Right Column, Right Column
        </td>
      </tr>
    </table>

When the table above gets rendered in a browser, you end up with a three-column layout in which the width of the left and right columns is fixed, and the width of the middle column expands or contracts depending on the width of the browser.

Sometime around the year 2005, everyone decided that using tables for layout was a bad idea. Instead of using tables for layout -- it was collectively decided by the spirit of the Web -- you should use Cascading Style Sheets instead. Why is using HTML tables for layout bad? Using tables for layout breaks the semantics of the TABLE element. A TABLE element should be used only for displaying tabular information such as train schedules or moon phases. Using tables for layout is bad for accessibility (the Web Content Accessibility Guidelines 1.0 are explicit about this), and using tables for layout is bad for separating content from layout (see http://CSSZenGarden.com). Post 2005, anyone who used HTML tables for layout was encouraged to hang their head in shame.

That's all well and good, but the problem with using CSS for layout is that it can be more difficult to work with than HTML tables. For example, to achieve a standard three-column layout, you either need to use absolute positioning or floats. Here's a three-column layout with floats:

    <style type="text/css">
      #container { min-width: 800px; }
      #leftColumn { float: left; width: 300px; height: 100%; background-color: red; }
      #middleColumn { background-color: green; height: 100%; }
      #rightColumn { float: right; width: 300px; height: 100%; background-color: blue; }
    </style>

    <div id="container">
      <div id="rightColumn">
        Right Column, Right Column, Right Column, Right Column, Right Column,
        Right Column, Right Column, Right Column, Right Column
      </div>
      <div id="leftColumn">
        Left Column, Left Column, Left Column, Left Column, Left Column,
        Left Column, Left Column, Left Column, Left Column
      </div>
      <div id="middleColumn">
        Middle Column, Middle Column, Middle Column, Middle Column, Middle Column,
        Middle Column, Middle Column, Middle Column, Middle Column
      </div>
    </div>

The page above contains four DIV elements: a container DIV which contains a leftColumn, middleColumn, and rightColumn DIV. The leftColumn DIV element is floated to the left and the rightColumn DIV element is floated to the right.
Notice that the rightColumn DIV appears in the page before the middleColumn DIV -- this unintuitive ordering is necessary to get the floats to work correctly (see http://stackoverflow.com/questions/533607/css-three-column-layout-problem).

The page above (almost) works with the most recent versions of most browsers. For example, you get the correct three-column layout in both Firefox and Chrome. And the layout mostly works with Internet Explorer 9, except for the fact that, for some strange reason, the min-width doesn't work, so when you shrink the width of your browser you can get an unwanted layout in which the middle (green) column bleeds to the left and right. People have solved these issues with more complicated CSS; for example, see http://matthewjamestaylor.com/blog/holy-grail-no-quirks-mode.htm. But, at this point, no one could argue that using CSS is easier or more intuitive than tables. It takes work to get a layout with CSS, and we know that we could achieve the same layout more easily using HTML tables.

Using CSS Grid Layout

CSS Grid Layout is a new W3C standard which provides you with all of the benefits of using HTML tables for layout without the disadvantage of using an HTML TABLE element. In other words, CSS Grid Layout enables you to perform table layouts using pure Cascading Style Sheets. The CSS Grid Layout standard is still in a "Working Draft" state (it is not finalized) and it is located here: http://www.w3.org/TR/css3-grid-layout/

The CSS Grid Layout standard is only supported by Internet Explorer 10, and there are no signs that any browser other than Internet Explorer will support this standard in the near future. This means that it is only practical to take advantage of CSS Grid Layout when building Metro style applications with JavaScript.

Here's how you can create a standard three-column layout using CSS Grid Layout:

    <!DOCTYPE html>
    <html>
    <head>
      <style type="text/css">
        html, body, #container { height: 100%; padding: 0px; margin: 0px; }
        #container {
          display: -ms-grid;
          -ms-grid-columns: 300px auto 300px;
          -ms-grid-rows: 100%;
        }
        #leftColumn { -ms-grid-column: 1; background-color: red; }
        #middleColumn { -ms-grid-column: 2; background-color: green; }
        #rightColumn { -ms-grid-column: 3; background-color: blue; }
      </style>
    </head>
    <body>
      <div id="container">
        <div id="leftColumn">
          Left Column, Left Column, Left Column, Left Column, Left Column,
          Left Column, Left Column, Left Column, Left Column
        </div>
        <div id="middleColumn">
          Middle Column, Middle Column, Middle Column, Middle Column, Middle Column,
          Middle Column, Middle Column, Middle Column, Middle Column
        </div>
        <div id="rightColumn">
          Right Column, Right Column, Right Column, Right Column, Right Column,
          Right Column, Right Column, Right Column, Right Column
        </div>
      </div>
    </body>
    </html>

When the page above is rendered in Internet Explorer 10, you get a standard three-column layout. The page contains four DIV elements: a container DIV which contains a leftColumn DIV, middleColumn DIV, and rightColumn DIV. The container DIV is set to grid display mode with the following CSS rule:

    #container {
      display: -ms-grid;
      -ms-grid-columns: 300px auto 300px;
      -ms-grid-rows: 100%;
    }

The display property is set to the value "-ms-grid". This property causes the container DIV to lay out its child elements in a grid. (Notice that you use "-ms-grid" instead of "grid". The "-ms-" prefix is used because the CSS Grid Layout standard is still preliminary.
This implementation only works with IE10, and it might change before the final release.)

The grid columns and rows are defined with the "-ms-grid-columns" and "-ms-grid-rows" properties. The style rule above creates a grid with three columns and one row. The left and right columns are fixed at 300 pixels. The middle column sizes itself automatically depending on the remaining space available. The leftColumn, middleColumn, and rightColumn DIVs are positioned within the container grid element with the following CSS rules:

    #leftColumn { -ms-grid-column: 1; background-color: red; }
    #middleColumn { -ms-grid-column: 2; background-color: green; }
    #rightColumn { -ms-grid-column: 3; background-color: blue; }

The "-ms-grid-column" property is used to specify the column associated with the element selected by the style sheet selector. The leftColumn DIV is positioned in the first grid column, the middleColumn DIV is positioned in the second grid column, and the rightColumn DIV is positioned in the third grid column. I find using CSS Grid Layout to be just as intuitive as using an HTML table for layout. You define your columns and rows, and then you position different elements within those columns and rows. Very straightforward.

Creating Multiple Columns and Rows

In the previous section, we created a super simple three-column layout containing only a single row. In this section, let's create a slightly more complicated layout which contains more than one row. The following page contains a header row, a content row, and a footer row. The content row contains three columns:

    <!DOCTYPE html>
    <html>
    <head>
      <style type="text/css">
        html, body, #container { height: 100%; padding: 0px; margin: 0px; }
        #container {
          display: -ms-grid;
          -ms-grid-columns: 300px auto 300px;
          -ms-grid-rows: 100px 1fr 100px;
        }
        #header {
          -ms-grid-column: 1;
          -ms-grid-column-span: 3;
          -ms-grid-row: 1;
          background-color: yellow;
        }
        #leftColumn { -ms-grid-column: 1; -ms-grid-row: 2; background-color: red; }
        #middleColumn { -ms-grid-column: 2; -ms-grid-row: 2; background-color: green; }
        #rightColumn { -ms-grid-column: 3; -ms-grid-row: 2; background-color: blue; }
        #footer {
          -ms-grid-column: 1;
          -ms-grid-column-span: 3;
          -ms-grid-row: 3;
          background-color: orange;
        }
      </style>
    </head>
    <body>
      <div id="container">
        <div id="header">
          Header, Header, Header
        </div>
        <div id="leftColumn">
          Left Column, Left Column, Left Column, Left Column, Left Column,
          Left Column, Left Column, Left Column, Left Column
        </div>
        <div id="middleColumn">
          Middle Column, Middle Column, Middle Column, Middle Column, Middle Column,
          Middle Column, Middle Column, Middle Column, Middle Column
        </div>
        <div id="rightColumn">
          Right Column, Right Column, Right Column, Right Column, Right Column,
          Right Column, Right Column, Right Column, Right Column
        </div>
        <div id="footer">
          Footer, Footer, Footer
        </div>
      </div>
    </body>
    </html>

In the page above, the grid layout is created with the following rule, which creates a grid with three rows and three columns:

    #container {
      display: -ms-grid;
      -ms-grid-columns: 300px auto 300px;
      -ms-grid-rows: 100px 1fr 100px;
    }

The header is created with the following rule:

    #header {
      -ms-grid-column: 1;
      -ms-grid-column-span: 3;
      -ms-grid-row: 1;
      background-color: yellow;
    }

The header is positioned in column 1 and row 1. Furthermore, notice that the "-ms-grid-column-span" property is used to span the header across three columns.

CSS Grid Layout and Fractional Units

When you use CSS Grid Layout, you can take advantage of fractional units.
Fractional units provide you with an easy way of dividing up the remaining space in a page. Imagine, for example, that you want to create a four-column page layout. You want the size of the first column to be fixed at 200 pixels, and you want to divide the remaining space among the remaining three columns, with the width of the second column equal to the combined width of the third and fourth columns. The following CSS rule creates four columns with the desired widths:

    #container {
      display: -ms-grid;
      -ms-grid-columns: 200px 2fr 1fr 1fr;
      -ms-grid-rows: 1fr;
    }

The fr unit represents a fraction. The grid above contains four columns. The second column is two times the size (2fr) of the third (1fr) and fourth (1fr) columns. When you use the fractional unit, the remaining space is divided up using fractional amounts (see the worked example at the end of this post). Notice that the single row is set to a height of 1fr. The single grid row gobbles up the entire vertical space. Here's the entire HTML page:

    <!DOCTYPE html>
    <html>
    <head>
      <style type="text/css">
        html, body, #container { height: 100%; padding: 0px; margin: 0px; }
        #container {
          display: -ms-grid;
          -ms-grid-columns: 200px 2fr 1fr 1fr;
          -ms-grid-rows: 1fr;
        }
        #firstColumn { -ms-grid-column: 1; background-color: red; }
        #secondColumn { -ms-grid-column: 2; background-color: green; }
        #thirdColumn { -ms-grid-column: 3; background-color: blue; }
        #fourthColumn { -ms-grid-column: 4; background-color: orange; }
      </style>
    </head>
    <body>
      <div id="container">
        <div id="firstColumn">
          First Column, First Column, First Column
        </div>
        <div id="secondColumn">
          Second Column, Second Column, Second Column
        </div>
        <div id="thirdColumn">
          Third Column, Third Column, Third Column
        </div>
        <div id="fourthColumn">
          Fourth Column, Fourth Column, Fourth Column
        </div>
      </div>
    </body>
    </html>

Summary

There is more in the CSS 3 Grid Layout standard than discussed in this blog post. My goal was to describe the basics. If you want to learn more, you can read through the entire standard at http://www.w3.org/TR/css3-grid-layout/

In this blog post, I described some of the difficulties that you might encounter when attempting to replace HTML tables with Cascading Style Sheets when laying out a web page. I explained how you can take advantage of the CSS 3 Grid Layout standard to avoid these problems when building Metro style applications using JavaScript. CSS 3 Grid Layout provides you with all of the benefits of using HTML tables for laying out a page without requiring you to use HTML table elements.
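To make the fr arithmetic concrete, here is a small worked example for the four-column grid above. The 1000px container width is an assumption chosen for round numbers; the fixed 200px track is subtracted first, and each fr share then receives a proportional slice of what remains:

    /* Assume the container is 1000px wide.
       -ms-grid-columns: 200px 2fr 1fr 1fr;

       Remaining space after fixed tracks: 1000px - 200px = 800px
       Total fraction shares: 2 + 1 + 1 = 4
       One share: 800px / 4 = 200px

       Resulting column widths:
         column 1: 200px (fixed)
         column 2: 2fr -> 400px
         column 3: 1fr -> 200px
         column 4: 1fr -> 200px */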


  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
I recently attended the most prestigious SQL Server event, SQLPASS, held Nov 8-11, 2010 in Seattle. I have only one expression for the event: Best Summit Ever. This year the summit was at its best. Instead of writing about my usual routine or the event itself, I am going to write about the interesting things I did and how I felt about them!

Trip to Seattle!

This was my second trip to Seattle this year, and the journey is always long. Here are the travel stats on how long it takes to get to Seattle:

- 24 hours official air time
- 36 hours total travel time (connection waits and airport commute)

Every time I travel to the USA I gain a day, and when I travel back home, I lose a day. In total, the traveling takes around 3 days. The journey is long and very exhausting. However, it is all worth it when you're attending an event like SQLPASS. Here are a few things I carry when I travel on a long journey:

- Dry snack packs -- I like to have some good Indian dry snacks along with me in my backpack so I can have my own snack when I want
- Amazon Kindle -- loaded with 80+ books
- A physical book -- usually a very easy read

I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer not to spend time in conversation with the person sitting next to me, because I usually end up listening to their biography, which I cannot blog about.

[Photo: Sheraton Seattle, SQLPASS]

In any case, I love going to Seattle, as the city is great and has everything a brilliant metropolis has to offer. The new Light Rail is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only a few meters (in the USA people count in blocks -- 3 blocks) away from the station. This time I saved USD 40 on each round trip thanks to the Light Rail.

Sessions I attended!

Well, I really wanted to attend most of the sessions, but there was the great dilemma of which ones to choose. There were many, many sessions to attend, and at any given time there was more than one good session being presented. I had decided to attend sessions in the area of performance tuning, and I attended quite a few sessions this year compared to what I was able to do last year. Here are a few names of the speakers whose sessions I attended (please note, the following great speakers are not listed in any order; I loved them and I enjoyed their sessions):

- Conor Cunningham
- Rushabh Mehta
- Buck Woody
- Brent Ozar
- Jonathan Kehayias
- Chris Leonard
- Bob Ward
- Grant Fritchey

I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session, but I have found that the insights learned in Conor Cunningham's sessions are the highlight of the PASS Summit.

[Photo: Rushabh Mehta at the SQLPASS keynote]

[Photo: Buck Woody and Brent Ozar]

I always like sessions where the speaker is close to the audience and has real-world experience. I think speakers who have worked in the real world deliver the best content and the most useful information.

Sessions I did not like!

Indeed, there were a few sessions I did not like, and I am not going to name them here. However, there were strong reasons I did not like them:

- The sessions were all theory and had no real-world connections.
All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) An “I am God” kind of attitude in the speakers For example, I attended a session of one very well-known speaker who is a specialist in one particular area. I was a bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people had left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends, spending time with them one on one, and LEARNING! Here is a quick list of the people I met during this event and spent more than 30 minutes with, talking with each of them about various subjects: Pinal Dave and Brad Schulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always a pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into a learning resource for every SQL Server Developer and Administrator in the world. I had a great in-depth discussion with him regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. It is worth meeting him every single time. Glenn Berry – A real friend to everybody. He is a very simple person and very true to his heart. I think there is not a single person in the whole community who does not like him. He is a friend to all, and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMVs. Brad Schulz – I had always wanted to meet him but never got the chance until today. I had a great time meeting him in person and we spent a considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas, but I enjoy reading his blog and having him share his wisdom with me. Jonathan Kehayias – He is a drill sergeant in the US Army. If you get the impression that he is a giant with a very strong personality – you are wrong. He is a very kind and soft-spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got a very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is unique. I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I talked with him for a long time, and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and fourth normal form. Honestly, I did not know them earlier, but now I do. Erland Sommarskog – This man needs no introduction; he is very well known and very clear in conveying his ideas. I have learned a lot from him over the course of the year. Every time I meet him, I learn something new, and this time was no exception. 
Joe Webb – Joey is all about community and people; we had an interesting conversation about community, MVPs, and how one can be helpful to the community without losing passion over a long time. It is always pleasant to talk to him and, of course, I had a fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He gave me lots of insight into how one can write a book and how he keeps his books simple to appeal to all readers. A wonderful person and a great friend. Ola Hallengren - I did not know he was coming to the summit. I had a great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be an integral part of the SQL Server community and PASS HQ. It was wonderful to meet her again and re-connect. She is a wonderful person and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish-speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I attended almost all of them. Every party had a different theme, but the goal of all the parties was the same – networking. Here are a few parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had a great time meeting people at the various parties. There were a few people everywhere – well, I will say I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were a few meetings marked “NDA.” Someone asked me “why are these things NDA?” My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like a child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft toys for my daughter and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something, and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and a fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear. I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name; how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian, because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge, which says India. So did you really fly from India? 
– Yes, because I get seasick, so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter – she looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions that I will have to write another blog post about them. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database into a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests into the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll mostly be using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and is what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our setup from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open-source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: Though we’ve installed the SQL Test product, we haven’t yet installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown on the left-hand side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database, so we should again check this in to source control, adding comments as required:   It’s also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
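Since the SSMS designer screenshots from the original post don’t reproduce here, the end state of this step looks roughly like the following T-SQL. This is a sketch only: the walkthrough shows just an id column and an Email column, so the data types, the sample rows, and the constraint name below are assumptions:

-- Sketch: schema and reference data equivalent to the SSMS steps described below.
-- Column types and the constraint name are assumed; only id and Email appear in the post.
CREATE TABLE dbo.Email (
    id INT NOT NULL,
    Email VARCHAR(255) NULL
);

-- Reference data (the post uses Beatles-themed addresses; these rows are illustrative stand-ins)
INSERT INTO dbo.Email (id, Email) VALUES (1, 'john.lennon@beatles.com');
INSERT INTO dbo.Email (id, Email) VALUES (2, 'ringo.starr@beatles.com');

-- The primary key that the walkthrough adds via the table designer ("Set Primary Key")
ALTER TABLE dbo.Email ADD CONSTRAINT PK_Email PRIMARY KEY (id);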
First of all I need to add some data into my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option:   In the next screen, link the data in the Email table by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t, of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really those of the Fab Four) – but just a very basic check of the format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
This needs to be copied and pasted into the query window, to replace the default given by tSQLt:
-- Basic email check test
ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
AS
BEGIN
    SET NOCOUNT ON

    Declare @Output VarChar(max)
    Set @Output = ''

    SELECT @Output = @Output + Email + Char(13) + Char(10)
    FROM dbo.Email
    WHERE email NOT LIKE '%_@__%.__%'

    If @Output > ''
    Begin
        Set @Output = Char(13) + Char(10) + @Output
        EXEC tSQLt.Fail @Output
    End
END;
Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format: Now, re-run the same SQL Test as before and you’ll see the following: Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS: (leave the email address as invalid for now, for the next steps). The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client):   Checking that the Tests are Running as Integration Tests After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder: Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client: <!-- tSQLt tests --> <!-- Optional --> <!-- To run tSQLt tests in source control for the database, enter true. --> <enableTsqlt>true</enableTsqlt> Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence the different address and build number): - superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:   21-Jun-2013 11:35:19 Build FAILED. 
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run-through of the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes; how can we now deploy these to target sites?  
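A footnote the original post doesn’t cover, but which is standard tSQLt usage: while iterating on a test locally you don’t need the CI server at all, because the framework ships with stored procedures for running tests straight from a query window in SSMS:

-- Run every test in the MyChecks test class (the class created in this walkthrough)
EXEC tSQLt.Run 'MyChecks';

-- Or run all test classes installed in the database
EXEC tSQLt.RunAll;

This keeps the red/green cycle tight before you push the change through source control and the build server.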

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report, like formatting or an additional data visualization, to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and there was even acknowledgment that users would get the data they want, even if it meant they would have to enter it manually into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up with my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here. 
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • Metro: Introduction to CSS 3 Grid Layout

    - by Stephen.Walther
    The purpose of this blog post is to provide you with a quick introduction to the new W3C CSS 3 Grid Layout standard. You can use CSS Grid Layout in Metro style applications written with JavaScript to lay out the content of an HTML page. CSS Grid Layout provides you with all of the benefits of using HTML tables for layout without requiring you to actually use any HTML table elements. Doing Page Layouts without Tables Back in the 1990s, if you wanted to create a fancy website, then you would use HTML tables for layout. For example, if you wanted to create a standard three-column page layout then you would create an HTML table with three columns like this: <table height="100%"> <tr> <td valign="top" width="300px" bgcolor="red"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </td> <td valign="top" bgcolor="green"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </td> <td valign="top" width="300px" bgcolor="blue"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </td> </tr> </table> When the table above gets rendered out to a browser, you end up with the following three-column layout: The width of the left and right columns is fixed – the width of the middle column expands or contracts depending on the width of the browser. Sometime around the year 2005, everyone decided that using tables for layout was a bad idea. Instead of using tables for layout — it was collectively decided by the spirit of the Web — you should use Cascading Style Sheets instead. Why is using HTML tables for layout bad? Using tables for layout breaks the semantics of the TABLE element. A TABLE element should be used only for displaying tabular information such as train schedules or moon phases. Using tables for layout is bad for accessibility (the Web Content Accessibility Guidelines 1.0 are explicit about this) and using tables for layout is bad for separating content from layout (see http://CSSZenGarden.com). Post-2005, anyone who used HTML tables for layout was encouraged to hold their head down in shame. That’s all well and good, but the problem with using CSS for layout is that it can be more difficult to work with CSS than with HTML tables. For example, to achieve a standard three-column layout, you either need to use absolute positioning or floats. Here’s a three-column layout with floats: <style type="text/css"> #container { min-width: 800px; } #leftColumn { float: left; width: 300px; height: 100%; background-color:red; } #middleColumn { background-color:green; height: 100%; } #rightColumn { float: right; width: 300px; height: 100%; background-color:blue; } </style> <div id="container"> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> </div> The page above contains four DIV elements: a container DIV which contains a leftColumn, middleColumn, and rightColumn DIV. The leftColumn DIV element is floated to the left and the rightColumn DIV element is floated to the right. 
Notice that the rightColumn DIV appears in the page before the middleColumn DIV – this unintuitive ordering is necessary to get the floats to work correctly (see http://stackoverflow.com/questions/533607/css-three-column-layout-problem). The page above (almost) works with the most recent versions of most browsers. For example, you get the correct three-column layout in both Firefox and Chrome: And the layout mostly works with Internet Explorer 9 except for the fact that for some strange reason the min-width doesn’t work so when you shrink the width of your browser, you can get the following unwanted layout: Notice how the middle column (the green column) bleeds to the left and right. People have solved these issues with more complicated CSS. For example, see: http://matthewjamestaylor.com/blog/holy-grail-no-quirks-mode.htm But, at this point, no one could argue that using CSS is easier or more intuitive than tables. It takes work to get a layout with CSS and we know that we could achieve the same layout more easily using HTML tables. Using CSS Grid Layout CSS Grid Layout is a new W3C standard which provides you with all of the benefits of using HTML tables for layout without the disadvantage of using an HTML TABLE element. In other words, CSS Grid Layout enables you to perform table layouts using pure Cascading Style Sheets. The CSS Grid Layout standard is still in a “Working Draft” state (it is not finalized) and it is located here: http://www.w3.org/TR/css3-grid-layout/ The CSS Grid Layout standard is only supported by Internet Explorer 10 and there are no signs that any browser other than Internet Explorer will support this standard in the near future. This means that it is only practical to take advantage of CSS Grid Layout when building Metro style applications with JavaScript. Here’s how you can create a standard three-column layout using a CSS Grid Layout: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; } #leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; } </style> </head> <body> <div id="container"> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> </div> </body> </html> When the page above is rendered in Internet Explorer 10, you get a standard three-column layout: The page above contains four DIV elements: a container DIV which contains a leftColumn DIV, middleColumn DIV, and rightColumn DIV. The container DIV is set to Grid display mode with the following CSS rule: #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; } The display property is set to the value “-ms-grid”. This property causes the container DIV to lay out its child elements in a grid. (Notice that you use “-ms-grid” instead of “grid”. The “-ms-“ prefix is used because the CSS Grid Layout standard is still preliminary. 
This implementation only works with IE10 and it might change before the final release.) The grid columns and rows are defined with the “-ms-grid-columns” and “-ms-grid-rows” properties. The style rule above creates a grid with three columns and one row. The left and right columns are fixed sized at 300 pixels. The middle column sizes automatically depending on the remaining space available. The leftColumn, middleColumn, and rightColumn DIVs are positioned within the container grid element with the following CSS rules: #leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; } The “-ms-grid-column” property is used to specify the column associated with the element selected by the style sheet selector. The leftColumn DIV is positioned in the first grid column, the middleColumn DIV is positioned in the second grid column, and the rightColumn DIV is positioned in the third grid column. I find using CSS Grid Layout to be just as intuitive as using an HTML table for layout. You define your columns and rows and then you position different elements within these columns and rows. Very straightforward. Creating Multiple Columns and Rows In the previous section, we created a super simple three-column layout. This layout contained only a single row. In this section, let’s create a slightly more complicated layout which contains more than one row: The following page contains a header row, a content row, and a footer row. The content row contains three columns: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; } #header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; } #leftColumn { -ms-grid-column: 1; -ms-grid-row: 2; background-color:red; } #middleColumn { -ms-grid-column: 2; -ms-grid-row: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; -ms-grid-row: 2; background-color:blue; } #footer { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 3; background-color: orange; } </style> </head> <body> <div id="container"> <div id="header"> Header, Header, Header </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="footer"> Footer, Footer, Footer </div> </div> </body> </html> In the page above, the grid layout is created with the following rule which creates a grid with three rows and three columns: #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; } The header is created with the following rule: #header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; } The header is positioned in column 1 and row 1. Furthermore, notice that the “-ms-grid-column-span” property is used to span the header across three columns. CSS Grid Layout and Fractional Units When you use CSS Grid Layout, you can take advantage of fractional units. 
Fractional units provide you with an easy way of dividing up remaining space in a page. Imagine, for example, that you want to create a four-column page layout. You want the size of the first column to be fixed at 200 pixels and you want to divide the remaining space among the remaining three columns. The width of the second column is equal to the combined width of the third and fourth columns. The following CSS rule creates four columns with the desired widths: #container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; } The fr unit represents a fraction. The grid above contains four columns. The second column is two times the size (2fr) of the third (1fr) and fourth (1fr) columns. When you use the fractional unit, the remaining space is divided up using fractional amounts. Notice that the single row is set to a height of 1fr. The single grid row gobbles up the entire vertical space. Here’s the entire HTML page: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; } #firstColumn { -ms-grid-column: 1; background-color:red; } #secondColumn { -ms-grid-column: 2; background-color:green; } #thirdColumn { -ms-grid-column: 3; background-color:blue; } #fourthColumn { -ms-grid-column: 4; background-color:orange; } </style> </head> <body> <div id="container"> <div id="firstColumn"> First Column, First Column, First Column </div> <div id="secondColumn"> Second Column, Second Column, Second Column </div> <div id="thirdColumn"> Third Column, Third Column, Third Column </div> <div id="fourthColumn"> Fourth Column, Fourth Column, Fourth Column </div> </div> </body> </html>   Summary There is more in the CSS 3 Grid Layout standard than discussed in this blog post. My goal was to describe the basics. If you want to learn more, you can read through the entire standard at http://www.w3.org/TR/css3-grid-layout/ In this blog post, I described some of the difficulties that you might encounter when attempting to replace HTML tables with Cascading Style Sheets when laying out a web page. I explained how you can take advantage of the CSS 3 Grid Layout standard to avoid these problems when building Metro style applications using JavaScript. CSS 3 Grid Layout provides you with all of the benefits of using HTML tables for laying out a page without requiring you to use HTML table elements.
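A small companion to the column-spanning example above: the same IE10-prefixed implementation also lets an element span rows. A minimal sketch (the selector name and color here are illustrative, not taken from the post):

#sidebar {
    -ms-grid-column: 1;
    -ms-grid-row: 1;
    -ms-grid-row-span: 2; /* stretch the element down the first two grid rows */
    background-color: silver;
}

Just as the “-ms-grid-column-span” property stretches the header across columns, “-ms-grid-row-span” stretches an element across row tracks starting from its grid cell.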

    Read the article

  • CodePlex Daily Summary for Thursday, January 13, 2011

    CodePlex Daily Summary for Thursday, January 13, 2011 Popular Releases MVC Music Store: MVC Music Store v2.0: This is the 2.0 release of the MVC Music Store Tutorial. This tutorial is updated for ASP.NET MVC 3 and Entity Framework Code-First, and contains fixes and improvements based on feedback and common questions from previous releases. The main download, MvcMusicStore-v2.0.zip, contains everything you need to build the sample application, including A detailed tutorial document in PDF format Assets you will need to build the project, including images, a stylesheet, and a pre-populated databas... Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.7 GA Released: Hi, Today we are releasing Visifire 3.6.7 GA with the following feature: * Inlines property has been implemented in Title. Also, this release contains fixes for the following bugs: * In Column and Bar charts, DataPoint’s label properties were not working as expected in real time if marker enabled was set to true. * 3D Column and Bar charts were not rendered properly if the AxisMinimum property was set on the x-axis. You can download Visifire v3.6.7 here. Cheers, Team Visifire Fluent Validation for .NET: 2.0: Changes since 2.0 RC Fix typo in the name of FallbackAwareResourceAccessorBuilder Fix issue #7062 - allow validator selectors to work against nullable properties with overridden names. Fix error in German localization. Better support for client-side validation messages in MVC integration. All changes since 1.3 Allow custom MVC ModelValidators to be added to the FVModelValidatorProvider Support resource provider for custom property validators through the new IResourceAccessorBuilder ... EnhSim: EnhSim 2.3.2 ALPHA: 2.3.1 ALPHA This release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Quick update to ... ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: paging for the lookup lookup with multiselect changes: the css classes used by the framework were renamed to be more standard the lookup controller requires an item.ascx (no more ViewData["structure"]), and the LookupList action was renamed to Search all the... pwTools: pwTools: Changelog v1.0 base release All-In-One Code Framework (Chinese): All-In-One Code Framework (Chinese) 2011-01-12: The January 2011 release of the All-In-One Code Framework (Chinese) is published! http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 This release adds 13 new sample codes covering ASP.NET, AJAX, WinForm and Windows Shell, and improves a number of existing samples. Details: http://blog.csdn.net/sjb5201/archive/2011/01/13/6135037.aspx Questions can also be asked in the MSDN Chinese forums: http://social.msdn.microsoft.com/Forums/zh-CN/codezhchs/threads If you have any suggestions for the project, please contact us by Email. patterns & practices – Enterprise Library: Enterprise Library 5.0 - Extensibility Labs: This is a preview release of the Hands-on Labs to help you learn and practice different ways the Enterprise Library can be extended. 
Learning Map: Custom exception handler (estimated time to complete: 1 hr 15 mins) Custom logging trace listener (1 hr) Custom configuration source (registry-based) (30 mins) System requirements: Enterprise Library 5.0 / Unity 2.0 installed SQL Express 2008 installed Visual Studio 2010 Pro (or better) installed Authors: Chris Tavares, Microsoft Corporation ... Orchard Project: Orchard 1.0: Orchard Release Notes Build: 1.0.20 Published: 1/12/2010 How to Install Orchard: To install the Orchard tech preview using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx Web PI will detect your hardware environment and install the application. --OR-- Alternatively, to install the release manually, download the Orchard.Web.1.0.20.zip file. The zip contents are pre-built and ready-to-run. Simply extract the contents of the Orchard folder from ... Umbraco CMS: Umbraco 4.6.1: The Umbraco 4.6.1 (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and nearly 200 bug fixes since the 4.5.2 release. Getting Started A great place to start is with our Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites. They're ... Google URL Shortener API for .NET: Google URL Shortener API v1: According to the following specification: http://code.google.com/apis/urlshortener/v1/reference.html StyleCop for ReSharper: StyleCop for ReSharper 5.1.14986.000: A considerable amount of work has gone into this release: Features: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice a considerable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new ObjectBasedEnvironment object does not resolve the StyleCop installation path, thus it does not return the ... SQL Monitor - tracking sql server activities: SQL Monitor 3.1 beta 1: 1. support alert message template 2. dynamic toolbar commands depending on functionality 3. fixed some bugs 4. refactored part of the code, now more stable and cleaner Facebook C# SDK: 4.2.1: - Authentication bug fixes - Updated Json.Net to version 4.0.0 - BREAKING CHANGE: Removed cookieSupport config setting, now automatic. This download is also available on NuGet: Facebook FacebookWeb FacebookWebMvc Hawkeye - The .Net Runtime Object Editor: Hawkeye 1.2.5: In case you are running x86 Windows and you installed Release 1.2.4, you should consider upgrading to this release (1.2.5), as it appears Hawkeye is broken on x86 OSes. I apologize for the inconvenience, but it appears Hawkeye 1.2.4 (and probably previous versions) doesn't run on x86 Windows (see issue http://hawkeye.codeplex.com/workitem/7791). This maintenance release fixes this broken behavior. This release comes in two flavors: Hawkeye.125.N2 is the standard .NET 2 build, was compile... Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (January 2011): Another release build for daily use; it contains many new features, enhanced compatibility with the latest PHP open-source applications and several issue fixes. To improve the performance of your application using MySQL, please use the Managed MySQL Extension for Phalanger. 
Changes made within this release include the following: New features available only in Phalanger. Full support of Multi-Script-Assemblies was implemented; you can build your application into several DLLs now. Deploy them separately t... AutoLoL: AutoLoL v1.5.3: A message will be displayed when there's an update available Shows a list of recent mastery files in the Editor Tab (requested by quite a few people) Updater: Update information is now scrollable Added a button to launch AutoLoL after updating is finished Updated the UI to match that of AutoLoL Fix: Detects and resolves 'Read Only' state on Version.xml TweetSharp: TweetSharp v2.0.0.0 - Preview 7: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 7 Changes: Fixes the regression issue in OAuth from Preview 6 Preview 6 Changes: Maintenance release with user-reported fixes Preview 5 Changes: Maintenance release with user-reported fixes Third Party Library Versions: Hammock v1.0.6: http://hammock.codeplex.com Json.NET 3.5 Release 8: http://json.codeplex.com Extended WPF Toolkit: Extended WPF Toolkit - 1.3.0: What's in the 1.3.0 Release? BusyIndicator ButtonSpinner ChildWindow ColorPicker - Updated (Breaking Changes) DateTimeUpDown - New Control Magnifier - New Control MaskedTextBox - New Control MessageBox NumericUpDown RichTextBox RichTextBoxFormatBar - Updated .NET 3.5 binaries and Source Please note: The Extended WPF Toolkit 3.5 is dependent on .NET Framework 3.5 and the WPFToolkit. You must install .NET Framework 3.5 and the WPFToolkit in order to use any features in the To... Ionics Isapi Rewrite Filter: 2.1 latest stable: V2.1 is stable, and is in maintenance mode. This is v2.1.1.25. It is a bug-fix release. There are no new features. 28629 29172 28722 27626 28074 29164 27659 27900 many documentation updates and fixes proper x64 build environment. This release includes x64 binaries in zip form, but no x64 MSI file. You'll have to manually install x64 servers, following the instructions in the documentation. New Projects 4chan: Project for educational purposes AE.Net.Mail - POP/IMAP Client: C# POP/IMAP client library Almathy: Community application for data sharing based on the XMPP protocol. Instant messaging, mail, data exchange, shared work. Developed in C#, using Windows Forms. AMK - Associação Metropolitana de Kendo: The 2011 version of the Associação Metropolitana de Kendo website. The site will include: - Registration of athletes and events; - Registration management with the CBK and graduation history; - Training calendar; It will be developed using .NET, MVC2, Entity Framework, SQL Server, JQuery Azke: New: Azke is a portal developed with ASP.NET MVC and MySQL. Old: Azke is a portal developed with ASP.net and MySQL. BuildScreen: A standalone Windows application to display project build statuses from TeamCity. It can be used on a large screen for the whole development team to watch their build statuses as well as on a developer machine. CardOnline: CardOnline GameLobby Game CAudioEndpointVolume: The CAudioEndpointVolume implements the IAudioEndpointVolume interface to control the master volume in Visual Basic 6.0 for Windows Vista and later operating systems. Cloudy - Online Storage Library: The goal of Cloudy is to be an online storage library to enable access to the most common storage services: DropBox, Skydrive, Google Docs, Azure Blob Storage and Amazon S3. 
It'll work in .NET 3.5 and up, Silverlight and Windows Phone 7 (WP7).ContentManager: Content Manger for SAP Kpro exports the data pool of a content-server by PHIOS-Lists (txt-files) into local binary files. Afterwards this files can be imported in a SAP content-repository . The whole List of the PHIOS Keys can also be downloaded an splitted in practical units.CSWFKrad: private FrameworkDiego: Diegodoomhjx_javalib: this is my lib for the java project.Eve Planetary Interaction: Eve Planetary InteractionEventWall: EventWall allows you to show related blogposts and tweets on the big screen. Just configure the hashtag and blog RSS feeds and you're done.Google URL Shortener API for .NET: Google URL Shortener API for .NET is a wrapper for the Google Project below: http://code.google.com/apis/urlshortener/v1/reference.html With Google URL Shortener API, you may shorten urls and get some analytics information about this. It's developer in C# 4.0 and VS2010. HtmlDeploy: A project to compile asp's Master page into pure html files. The idea is to use jQuery templating to do all the "if" and "for" when it comes to generating html output. Inventory Management System: Inventory Management System is a Silverlight-based system. It aims to manage and control the input raw material activites, output of finished goods of a small inventory.koplamp kapot: demo project voor AzureMSAccess SVN: Access SVN adds to Microsoft Access (MS Access) support for SVN Source controlMultiConvert: console based utility primary to convert solid edge files into data exchange formats like *.stp and *.dxf. It's using the API of the original software and can't be run alone. The goal of this project is creating routines with desired pre-instructions for batch conversionsNGN Image Getter: Nationalgeographic has really stunning photos! They allow to download them via website. This program automates that process.OOD CRM: Project CRM for Object Oriented Development final examsOpen NOS Client: Open NOS Rest ClientopenEHR.NET: openEHR.NET is a C# implementation of openEHR (http://openehr.org) Reference Model (RM) and Archetype Model (AM) specifications (Release 1.0.1). It allows you to build openEHR applications by composing RM objects, validate against AM objects and serialise to/from XML.OpenGL Flow Designer: OpenGL Flow Designer allows writing pseudo-C code to build OpenGL pipeline and preview results immediately.Orchard Translation Manager: An Orchard module that aims at facilitating the creation and management of translations of the Orchard CMS and its extensions.pwTools: A set of tools for viewing/editing some perfect world client/server filesServerStatusMobile: Server availability monitoring application for windows mobile 6.5.x running on WVGA (480x800) devices. Supports autorun, vibration & notification when server became unavailable, text log files for each server. .NET Compact Framework 3.5 required for running this applicationSimple Silverlight Multiple File Uploader: The project allows you to achieve multiple file uploading in Silverlight without complex, complicated, or third-party applications. In just two core classes, it's very simple to understand, use, and extend. Snos: Projet Snos linker Snes multi SupportT-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers: This open source project includes a set of T-4 templates to enable you to build logical layers (i.e. DAL/BLL) with just few clicks! 
The logical layers implemented here are based on Entity Framework 4.0, ASP.NET Web Form Data Bound control friendly and fully unit testable.The Media Store: The Media StoreUM: source control for the um projecturBook - Your Address Book Manager: urBook is and Address Book Manager that can merge all your contacts on all web services you use with your phone contacts to have one consistent Address Book for all your contacts and to update back your web services contacts with the one consistent address bookWindows Phone Certificate Installer: Helps install Trusted Root Certificates on Windows Phone 7 to enable SSL requests. Intended to allow developers to use localhost web servers during development without requiring the purchase of an SSL certificate. Especially helpful for ws-trust secured web service development.

    Read the article

  • Stop Spinning Your Wheels… Sage Advice for Aspiring Developers

    - by Mark Rackley
    So… lately I’ve been tasked with helping bring some non-developers over the hump and become full-fledged, all around, SharePoint developers. Well, only time will tell if I’m successful or a complete failure. Good thing about failures though, you know what NOT to do next time! Anyway, I’ve been writing some sort of code since I was about 10 years old; so I sometimes take for granted the effort some people have to go through to learn a new technology. I guess if I had to say I was an “expert” in one thing it would be learning (and getting “stuff” done) in new technologies. Maybe that’s why I’ve embraced SharePoint and the SharePoint community. SharePoint is the first technology I haven’t been able to master or get everything done without help from other people. I KNOW I’ll never know it all and I learn something new every day.  It keeps it interesting, it keeps me motivated, and keeps me involved. So, what some people may consider a downside of SharePoint, I definitely consider a plus. Crap.. I’m rambling. Where was I? Oh yeah… me trying to be helpful. Like I said, I am able to quickly and effectively pick up new languages, technology, etc. and put it to good use. Am I just brilliant? Well, my mom thinks so.. but maybe not. Maybe I’ve just been doing it for a long time…. 25 years in some form or fashion… wow I’m old… Anyway, what I lack in depth I make up for in breadth and being the “go-to” guy wherever I work when someone needs to “get stuff done”.  Let’s see if I can take some of that experience and put it to practical use to help new people get up to speed faster, learn things more effectively, and become that go-to guy. First off…  make sure you… Know The Basics I don’t have the time to teach new developers the basics, but you gotta know them. I’ve only been “taught” two languages.. Fortran 77 and C… everything else I’ve picked up from “doing”. I HAD to know the basics though, and all new developers need to understand the very basics of development.  97.23% of all languages will have the following: variables, functions, arrays, if statements, and for/while loops. If you think about it, most development is “if this, do this… or while this, do this…”.  “This” may be some unique method to your language or something you develop, but the basics are the basics. YES there are MANY other development topics you need to understand, but you shouldn’t be scratching your head trying to figure out what a ”for loop” is… (Also learn about classes and hashtables as quickly as possible). Once you have the basics down it makes it much easier to… Learn By Doing This may just apply to me and my warped brain.  I don’t learn a new technology by reading or hearing someone speak about it. I learn by doing. It does me no good to try and learn all of the intricacies of a new language or technology inside-and-out before getting my hands dirty. Just show me how to do one thing… let me get that working… then show me how to do the next thing.. let me get that working… Now, let’s see what I can figure out on my own. Okay.. now it starts to make sense. I see how the language works, I can step through the code, and before you know it.. I’m productive in a new technology. Be careful here though…. make sure you… Don’t Reinvent The Wheel People have been writing code for what… 50+ years now? So, why are you trying to tackle ANYTHING without first Googling it with Bing to see what others have done first? When I was first learning C# (I had come from a Java background) I had to call a web service.  Sure! No problem!
    I’d done this many times in Java. So, I proceeded to write an HTTP Handler, called the Web Service and it worked like a charm!!!  Probably about 2.3 seconds after I got it working completely someone says to me “Why didn’t you just add a Web Reference?” Really? You can do that?  oops… I just wasted a lot of time. Before undertaking the development of any sort of utility method in a new language, make sure it’s not already handled for you… Okay… you are starting to write some code and are curious about the possibilities? Well… don’t just sit there… Try It And See What Happens This is actually one of my biggest pet peeves. “So… ‘x++’ works in C#, but does it also work in JavaScript?”   Really? Did you just ask me that? In the time it took you to type that email, press the send button, me receive the email, get around to reading it, and replying with “yes” you could have tested it 47 times and know the answer! Just TRY it! See what happens! You aren’t doing brain surgery. You aren’t going to kill anyone, and you BETTER not be developing in production. So, you are not going to crash any production systems!! Seriously! Get off your butt and just try it yourself. The extra added benefit is that if it doesn’t work, the absolute best way to learn is to… Learn From Your Failures I don’t know about you… but if I screw up and something doesn’t work, I learn A LOT more debugging my problem than if everything magically worked. It’s okay that you aren’t perfect! Not everyone can be me? In the same vein… don’t ask someone else to debug your problem until you have made a valiant attempt to do so yourself. There’s nothing quite like stepping through code line by line to see what it’s REALLY doing… and you’ll never feel more stupid sometimes than when you realize WHY it’s not working.. but you realize... you learn... and you remember. There is nothing wrong with failure as long as you learn from it. As you start writing more and more and more code make sure that you ALWAYS… Develop for Production You will soon learn that the “prototype” you wrote last week to show as a “proof of concept” is going to go directly into production no matter how much you beg and plead and try to explain it’s not ready to go into production… it’s going to go straight there.. and it’s like herpes.. it doesn’t go away and there’s no fixing it once it’s in there.  So, why not write ALL your code like it will be put in production? It MIGHT take a little longer, but in the long run it will be easier to maintain, get help on, and you won’t be embarrassed that it’s sitting on a production server for everyone to use and see. So, now that you are getting comfortable and writing code for production it is important to remember the… KISS Principle… Learn It… Love It… Keep It Simple Stupid Seriously.. don’t try to show how smart you are by writing the most complicated code in history. Break your problem up into discrete steps and write each step. If it turns out you have some redundancy, you can always go back and tweak your code later.  How bad is it when you write code that LOOKS cocky? I’ve seen it before… some of the most abstract and complicated classes when a class wasn’t even needed! Or the most elaborate unreadable code jammed into one really long line when it could have been written in three lines, performed just as well, and been SOOO much easier to maintain. Keep it clear and simple.. baby steps people.
    This will help you learn the technology, debug problems, AND it will help others help you find your problems if they don’t have to decipher the Dead Sea Scrolls just to figure out what you are trying to do…. Really.. don’t be that guy… try to curb your ego and… Keep an Open Mind No matter how smart you are… how fast you type… or how much you get paid, don’t let your ego get in the way. There is probably a better way to do everything you’ve ever done. Don’t become so cocky that you can’t think someone knows more than you. There’s a lot of brilliant, helpful people out there willing to show you tricks if you just give them a chance. A very super-awesome developer once told me “So what if you’ve been writing code for 10 years or more! Does your code look basically the same? Are you not growing as a developer?” Those 10 years become pretty meaningless if you just “know” that you are right and have not picked up new tips, tricks, methods, and patterns along the way. Learn from others and find out what’s new in development land (you know you don’t have to specifically use pointers anymore??). Along those same lines… If it’s not working, first assume you are doing something wrong. You have no idea how much it annoys people who are trying to help you when you first assume that the help they are trying to give you is wrong. Just MAYBE… you… the person learning is making some small mistake? Maybe you didn’t describe your problem correctly? Maybe you are using the wrong terminology? “I did exactly what you said and it didn’t work.”  Oh really? Are you SURE about that? “Your solution doesn’t work.”  Well… I’m pretty sure it works, I’ve used it 200 times… What are you doing differently? First try some humility and appreciation.. it will go much further, especially when it turns out YOU are the one that is wrong. When all else fails…. Try Professional Training Some people just don’t have the mindset to go and figure stuff out. It’s a gift and not everyone has it. If everyone could do it I wouldn’t have a job and there wouldn’t be professional training available.  So, if you’ve tried everything else and no light bulbs are coming on, contact the experts who specialize in training. Be careful though, there is bad training out there. Want to know the names of some good places? Just shoot me a message and I’ll let you know. I’m not endorsing Andrew Connell anymore until I get that free course dangit!! So… that’s it.. that’s all I got right now. Maybe you thought all of this is common sense, maybe you think I’m smoking crack. If so, don’t just sit there, there’s a comments section for a reason. Finally, what about you? What tips do you have to help those aspiring to learn the dark arts??

    Read the article

  • URL Rewrite – Protocol (http/https) in the Action

    - by OWScott
    IIS URL Rewrite supports server variables for pretty much every part of the URL and http header. However, there is one commonly used server variable that isn’t readily available.  That’s the protocol—HTTP or HTTPS. You can easily check if a page request uses HTTP or HTTPS, but that only works in the conditions part of the rule.  There isn’t a variable available to dynamically set the protocol in the action part of the rule.  What I wish is that there would be a variable like {HTTP_PROTOCOL} which would have a value of ‘HTTP’ or ‘HTTPS’.  There is a server variable called {HTTPS}, but the values of ‘on’ and ‘off’ aren’t practical in the action.  You can also use {SERVER_PORT} or {SERVER_PORT_SECURE}, but again, they aren’t useful in the action. Let me illustrate.  The following rule will redirect traffic for http(s)://localtest.me/ to http://www.localtest.me/.
    <rule name="Redirect to www">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
      </conditions>
      <action type="Redirect" url="http://www.localtest.me/{R:1}" />
    </rule>
    The problem is that it forces the request to HTTP even if the original request was for HTTPS. Interestingly enough, I planned to blog about this topic this week when I noticed in my twitter feed yesterday that Jeff Graves, a former colleague of mine, just wrote an excellent blog post about this very topic.  He beat me to the punch by just a couple days.  However, I figured I would still write my blog post on this topic.  While his solution is an excellent one, I personally handle this another way most of the time.  Plus, it’s a commonly asked question that isn’t documented well enough on the web yet, so having another article on the web won’t hurt. I can think of four different ways to handle this, and depending on your situation you may lean towards any of the four.  Don’t let the choices overwhelm you though.  Let’s keep it simple: Option 1 is what I use most of the time, Option 2 is what Jeff proposed and is the safest option, and Option 3 and Option 4 need only be considered if you have a more unique situation.  All four options will work for most situations. Option 1 – CACHE_URL, single rule There is a server variable that has the protocol in it: {CACHE_URL}.  This server variable contains the entire URL string (e.g. http://www.localtest.me:80/info.aspx?id=5).  All we need to do is extract the HTTP or HTTPS and we’ll be set. This tends to be my preferred way to handle this situation. Indeed, Jeff did briefly mention this in his blog post: … you could use a condition on the CACHE_URL variable and a back reference in the rewritten URL. The problem there is that you then need to match all of the conditions which could be a problem if your rule depends on a logical “or” match for conditions. Thus the problem.  If you have multiple conditions set to “Match Any” rather than “Match All” then this option won’t work.  However, I find that 95% of all rules that I write use “Match All” and therefore, being the lazy administrator that I am, I like this simple solution that only requires adding a single condition to a rule.  The caveat is that if you use “Match Any” then you must consider one of the next two options. Enough with the preamble.  Here’s how it works.  Add a condition that checks for {CACHE_URL} with a pattern of “^(.+)://”, like so:
    <add input="{CACHE_URL}" pattern="^(.+)://" />
    Now you have a back-reference to the part before the ://, which is our treasured HTTP or HTTPS.
    In URL Rewrite 2.0 or greater you can check “Track capture groups across conditions”, make that condition the first condition, and you have yourself a back-reference of {C:1}. The “Redirect to www” example, with support for maintaining the protocol, will become:
    <rule name="Redirect to www" stopProcessing="true">
      <match url="(.*)" />
      <conditions trackAllCaptures="true">
        <add input="{CACHE_URL}" pattern="^(.+)://" />
        <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
      </conditions>
      <action type="Redirect" url="{C:1}://www.localtest.me/{R:1}" />
    </rule>
    It’s not as easy as it would be if Microsoft gave us a built-in {HTTP_PROTOCOL} variable, but it’s pretty close. I also like this option since I often create rule examples for other people, and this type of rule is portable since it’s self-contained within a single rule. Option 2 – Using a Rewrite Map For a safer rule that works for both “Match Any” and “Match All” situations, you can use the Rewrite Map solution that Jeff proposed.  It’s a perfectly good solution, with the only drawback being the ever so slight extra effort to set it up, since you need to create a rewrite map before you create the rule.  In other words, if you choose to use this as your sole method of handling the protocol, you’ll be safe. After you create a Rewrite Map called MapProtocol, you can use “{MapProtocol:{HTTPS}}” for the protocol within any rule action.  Following is an example using a Rewrite Map.
    <rewrite>
      <rules>
        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions trackAllCaptures="false">
            <add input="{HTTP_HOST}" pattern="^localtest\.me$" />
          </conditions>
          <action type="Redirect" url="{MapProtocol:{HTTPS}}://www.localtest.me/{R:1}" />
        </rule>
      </rules>
      <rewriteMaps>
        <rewriteMap name="MapProtocol">
          <add key="on" value="https" />
          <add key="off" value="http" />
        </rewriteMap>
      </rewriteMaps>
    </rewrite>
    Option 3 – CACHE_URL, Multi-rule If you have many rules that will use the protocol, you can create your own server variable which can be used in subsequent rules. This option is no easier to set up than Option 2 above, but you can use it if you prefer the easier-to-remember syntax of {HTTP_PROTOCOL} vs. {MapProtocol:{HTTPS}}. The potential issue with this rule is that if you don’t have access to the server level (e.g. in a shared environment) then you cannot set server variables without permission. First, create a rule and place it at the top of the set of rules.  You can create this at the server, site or subfolder level.  However, if you create it at the site or subfolder level then the HTTP_PROTOCOL server variable needs to be approved at the server level.  This can be achieved in IIS Manager by navigating to URL Rewrite at the server level, clicking on “View Server Variables” from the Actions pane, and adding HTTP_PROTOCOL. If you create the rule at the server level then this step is not necessary.  Following is an example of the first rule to create the HTTP_PROTOCOL and then a rule that uses it.  The Create HTTP_PROTOCOL rule only needs to be created once on the server.
<rule name="Create HTTP_PROTOCOL"> <match url=".*" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{CACHE_URL}" pattern="^(.+)://" /> </conditions> <serverVariables> <set name="HTTP_PROTOCOL" value="{C:1}" /> </serverVariables> <action type="None" /> </rule>   <rule name="Redirect to www" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^localtest\.me$" /> </conditions> <action type="Redirect" url="{HTTP_PROTOCOL}://www.localtest.me/{R:1}" /> </rule> Option 4 – Multi-rule Just to be complete I’ll include an example of how to achieve the same thing with multiple rules. I don’t see any reason to use it over the previous examples, but I’ll include an example anyway.  Note that it will only work with the “Match All” setting for the conditions. <rule name="Redirect to www - http" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^localtest\.me$" /> <add input="{HTTPS}" pattern="off" /> </conditions> <action type="Redirect" url="http://www.localtest.me/{R:1}" /> </rule> <rule name="Redirect to www - https" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAll" trackAllCaptures="false"> <add input="{HTTP_HOST}" pattern="^localtest\.me$" /> <add input="{HTTPS}" pattern="on" /> </conditions> <action type="Redirect" url="https://www.localtest.me/{R:1}" /> </rule> Conclusion Above are four working examples of methods to call the protocol (HTTP or HTTPS) from the action of a URL Rewrite rule.  You can use whichever method you most prefer.  I’ve listed them in the order that I favor them, although I could see some people preferring Option 2 as their first choice.  In any of the cases, hopefully you can use this as a reference for when you need to use the protocol in the rule’s action when writing your URL Rewrite rules. Further information: Viewing all Server Variable for a site. URL Parts available to URL Rewrite Rules Further URL Rewrite articles

    Read the article

  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can’t tell if the title is just tipping over into smug, but in the balance of things that doesn’t matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high. Here’s the first part of my write-up of the sessions. It’s a bit of a mammoth. It’s also a bit of a mash-up of what was said and what I thought about it. I’ll add links to the videos and slides from the sessions as they become available. Although it was in the morning session, I’ve not included Vanessa Northam’s session on the power of internal comms to build brand ambassadors. It’ll be in the next roundup, as this is already pushing 2.5k words. First, the important stuff. I was keeping a tally, and nobody said “synergy” or “leverage”. I did, however, hear the term “marketeers” six times. Shame on you – you know who you are. 1 – Branding in a post-digital world, Graham Hales This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like an overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you’re going to care about it at all. This was the first iteration of what proved to be one of the event’s emergent themes: do it throughout the stack or don’t bother. Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequently, they experience a number of “moments of deflection” in our sales funnels. Our control is lessened, and failure to engage has an increasingly negative impact on buying decisions. The clearest example given was the failure of NatWest’s “caring bank” campaign, where staff in branches, customer support, and online presences didn’t align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn’t taken. What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn’t mean (and Graham didn’t say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It’s going to have to be collaborative, and session 6 on internal comms handled this really well.
The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we’re building brands collaboratively and in the open, what about the cultural politics of trolling? 2 – Challenging our core beliefs about human behaviour, Mark Earls This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren’t rational, they’re petty, reactive, emotional sacks of meat, and they’ll go where they’re led. Comforting stuff. Examples given were the spread of the London Riots and the “discovery” of the mountains of Kong, and the popularity of Susan Boyle, which, in turn made me think about Per Mollerup’s concept of “social wayshowing”. Mark boiled his thoughts down into four key points which I completely failed to write down word for word: People do, then think – Changing minds to change behaviour doesn’t work. Post-rationalization rules the day. See also: mere exposure effects. Spock < Kirk - Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so  intervening with information may not be appropriate. Maximisers or satisficers? – Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice than evaluate, and will often follow social leads rather than think. Things tend to converge – Behaviour trends to a consensus normal. When faced with choices people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People “outsource the cognitive load” of choices to the crowd. Mark’s headline quote was probably “the real influence happens at the table next to you”. Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra’s “creating bad-ass users” concept of designing to make people more awesome rather than products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalize the behaviours we want, people should make and post-rationalize the buying decisions we want.  Where we need to be: “A bigger boy made me do it” Where we are: “a wizard did it and ran away” However, it’s worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There’s a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion. So this idea isn’t really new. Geoffrey Moore’s “chasm” idea may not strictly apply. However, his concepts of beachheads and reference segments are exactly what is required to normalize and thus enable purchase decisions (behaviour change). The final thing is that in only very few categories does a better product actually affect purchase decision. Where the choice is personal and informed, this is true. But where it’s personal and impulsive, or in any way social, “better” is trumped by popularity, endorsement, or “point of sale salience”. 
    UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat “more features”. Easy to use, and easy to observe being used, will beat “what the user says they want”. This made me think about the astounding stickiness of rational fallacies, “common sense” and the pathological willful simplifications of the media. Rational fallacies seem like they’re basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I’d suggest deploying a boat-load in our messaging, to see if they’re really as sticky and appealing as they look. 4 – Changing behaviour through communication, Stephen Donajgrodzki This was a fantastic follow-up to Mark’s session. Stephen basically talked us through some tactics used in public information/health comms that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they’re predisposed not to do, and how communication can (and can’t) make positive interventions. A couple of things stood out, in particular “implementation intentions” and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized), an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well-actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization. So campaigns, products, and customer interactions must be aligned. This is underscored by the second section of the presentation, which talked about interventions and pre-conditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this gets a lot simpler. Coordinated, easily-observed environmental pressures create preconditions for change and build motivation. (price, pub smoking ban, ad campaigns, friend quitting, declining social acceptability) A triggering event leads to a change attempt. (getting a cold and panicking about how bad the cough is) Interventions can be made to enable an attempt (NHS services, public information, nicotine patches) If it succeeds – yay. If it fails, there’s strong negative reinforcement. Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and “bounded rationality”. The idea being that to enable change you need to break through “automatic” thinking into “reflective” thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions. Although they are presumably related. The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms.
    Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or those seeking education. 5 – The power of engaged communities and how to build them, Harriet Minter (the Guardian) The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian’s community sites, and learned a lot about how they come together, stabilize, grow, and react. Crucially, they can’t be about sales or push messaging. A community is not just an audience. It’s essential to start with what this particular segment or tribe are interested in, then what they want to hear. Eventually you can consider – in light of this – what they might want to buy, but you can’t start with the product. A community won’t cohere around one you’re pushing. Her tips for community building were (again, sorry, not verbatim): Set goals Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals. Think like a start-up This is the “lean” stuff. Try things, fail quickly, respond. Don’t restrict platforms Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter. Track your stats Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric – have you got CIOs, or just people who want to get jobs by mingling with CIOs? Build brand advocates Do things to involve people and make them awesome, and they’ll cheer-lead for you. The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen’s models of social, endorsement-led, and example-led decision making. There’s a lot here I haven’t covered, and it may be worth some follow-up on community building. Thoughts I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven’t done the background reading. So I’m going to, and if it seems to hold real water, and if it’s possible to do it ethically (Stephen’s presentation suggests it may be) then it’s probably worth exploring. The message seemed to be: change what people do, and they’ll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalize and support the decisions you want them to make, and they’ll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering. Also, if I ever run a marketing conference, I’m going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it’s becoming an irritating cliché.

    Read the article

  • CodePlex Daily Summary for Saturday, June 18, 2011

    CodePlex Daily Summary for Saturday, June 18, 2011
    Popular Releases
    EffectControls - Silverlight controls with animation effects: EffectControls: EffectControls.
    Media Companion: MC 3.408b weekly: Some minor fixes..... Fixed messagebox coming up during batch scrape. Added <originaltitle></originaltitle> to movie nfo's - when a new movie is scraped, the original title will be set as the returned title. The end user can of course change the title in MC; however, the original title will not change. The original title can be seen by hovering over the movie title in the right pane of the main movie tab. To update all of your current nfo's to add the original title the same as your current ...
    NLog - Advanced .NET Logging: NLog 2.0 Release Candidate: Release notes for NLog 2.0 RC can be found at http://nlog-project.org/nlog-2-rc-release-notes
    PowerGUI Visual Studio Extension: PowerGUI VSX 1.3.5: Changes - VS SDK no longer required to be installed (a bug in v. 1.3.4).
    Gendering Add-In for Microsoft Office Word 2010: Gendering Add-In: This is the first stable version of the Gendering Add-In. Unzip the package and start "setup.exe". The .reg file shows how to configure an alternate path for the suggestion table.
    TerrariViewer: TerrariViewer v3.1 [Terraria Inventory Editor]: This version adds tool tips. Almost every picture box you mouse over will tell you what item is in that box. I have also cleaned up the GUI a little more to make things easier on my end. There are various bug fixes including ones associated with opening different characters in the same instance of the program. As always, please bring any bugs you find to my attention.
    CommonLibrary.NET: CommonLibrary.NET - 0.9.7 Beta: A collection of very reusable code and components in C# 3.5 ranging from ActiveRecord, Csv, Command Line Parsing, Configuration, Holiday Calendars, Logging, Authentication, and much more. Samples in <root>\src\Lib\CommonLibrary.NET\Samples. CommonLibrary.NET 0.9.7 Documentation (work items 6738, 6503; new: 6535; enhancements: 6583, 6737).
    DropBox Linker: DropBox Linker 1.2: Public sub-folders are now monitored for changes as well (thanks to mcm69). Automatic public sync folder detection (thanks to mcm69). Non-Latin and special characters encoded correctly in URLs. Pop-ups are now slot-based (use first free slot and will never be overlapped — test it while previewing timeout). Public sync folder setting is hidden when auto-detected. Timeout interval is displayed in popup previews. A lot of major and minor code refactoring performed. .NET Framework 4.0 Client...
    Terraria World Viewer: Version 1.3: Update June 15th. Removed "Draw Markers" checkbox from main window because of redundancy/confusion. (Select all or no items from the Settings tab for the same effect.) Fixed Marker preferences not being saved. It is now possible to render more than one map without having to restart the application. The world file will not be locked while the world is being rendered. Note: The World Viewer might render an inaccurate map or even crash if Terraria decides to modify the World file during the pro...
    MVC Controls Toolkit: Mvc Controls Toolkit 1.1.5 RC: Added: Extended Dropdown allows a prompt item to be inserted as the first element; RequiredAttribute, if present, triggers if no element is chosen. Client-side javascript functions to set/get the values of DateTimeInput, TypedTextBox, TypedEditDisplay, and to bind/unbind a "change" handler. The selected page in the pager is applied the attribute selected-page="selected", which can be used in the definition of CSS rules to style the selected page. Items controls now interpret a null value as an empr...
    Umbraco CMS: Umbraco CMS 5.0 CTP 1: Umbraco 5 Community Technology Preview. Umbraco 5 will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out our first CTP of version 5 today! If you're new to Umbraco and would like to get a quick low-down on our popular and easy-to-learn approach to content management, check out our intro video here. What's in the v5 CTP box? This is a preview version of version 5 and includes support for the following familiar Umbr...
    Coding4Fun Kinect Toolkit: Coding4Fun.Kinect Toolkit: Version 1.0.
    Kinect Mouse Cursor: Kinect Mouse Cursor v1.0: The initial release of the Kinect Mouse Cursor project!
    patterns & practices: Project Silk: Project Silk Community Drop 11 - June 14, 2011: Changes from previous drop: many code changes (please see the readme.mht for details); new "Client Data Management and Caching" chapter; updated "Application Notifications" chapter; updated "Architecture" chapter; updated "jQuery UI Widget" chapter; updated "Widget QuickStart" appendix and code. Guidance chapters ready for review: the Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separat...
    Orchard Project: Orchard 1.2: Build: 1.2.41. Published: 6/14/2011. How to Install Orchard: To install Orchard using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx. Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.1.2.41.zip file: http://orchardproject.net/docs/Manually-installing-Orchard-zip-file.ashx. The zip contents are pre-built and ready-to-run. Simply extract the contents o...
    Snippet Designer: Snippet Designer 1.4.0: Snippet Designer 1.4.0 for Visual Studio 2010. Change log: Snippet Explorer changes: reworked language filter UI to work better in the side bar; added result count drop-down which lets you choose how many results to see; language filter and result count choices are persisted after Visual Studio is closed; added file name to search criteria; search is now case insensitive. Snippet Editor changes: added menu option for the $end$ symbol which indicates where the c...
    Mobile Device Detection and Redirection: 1.0.4.1: Stable release. 51 Degrees.mobi Foundation is the best way to detect and redirect mobile devices and their capabilities on ASP.NET and is being used on thousands of websites worldwide. We're highly confident in our software and we recommend all users update to this version. Changes to version 1.0.4.1: Changed the BlackberryHandler and BlackberryVersion6Handler to have equal CONFIDENCE values to ensure they both get a chance at detecting BlackBerry version 4&5 and version 6 devices. Prior to thi...
    Rawr: Rawr 4.1.06: This is the downloadable WPF version of Rawr! For the web-based version, see http://elitistjerks.com/rawr.php. You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon: We now have a Rawr Official Addon for in-game exporting and importing of character data, hosted on Curse. The Addon does not perform calculations like Rawr; it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...
    AcDown - Anime&Comic Downloader: AcDown v3.0 Beta6: Supports Acfun and imanhua.com. Runs on 32- and 64-bit Windows XP/Vista/7; on Windows XP, .NET Framework 2.0 (x86 or x64) must be installed first.
    Pulse: Pulse Beta 2: Added new wallpapers provider http://wallbase.cc; supports English search, multiple keywords*. Improved font rendering in Options window. Added "Set wallpaper as logon background" option*. Fixed crashes if there is no internet connection. Fixed: Rewalls sometimes downloads empty images. Added filters*. Note 1: the wallbase provider supports only English search. The Rewalls provider supports only Russian search, but Pulse automatically translates your English keyword into Russian using Google Tr...
    New Projects
    .NET Entities Framework Utils: Project for creating supporting code to work with .NET Entity Framework.
    A Simple Demo of Industrial Process Monitoring System based on .NET & Arduino: The demo shows some key points of .NET: 1. .NET WPF; 2. WCF; 3. Silverlight; 4. ADO.NET. The demo is a good sample for learning .NET and developing a process monitoring system. The demo gets temperature readings from an Arduino; it is also a good Arduino sample.
    AHtml Pad: AHtml Pad is a powerful HTML and CSS editor. It's made for beginners and experienced programmers. It's made in VB.NET with Visual Studio 2010.
    Allena la mente: A collection of minigames to improve memory, reflexes, logic and mathematics. A Silverlight application for Windows Phone. Carousel Team.
    Anorexia World: I have written a small project that, using an MDI form, combines a complete office suite (still in Hartz 4) into one!
    AutomaSolution - IT Automation made easy!: AutomaSolution is a console application written in C#. It takes a single .xml file as input, and processes each section to perform an automation task.
    AzureManagmentAPI.NET: .NET wrapper for the Microsoft Windows Azure Service Management REST API. It's developed in C# and uses the .NET 4.0 Framework. More information about the Windows Azure Service Management REST API can be found here: http://msdn.microsoft.com/en-us/library/ee460799.aspx
    Cirrus: A C# project used for collecting data from multiple sites.
    Cruise Control .NET TV: Cruise Control .NET TV puts your project's integration status on a TV and adds coverage graphs generated through NCover.
    DBML Updater: External tool for Visual Studio which automatically updates DBML files according to configuration files (XML). You can also specify rules for editing and deleting elements in the DBML file. It also supports source code generation at the end.
    EPiServer CMS Page Type Extensions: EPiServer PageTypeExtensions provide additional features related to page types in EPiServer CMS 6. They allow the developer to set restrictions on the number of pages that can be created under a page, and also provide page type image preview functionality.
    Excel add-in library: Create Excel xll add-ins.
    Gendering Add-In for Microsoft Office Word 2010: A Word add-in that assists users by giving hints for writing gender-neutral documents. The current function is a post-processing function to verify a written text against the rules of gender-neutral wording in the German language. The definitions are implemented in the form of words and phrases with their gender-neutral replacements as suggestions. The documentation is written in German. A Word add-in that offers support for checking an already-written text against a defined ...
    Kinductor: Kinductor puts you on the podium and in control of a full symphonic orchestra using just your hands and Kinect.
    LPFM Last.fm Scrobbler: LPFM Last.fm Scrobbler is a simple .NET API library for scrobbling to the Last.fm web service. It is designed for desktop, web and mobile applications that target the .NET 4 Framework. The library supports the Scrobble and Now Playing functionality of the Last.fm API version 2.0.
    MDB RIA Service Generator: Auto-generates a RIA service from a given .mdb to create a LightSwitch extension.
    mediaplayer-isen: An awesome project that rocks!!! ^^
    Metodología General Ajustada - MGA: A technological tool that supports the improved methodology for the formulation and evaluation of public investment projects in Colombia (Metodología General Ajustada - MGA). Developed in Visual C# 2008 with a SQL Server 2008 database.
    M-i-c-r-o-S-o-f-t-W-M-S: M8i8c8r8o8S8o8f8t M8i8c8r8o8S8o8f8t M8i8c8r8o8S8o8f8t
    mim: TBA
    Minecraft data viewing tools: A little toolset for Minecraft servers. Contains a basic NBT reader and in-game map viewer.
    PHPCSERP: PHPCS ERP.
    Rangers Build Customization Guide: Scenario-based and hands-on guidance for the customization and deployment of TFS Build activities such as versioning, code signing and branching.
    Rangers Lab Management Guide: Practical and scenario-based guidance, backed by custom VM template automation for reference environments.
    Snail-Blog: An ASP.NET project.
    SSIS Extensions - SFTP Task, PGP Task, Zip Task: A set of custom tasks to extend SSIS. Includes an SFTP task, a PGP encryption task and a zip/unzip task.
    stage: An ASP.NET open-source test project.
    TimeBook: Project description: A simple asp.net mvc project to manage time. The main reason for the project is to learn asp.net mvc. The end product will have the following features: multiple companies/individuals can sign up; each company/individual can add/remove/update their clients.
    UMDH Tracer: Tool that generates & exploits UMDH dumps so that leak detection is easier.

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
    Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or, worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if:
    • It is cost effective
    • It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*
    • It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)
    • It solves the right problem
    With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful as I can be, for reasons that I believe will become obvious below – have gone, to date, unanswered and, more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them:
    • Specific details of security vulnerabilities
    • Including exploit information or technical details of the vulnerability
    • Whether or not there is any mitigation available (as in a patch)
    PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed – to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret.
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of the vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There's a shocker for you.

Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That's the industry norm, not that PCI seems to realize or acknowledge it. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only "vetting" is that they are members of a PCI consortium?

Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is "need to know" (we use our own row-level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don't provide some customers with more information than others, or with vulnerability information and/or patches earlier than others. Failure to remember "insider information always leaks" creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can't actually fix the problem. Thank you!)

Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that the VRA's current requirement for the widespread distribution of security vulnerability exploit details – at any time, but particularly before a vendor can issue a patch or a workaround – is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits – actually, any benefits – to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers.
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, depending upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to the time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get "EZHack" handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the "Hacking Trifecta!" It's fair to say that this is probably the exact opposite of what PCI – or any of us – would want.

Established industry practice concerning vulnerability handling avoids the risks created by the VRA's vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit.

Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerabilities and Exposures (CVE) and the Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses, and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates.

So, what's a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist them in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have known they were being asked to tell all to PCI and put their customers at risk, to do one of the following:

• Contact PCI with your concerns
• Contact Oracle (we are looking for vendors to sign our statement of concern)
• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application

I like to be charitable and say "PCI meant well," but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn't enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions.
By Way of Explanation… Unrelated to PCI whatsoever, and the explanation for why I have not been blogging a lot recently: I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as "chick lit meets geek scene." Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively.

From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness's body (verdict: death by cricket bat), she becomes a suspect. With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh's murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma's mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai'i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer.

* Given the rate of change in technology, today's "thou shalts" are easily next year's "buggy whip guidance."

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
This is the third of a three-part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the EF/MSSQL combination discussed here. Lately, I wondered how you would 'mock' the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with an MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post; this post now will present the NDbUnit approach...

NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (namely DbUnit). In short, it helps you flexibly manage the state of a database in that it lets you easily perform basic operations (e.g. Insert, Delete, Refresh, DeleteAll) against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework.

Preparing the test data

Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs. The testing scenario described here requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one.
With this prerequisite in place, the test fixture's setup code could look something like this:

[TestFixture, TestsOn(typeof(PersonRepository))]
[Metadata("NDbUnit Quickstart URL",
          "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")]
[Description("Uses the NDbUnit library to provide test data to a local database.")]
public class PersonRepositoryFixture
{
    #region Constants

    private const string XmlSchema = @"..\..\TestData\School.xsd";

    #endregion // Constants

    #region Fields

    private SchoolEntities _schoolContext;
    private PersonRepository _personRepository;
    private INDbUnitTest _database;

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
        var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;
        _database = new SqlDbUnitTest(connectionString);
        _database.ReadXmlSchema(XmlSchema);

        var entityConnectionStringBuilder = new EntityConnectionStringBuilder
        {
            Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",
            Provider = "System.Data.SqlClient",
            ProviderConnectionString = connectionString
        };

        _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);
        _personRepository = new PersonRepository(this._schoolContext);
    }

    [FixtureTearDown]
    public void FixtureTearDown()
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        _schoolContext.Dispose();
    }

    ...

As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: Because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assembly's App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INDbUnitTest interface) will be our single access point to the underlying database: We use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this:

private void InsertTestData(params string[] dataFileNames)
{
    _database.PerformDbOperation(DbOperationFlag.DeleteAll);

    if (dataFileNames == null)
    {
        return;
    }

    try
    {
        foreach (string fileName in dataFileNames)
        {
            if (!File.Exists(fileName))
            {
                throw new FileNotFoundException(Path.GetFullPath(fileName));
            }
            _database.ReadXml(fileName);
            _database.PerformDbOperation(DbOperationFlag.InsertIdentity);
        }
    }
    catch
    {
        _database.PerformDbOperation(DbOperationFlag.DeleteAll);
        throw;
    }
}

This lets us easily insert test data from xml files, in any number and in a controlled order (which is important because we eventually must fulfill referential constraints, or we must account for something else that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. Unfortunately, there isn't much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) – a not so uncommon problem with smaller open source projects.
Last but not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example:

<?xml version="1.0" encoding="utf-8" ?>
<School xmlns="http://tempuri.org/School.xsd">
  <Person>
    <PersonID>1</PersonID>
    <LastName>Abercrombie</LastName>
    <FirstName>Kim</FirstName>
    <HireDate>1995-03-11T00:00:00</HireDate>
  </Person>
  <Person>
    <PersonID>2</PersonID>
    <LastName>Barzdukas</LastName>
    <FirstName>Gytis</FirstName>
    <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>
  </Person>
  <Person>
    ...

You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable.

Executing some basic tests

Here are some of the possible tests that can be written with the above preparations in place:

private const string People = @"..\..\TestData\School.People.xml";
...

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.List);

    Assert.Count(34, names);
    Assert.AreEqual("Abercrombie, Kim", names.First());
    Assert.AreEqual("Zheng, Roger", names.Last());
}

[Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")]
[DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")]
public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames()
{
    InsertTestData(People);

    List<string> names =
        _personRepository.GetNameList(NameOrdering.Normal);

    Assert.Count(34, names);
    Assert.AreEqual("Alexandra Walker", names.First());
    Assert.AreEqual("Yan Li", names.Last());
}

[Test, TestsOn("PersonRepository.AddPerson")]
public void AddPerson_CalledOnce_IncreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });

    Assert.AreEqual(count + 1, _personRepository.Count);
}

[Test, TestsOn("PersonRepository.RemovePerson")]
public void RemovePerson_CalledOnce_DecreasesCountByOne()
{
    InsertTestData(People);

    int count = _personRepository.Count;
    _personRepository.RemovePerson(new Person { PersonID = 33 });

    Assert.AreEqual(count - 1, _personRepository.Count);
}

Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and it was also harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases.

Ok, and what if things become somewhat more complex?

Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimic a scenario which represents a more complex or exceptional case.
The following test, for example, deals with the case that there is some sort of invalid input from the caller:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
[Row(null, typeof(ArgumentNullException))]
[Row("", typeof(ArgumentException))]
[Row("NotExistingCourse", typeof(ArgumentException))]
public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType)
{
    var exception = Assert.Throws<RepositoryException>(() =>
                            _personRepository.GetCourseMembers(courseTitle));
    Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException);
}

Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (namely in the xml files, where you sometimes have to spend a lot of effort carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this:

[Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")]
public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents()
{
    InsertTestData(People, Course, Department, StudentGrade);

    List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");

    Assert.Count(4, persons);
    Assert.ForAll(
        persons,
        @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),
        "Person has none of the expected IDs.");
}

If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the xml files used for the test data. And also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database.

Conclusion

The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: Of course, NDbUnit is much slower than Typemock. Technically, it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how many tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel.
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or user-acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparational work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment...

The sample solution

A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs – consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30-day trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects' App.config files accordingly.

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
"Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack.

Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while.

Self-Service BI

Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively," and he transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI.

This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves – from simple changes to a report, like formatting or an additional data visualization, to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me:

PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.)

Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.)

One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations.
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.)

Another product demonstration will be available within the next 30 days – Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.)

Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough – overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.)

This may yet be the dawning of the age of self-service BI – a phrase I've heard repeated from time to time over the last decade – but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users.

It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want, even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

Collaborative BI

I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time.
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest – as they say – is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week, where I got to reminisce with him for a bit), and that began a career that I never imagined at the time.

Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools would catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

The Microsoft BI Stack in General

A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services, which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years.

Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

Expo Hall

I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.

Pragmatic Works.
Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye, which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

    Read the article

  • CodePlex Daily Summary for Saturday, March 19, 2011

CodePlex Daily Summary for Saturday, March 19, 2011

Popular Releases

WPF Inspector: WPF Inspector 0.9.8: New Features in Version 0.9.8 - Much improved Style Explorer - Storing the last window position - Inspector stays operateable when a modal dialog is open - Improved visualization of Resources - Style item in property grid - Bugfixes

Craig's Utility Library: Craig's Utility Library 2.1: This update contains the following functionality additions: Added Min and Max functions to MathHelper. Added code for handling comments to BlogML code. Added WMI based file search ability (at present only searches based on file extension). Added CellularTexture class. Added FaultFormation class. Added PerlinNoise class. Added Akismet helper classes. Added Gravatar image link generator. Added PipeDelimited class. Added FluvialErosion functions to Image c...

DirectQ: Release 1.8.7 (RC3): Release Candidate 3 of 1.8.7, fixing many bugs and adding some new functionality.

Catel - WPF, Silverlight and WP7 MVVM library: 1.3: Catel history: (+) Added (*) Changed (-) Removed (x) Error / bug (fix). For more information about issues or new feature requests, please visit: http://catel.codeplex.com Version 1.3, release date 2011/03/18. Added/fixed: (+) Added BindingHelper class to evaluate binding values manually (+) UserControl<TViewModel> now supports master-detail views where the detail view would have a nested UserControl<TViewModel> and no parent ...

CBM-Command: Version 2.0 - 2011-03-17 - Final Release: This is the final release of CBM-Command Version 2.0. This version is intended to replace all prior versions you may have downloaded. Please see release notes for prior versions to get a comprehensive list of changes. New features since RC1: - (C64) Added double-buffering when displaying the directory in a panel. This eliminates the flickering that users were experiencing when scrolling through long directories. Changes since RC1: - (All Machines) Changed the algorithm for displaying the d...

Phalanger - The PHP Language Compiler for the .NET Framework: 2.1 (March 2011) for .NET 4.0: Introducing the release of Phalanger 2.1 for .NET 4.0. This release brings a big performance boost of about 20% in most operations. This improvement can also be expected in overall performance in many PHP applications, e.g. Wordpress. It is the first release that targets .NET Framework 4.0, which allows developers to move the project forward. To migrate your old Phalanger applications from Phalanger 2.0 to 2.1, please follow Migration to 2.1. Installation package also includes basic version o...

NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.164: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's New: This release adds new layout options for groups, makes some minor feature improvements, and fixes a few bugs. See the Complete NodeXL Release History for details. Installation Steps: Follow these steps to install and use the template: Download the Zip file. Unzip it into any folder.
Use WinZip or a similar program, or just right-click the Zip file in Wi...

Leage of Legends Masteries Tool: LoLMasterSave_v1.6.1.274: - Addresses a recent LoL update that interfered with the way MasterSave sets / reads masteries - Removed Shift windows since some people were experiencing issues. If you're interested in this function, I can provide it as a small separate tool.

LogExpert: 1.4 build 4092: TabControl: Tooltip on dropdown list shows full path names now. New menu item "Lock instance" in Options menu. Only available when "Allow only one instance" is disabled in the settings. "Lock instance" will temporarily enable the single instance mode. The locked instance will receive all newly launched files. Some NullPtrExceptions fixed (e.g. in the settings dialog). Note: The debug build is identical to the release build, but the debug version writes a log file. It also contains line numbers ...

Facebook C# SDK: 5.0.6 (BETA): This is the seventh BETA release of the version 5 branch of the Facebook C# SDK. Remember this is a BETA build. Some things may change or not work exactly as planned. We are absolutely looking for feedback on this release to help us improve the final 5.X.X release. New in this release: Version 5.0.6 is almost completely backward compatible with 4.2.1 and 5.0.3 (BETA). Bug fixes and helpers to simplify many common scenarios. For more information about this release see the following blog posts: F...

SQLCE Code Generator: Build 1.0.3: New beta of the SQLCE Code Generator. New features: - Generates an IDataRepository interface that contains the generated repository interfaces that represent each table - Visual Studio 2010 Custom Tool Support. Custom Tool: The custom tool is called SQLCECodeGenerator. Write this in the Custom Tool field in the Properties Window of an SDF file included in your project; this should create a code-behind file for the generated data access code.

DotNetNuke® Community Edition: 06.00.00 CTP: CTP 1 (Build 155) is firmly focused around our conversion to C#. As many people have noted, this is a significant change to the platform and affects all areas of the product. This is one of the driving factors in why we felt it was important to get this release into your hands as soon as possible. We have already done quite a bit of testing on this feature internally and have fixed a number of issues in this area. We also recognize that there are probably still some more bugs to be found ...

Kooboo CMS: Kooboo 3.0 RC: Bug fixes: Inline editing toolbar positioning for websites with complicated CSS. Inline editing is turned on by default now for the samplesite template. MongoDB version content query for multiple filters. Add a new 404 page to guide users to login and create a first website. Naming validation for page name and datarule name. Files in this download: kooboo_CMS.zip: The Kooboo application files. Content_DBProvider.zip: Additional content database implementations for MSSQL, SQLCE, RavenDB ...

SQL Monitor - tracking sql server activities: SQL Monitor 3.2: 1. introduce sql color syntax highlighting with http://www.codeproject.com/KB/edit/FastColoredTextBox_.aspx

Umbraco CMS: Umbraco 4.7.0: Service release fixing 50+ issues! Getting Started: A great place to start is with our Getting Started Guide: http://umbraco.codeplex.com/Project/Download/FileDownload.aspx?DownloadId=197051 Make sure to check the free foundation videos on how to get started building Umbraco sites.
They're available from: Introduction for webmasters: http://umbraco.tv/help-and-support/video-tutorials/getting-started Understand the Umbraco concepts: http://umbraco.tv/help-and-support...

ProDinner - ASP.NET MVC EF4 Code First DDD jQuery Sample App: first release: ProDinner is an ASP.NET MVC sample application; it uses DDD, EF4 Code First for Data Access, jQuery and MvcProjectAwesome for Web UI, and it has a Multi-language User Interface. Features: CRUD and search operations for entities, Multi-Language User Interface, upload and crop Images (make thumbnail) for meals, pagination using "more results" button, very rich and responsive UI (using Mvc Project Awesome), Multiple UI themes (using jQuery UI themes)

BEPUphysics: BEPUphysics v0.15.1: Latest binary release. Version History

IronRuby: 1.1.3: IronRuby 1.1.3 is a servicing release that keeps on improving compatibility with Ruby 1.9.2 and includes IronRuby integration to Visual Studio 2010. We decided to drop 1.8.6 compatibility mode in all post-1.0 releases. We recommend using IronRuby 1.0 if you need 1.8.6 compatibility. The main purpose of this release is to sync with the IronPython 2.7 release, i.e. to keep the Dynamic Language Runtime that both these languages build on top shareable. This release also fixes a few bugs: 5763 Use...

SQL Server PowerShell Extensions: 2.3.2.1 Production: Release 2.3.2.1 implements SQLPSX as PowerShell version 2.0 modules. SQLPSX consists of 13 modules with 163 advanced functions, 2 cmdlets and 7 scripts for working with ADO.NET, SMO, Agent, RMO, SSIS, SQL script files, PBM, Performance Counters, SQLProfiler, Oracle and MySQL, and using Powershell ISE as a SQL and Oracle query tool. In addition, optional backend databases and SQL Server Reporting Services 2008 reports are provided with the SQLServer and PBM modules. See readme file for details.

IronPython: 2.7: On behalf of the IronPython team, I'm very pleased to announce the release of IronPython 2.7. This release contains all of the language features of Python 2.7, as well as several previously missing modules and numerous bug fixes. IronPython 2.7 also includes built-in Visual Studio support through IronPython Tools for Visual Studio. IronPython 2.7 requires .NET 4.0 or Silverlight 4. To download IronPython 2.7, visit http://ironpython.codeplex.com/releases/view/54498. Any bugs should be report...

New Projects

BRFin: Personal Finances Management System

Command Savvy: Command Savvy provides developers with a simple API for quickly creating consistent console applications. It uses conventions and reflection-based auto-wiring to simplify configuration (while allowing for overrides of this default behavior). It is developed in C#.

Control Arrays .NET: Ability to create control arrays at design time for standard Windows Forms controls, with all the functionality that existed in VB6, in VB.NET using Visual Studio 2010 and the .NET 4.0 platform.

Crystalbyte Horizon: Crystalbyte Horizon is a carrier application written in C# using WPF, licenced under the Ms-PL.

Dimos GIS Overlay Project: GIS Project to compute the spatial overlay on the Azure platform

Eric Fang's SharePoint workflow timer: SharePoint 2010 workflow timer, which can schedule site workflows and list workflows

jaryaan: Jaryaan makes it easier for infected systems to force close unwanted processes to bring health back to the system. Persian translation is available

Makalu: Track all the problems and issues of your city.

Mosscn: mosscn project

Netduino4Fun: Netduino projects for fun.
Experiments with the Electronic brick Starter kit from Seeed Studio.

NHibernate AutoCriteria: Creating criteria based on...

Nina: Nina is an open source asynchronous event-driven networking library.

Ploobs Game Engine: Full Game Engine developed in C# and XNA using Deferred Rendering. The principal features are: Integrated Physics, Artificial Intelligence, Dynamic Lights and Shadow, Lots of Post Effects, Billboards, Extensible Particle System, Vegetation, Material types, 3D Sound and MUCH MORE!

Roadkill - .NET Wiki engine: Roadkill .NET is a lightweight but powerful Wiki engine built on the following foundations: .NET 4.0, jQuery, ASP.NET MVC 3 with Razor, NHibernate, Creole Wiki/Markdown/Mediawiki syntax, SQL Server

Service Application Sample: Practical sample of a service application based on known patterns and practices.

SharePoint 2010 Custom Login: This is a custom login screen for SharePoint 2010 FBA. Working on a custom authentication.aspx to replace the Windows Authentication / Forms Authentication prompt as well, based on URL / Zone.

SharePoint People Search Pivot Viewer WebPart: The Silverlight PivotViewer control is a new way to display SharePoint 2010 Enterprise Search Results. It's another way to interact with massive amounts of data on the web in ways that are powerful, informative, and valuable. Tested on FAST Search 2010.

SharePoint ReGhost: Console tool created for reghosting SharePoint sites after an upgrade.

Simesoft: ????????

Spruce - MVC frontend for TFS: Spruce is a small ASP.NET MVC 3 (razor) frontend for managing work items in Team Foundation Server 2010. It is influenced by Bitbucket, Fogbugz and other simple-to-use bug tracking systems.

Twavatar: Super simple ASP.NET helper for rendering Twitter avatars / profile images.

UCB_GP_B: this is a pilot project which is intended to help college education.
UCB_GP_C: this is a pilot project which is intended to help college education.
UCB_GP_D: this is a pilot project which is intended to help college education.

Ultimate Calculator for windows phone 7: ???????????? ????????????????????????????

WP7 Text-to-Speech Tool & Translation Library: Windows Phone Text-to-Speech (wpTTS) produces speech from text strings. wpTTS also provides real-time translation between a select list of languages. (AppID required.)

    Read the article

  • Scrum Master Stephen Forte Teaches Agile Development, Silverlight and BI at GIDS 2010

    - by rajesh ahuja
Great Indian Developer Summit 2010 – Gold Standard for India's Software Developer Ecosystem

Bangalore, March 25, 2010: The author of several books on application and database development, including Programming SQL Server 2008, and certified Scrum Master Stephen Forte is coming this summer to India's biggest summit for the developer ecosystem – Great Indian Developer Summit. At the summit, Stephen will conduct a workshop guaranteed to give attendees a jump start in taking a certified scrum master exam. Scrum is one of the most popular Agile project management and development methods, and it is starting to be adopted at major corporations and on very large projects. After an introduction to the basics of Scrum, like project planning and estimation, the Scrum Master, team, product owner and burn down, and of course the daily Scrum, Stephen will show many real-world applications of the methodology drawn from his own experience as a Scrum Master. Negotiating with the business, estimation and team dynamics are all discussed, as well as how to use Scrum in small organizations, large enterprise environments and consulting environments. Stephen will also discuss using Scrum with virtual teams and in an off-shoring environment. He will then take a look at the tools we will use for Agile development, including planning poker, unit testing, and much more.

On 20th April at the GIDS.NET Conference, Stephen will also conduct a series of sessions on Microsoft computing technologies. He will teach how to build data-driven, n-tier Rich Internet Applications (RIA) with Silverlight 4.0. Line of business (LOB) applications in Silverlight 4.0 are easy to build by tapping the power of WCF RIA Services, the Silverlight Toolkit, and elevated out-of-browser support. Stephen's demo-centric session will walk you through an example of building a LOB application with Silverlight 4.0. See how Silverlight and WCF RIA Services support domain logic, services, data binding, validation, server-based paging, authentication, authorization and much more. Silverlight 4.0 means business.

Silverlight runs C# and Visual Basic code, and so it seems natural that a business application might share some code between the Silverlight client and its ASP.NET Web server. You may want to run some code client-side for interactivity, but re-run that code on the server for security or reliability. This is possible, and there are several techniques you can use to accomplish this goal. In Stephen's second talk, learn about the various techniques and their pros and cons. Some techniques work better in C#, others in VB. Still others are simpler with a little extra tooling or code-generation. Any serious Silverlight business application will almost certainly face this issue, and this session gets you going fast.

In the third talk, Stephen will explain how to properly architect and deploy a BI application using a mix of some exciting new tools and some old familiar ones. He will start with a traditional relational transaction-centric database (OLTP) and explore ways to build a data warehouse (OLAP), looking at the star and snowflake schemas. Next he will look at the process of extracting, transforming, and loading (ETL) your OLTP data into your data warehouse. Different techniques for ETL will be described and the various tradeoffs will be discussed. Then he will look at using the warehouse for reporting, drill down, and data analysis in Microsoft Excel's PowerPivot 2010.
The session will round off by showing how to properly build a cube and build a data analysis application on top of that cube, and conclude by looking at some tools to help with the data visualization process.

Every year, GIDS is a game changer for several thousands of IT professionals, providing them with a competitive edge over their peers, enlightening them with bleeding-edge information most useful in their daily jobs, helping them network with world-class experts and visionaries, and providing them with a much needed thrust in their careers. Attend Great Indian Developer Summit to gain the information, education and solutions you seek – from post-conference workshops and breakout sessions by expert instructors, to keynotes by industry heavyweights, enhanced networking opportunities, and more.

About Great Indian Developer Summit

Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advice from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials – GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms; practical tutorials that deep dive into technical skill and best practices; inspirational keynote presentations; an Expo Hall featuring dozens of the latest projects and products; engaging networking events; and interaction with the best and brightest speakers from around the world. For further information on GIDS 2010, please visit the summit on the web at http://www.developersummit.com/

A Saltmarch Media Press Release
E: [email protected]
Ph: +91 80 4005 1000

    Read the article

  • Can you help me get my head around openssl public key encryption with rsa.h in c++?

    - by Ben
    Hi there, I am trying to get my head around public key encryption using the openssl implementation of RSA in C++. Can you help? So far these are my thoughts (please correct them if necessary):

    - Alice is connected to Bob over a network
    - Alice and Bob want secure communications
    - Alice generates a public/private key pair and sends the public key to Bob
    - Bob receives the public key, encrypts a randomly generated symmetric cipher key (e.g. Blowfish) with it, and sends the result to Alice
    - Alice decrypts the ciphertext with the private key she originally generated and obtains the symmetric Blowfish key
    - Alice and Bob now both know the symmetric Blowfish key and can establish a secure communication channel

    Now, I have looked at the openssl/rsa.h RSA implementation (since I already have practical experience with openssl/blowfish.h), and I see these two functions:

        int RSA_public_encrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding);
        int RSA_private_decrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa, int padding);

    If Alice is to generate *rsa, how does this yield the RSA key pair? Is there something like rsa_public and rsa_private derived from rsa? Does *rsa contain both the public and private key, with the functions above automatically stripping out whichever part they need? Or should two unique *rsa pointers be generated, so that we actually have:

        int RSA_public_encrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa_public, int padding);
        int RSA_private_decrypt(int flen, unsigned char *from, unsigned char *to, RSA *rsa_private, int padding);

    Secondly, in what format should the *rsa public key be sent to Bob? Must it be reinterpreted as a character array and then sent the standard way? I've heard something about certificates -- do they have anything to do with it? Sorry for all the questions. Best wishes, Ben.

    EDIT: Code I am currently employing:

        /*
         * theEncryptor.cpp
         * Created by ben on 14/01/2010.
         * Copyright 2010 __MyCompanyName__. All rights reserved.
         */

        #include "theEncryptor.h"
        #include <iostream>
        #include <sstream>
        #include <sys/socket.h>
        // Headers needed by the calls below (presumably also reachable via theEncryptor.h):
        #include <cstdio>             // sprintf
        #include <cstdlib>            // malloc, free
        #include <cstring>            // memset, memcpy
        #include <strings.h>          // bzero
        #include <openssl/blowfish.h>
        #include <openssl/bn.h>
        #include <openssl/rand.h>
        #include <openssl/rsa.h>
        #include <openssl/sha.h>

        theEncryptor::theEncryptor() {}

        // Symmetric encryption/decryption in place; enc is BF_ENCRYPT or BF_DECRYPT.
        void theEncryptor::blowfish(unsigned char *data, int data_len, unsigned char *key, int enc)
        {
            // Hash the key first. NB: this always reads 64 bytes from key,
            // so the caller must supply a buffer at least that large.
            unsigned char obuf[20];
            bzero(obuf, 20);
            SHA1((const unsigned char*)key, 64, obuf);

            BF_KEY bfkey;
            int keySize = 16; // use the first 16 bytes of the 20-byte SHA-1 digest
            BF_set_key(&bfkey, keySize, obuf);

            // All-zero IV: acceptable only if each key encrypts a single message.
            unsigned char ivec[16];
            memset(ivec, 0, 16);

            unsigned char *out = (unsigned char*)malloc(data_len);
            bzero(out, data_len);
            int num = 0;
            BF_cfb64_encrypt(data, out, data_len, &bfkey, ivec, &num, enc);
            memcpy(data, out, data_len);
            free(out);
        }

        void theEncryptor::generateRSAKeyPair(int bits)
        {
            // The returned RSA* holds the entire key pair (n, e, d, p, q, ...).
            // (RSA_generate_key is deprecated in newer OpenSSL in favour of RSA_generate_key_ex.)
            rsa = RSA_generate_key(bits, 65537, NULL, NULL);
        }

        int theEncryptor::publicEncrypt(unsigned char *data, unsigned char *dataEncrypted, int dataLen)
        {
            return RSA_public_encrypt(dataLen, data, dataEncrypted, rsa, RSA_PKCS1_OAEP_PADDING);
        }

        int theEncryptor::privateDecrypt(unsigned char *dataEncrypted, unsigned char *dataDecrypted)
        {
            return RSA_private_decrypt(RSA_size(rsa), dataEncrypted, dataDecrypted, rsa, RSA_PKCS1_OAEP_PADDING);
        }

        void theEncryptor::receivePublicKeyAndSetRSA(int sock, int bits)
        {
            int max_hex_size = (bits / 4) + 1;
            char keybufA[max_hex_size];
            bzero(keybufA, max_hex_size);
            char keybufB[max_hex_size];
            bzero(keybufB, max_hex_size);
            int n = recv(sock, keybufA, max_hex_size, 0);
            n = send(sock, "OK", 2, 0);
            n = recv(sock, keybufB, max_hex_size, 0);
            n = send(sock, "OK", 2, 0);
            // Rebuild a public-only RSA* from the received hex strings.
            // Direct access to rsa->n / rsa->e only compiles against OpenSSL 1.0.x
            // and earlier, where struct RSA is not opaque.
            rsa = RSA_new();
            BN_hex2bn(&rsa->n, keybufA);
            BN_hex2bn(&rsa->e, keybufB);
        }

        void theEncryptor::transmitPublicKey(int sock, int bits)
        {
            const int max_hex_size = (bits / 4) + 1;
            long size = max_hex_size;
            char keyBufferA[size];
            char keyBufferB[size];
            bzero(keyBufferA, size);
            bzero(keyBufferB, size);
            // Send the public modulus and exponent as hex strings.
            // (BN_bn2hex allocates; the result should really be freed with OPENSSL_free.)
            sprintf(keyBufferA, "%s\r\n", BN_bn2hex(rsa->n));
            sprintf(keyBufferB, "%s\r\n", BN_bn2hex(rsa->e));
            int n = send(sock, keyBufferA, size, 0);
            char recBuf[2];
            n = recv(sock, recBuf, 2, 0);
            n = send(sock, keyBufferB, size, 0);
            n = recv(sock, recBuf, 2, 0);
        }

        void theEncryptor::generateRandomBlowfishKey(unsigned char *key, int bytes)
        {
            // RAND_bytes returns 1 on success, 0 if the PRNG lacked entropy.
            int n = RAND_bytes(key, bytes);
            if (n == 0)
                std::cout << "Warning: key was generated with bad entropy. "
                          << "You should not consider communication to be secure" << std::endl;
        }

        theEncryptor::~theEncryptor() {}
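    On the transport-format question: rather than sending raw hex dumps of n and e in fixed-size buffers, one common approach is to let OpenSSL serialize the public half to PEM text and parse it back on the other side. The sketch below uses the same pre-3.0 OpenSSL RSA/PEM API as the code above; the helper names (publicKeyToPem, pemToPublicKey) are purely illustrative, not part of any library. Certificates sit one level higher: an X.509 certificate wraps a public key together with an identity and a CA signature, which matters if Bob must verify that the key really belongs to Alice.

        // Sketch: PEM round-trip for the public key. Only n and e are written,
        // so the private half never leaves Alice's machine.
        #include <string>
        #include <openssl/bio.h>
        #include <openssl/pem.h>
        #include <openssl/rsa.h>

        // Alice: encode the public half of the key pair as PEM text.
        std::string publicKeyToPem(RSA *keypair)
        {
            BIO *mem = BIO_new(BIO_s_mem());
            PEM_write_bio_RSAPublicKey(mem, keypair); // writes only n and e
            char *buf = NULL;
            long len = BIO_get_mem_data(mem, &buf);
            std::string pem(buf, len);
            BIO_free(mem);
            return pem; // plain text, safe to length-prefix and send()
        }

        // Bob: rebuild a public-only RSA* from the received PEM text.
        RSA *pemToPublicKey(const std::string &pem)
        {
            BIO *mem = BIO_new_mem_buf((void*)pem.data(), (int)pem.size());
            RSA *pub = PEM_read_bio_RSAPublicKey(mem, NULL, NULL, NULL);
            BIO_free(mem);
            return pub; // usable with RSA_public_encrypt(); holds no private part
        }

    As to the key-pair question: a single RSA* returned by RSA_generate_key() does hold both halves, and RSA_public_encrypt() reads only n and e from it, so passing the full key pair on Alice's side and a rebuilt public-only key on Bob's side both work.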

    Read the article

  • Python hashable dicts

    - by TokenMacGuy
    As an exercise, and mostly for my own amusement, I'm implementing a backtracking packrat parser. The inspiration for this is that I'd like to have a better idea of how hygienic macros would work in an algol-like language (as opposed to the syntax-free lisp dialects you normally find them in). Because of this, different passes through the input might see different grammars, so cached parse results are invalid unless I also store the current version of the grammar along with the cached parse results. (EDIT: a consequence of this use of key-value collections is that they should be immutable, but I don't intend to expose the interface to allow them to be changed, so either mutable or immutable collections are fine.)

    The problem is that python dicts cannot appear as keys to other dicts. Even using a tuple (as I'd be doing anyway) doesn't help.

        >>> cache = {}
        >>> rule = {"foo":"bar"}
        >>> cache[(rule, "baz")] = "quux"
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unhashable type: 'dict'
        >>>

    I guess it has to be tuples all the way down. Now the python standard library provides approximately what I'd need: collections.namedtuple has a very different syntax, but can be used as a key. Continuing from the above session:

        >>> from collections import namedtuple
        >>> Rule = namedtuple("Rule", rule.keys())
        >>> cache[(Rule(**rule), "baz")] = "quux"
        >>> cache
        {(Rule(foo='bar'), 'baz'): 'quux'}

    OK. But I have to make a class for each possible combination of keys in the rule I would want to use, which isn't so bad, because each parse rule knows exactly what parameters it uses, so that class can be defined at the same time as the function that parses the rule. But combining the rules together is much more dynamic. In particular, I'd like a simple way to have rules override other rules, but collections.namedtuple has no analogue to dict.update().

    EDIT: An additional problem with namedtuples is that they are strictly positional. Two tuples that look like they should be different can in fact be the same:

        >>> you = namedtuple("foo", ["bar", "baz"])
        >>> me = namedtuple("foo", ["bar", "quux"])
        >>> you(bar=1, baz=2) == me(bar=1, quux=2)
        True
        >>> bob = namedtuple("foo", ["baz", "bar"])
        >>> you(bar=1, baz=2) == bob(bar=1, baz=2)
        False

    tl;dr: How do I get dicts that can be used as keys to other dicts?

    Having hacked a bit on the answers, here's the more complete solution I'm using. Note that this does a bit of extra work to make the resulting dicts vaguely immutable for practical purposes. Of course it's still quite easy to hack around it by calling dict.__setitem__(instance, key, value), but we're all adults here:

        class hashdict(dict):
            """
            hashable dict implementation, suitable for use as a key into
            other dicts.

            >>> h1 = hashdict({"apples": 1, "bananas": 2})
            >>> h2 = hashdict({"bananas": 3, "mangoes": 5})
            >>> h1 + h2
            hashdict(apples=1, bananas=3, mangoes=5)
            >>> d1 = {}
            >>> d1[h1] = "salad"
            >>> d1[h1]
            'salad'
            >>> d1[h2]
            Traceback (most recent call last):
            ...
            KeyError: hashdict(bananas=3, mangoes=5)

            based on answers from
            http://stackoverflow.com/questions/1151658/python-hashable-dicts
            """
            def __key(self):
                # canonical, order-independent representation of the contents
                return tuple(sorted(self.items()))

            def __repr__(self):
                return "{0}({1})".format(self.__class__.__name__,
                    ", ".join("{0}={1}".format(str(i[0]), repr(i[1]))
                              for i in self.__key()))

            def __hash__(self):
                return hash(self.__key())

            # block every mutating method so hashes stay stable
            def __setitem__(self, key, value):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))

            def __delitem__(self, key):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))

            def clear(self):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))

            def pop(self, *args, **kwargs):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))

            def popitem(self, *args, **kwargs):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))

            def setdefault(self, *args, **kwargs):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))

            def update(self, *args, **kwargs):
                raise TypeError("{0} does not support item assignment"
                                .format(self.__class__.__name__))

            def __add__(self, right):
                # non-mutating analogue of dict.update: right wins on conflicts
                result = hashdict(self)
                dict.update(result, right)
                return result

        if __name__ == "__main__":
            import doctest
            doctest.testmod()

    Read the article

  • What add-in/workbench framework is the best .NET alternative to Eclipse RCP?

    - by Winston Fassett
    I'm looking for a plugin-based application framework that is comparable to the Eclipse Plugin Framework, which to my simple mind consists of:

    1. a core plugin management framework (Equinox / OSGi), which provides the ability to declare extension endpoints and then discover and load plugins that service those endpoints. (This is different from Dependency Injection, though admittedly the difference is subtle: configuration is highly decentralized, there are versioning concerns, it might involve an online plugin repository, and most importantly to me, it should be easy for the user to add plugins without needing to know anything about the underlying architecture / config files.)
    2. many layers of plugins that provide a basic workbench shell with concurrency support, commands, preference sheets, menus, toolbars, key bindings, etc.

    That is just scratching the surface of the RCP, which itself is meant to serve as the foundation of your application, which you build by writing / assembling even more plugins. Here's what I've gleaned from the internet in the past couple of days... As far as I can tell, there is nothing in the .NET world that remotely approaches the robustness and maturity of the Eclipse RCP for Java, but there are several contenders that do either #1 or #2 pretty well. (I should also mention that I have not made a final decision on WinForms vs WPF, so I'm also trying to understand the level of UI coupling in any candidate framework. I'm also wondering about platform coupling and source code licensing.) I must say that the open-source stuff is generally less documented but easier to understand, while the MS stuff typically has more documentation but is less accessible, so with many of the MS technologies I'm left wondering what they actually do, in a practical sense. These are the libraries I have found:

    SharpDevelop
    The first thing I looked at was SharpDevelop, which does both #1 and also #2 in a basic way (no insult to SharpDevelop, which is admirable - I just mean more basic than Eclipse RCP). However, SharpDevelop is an application more than a framework, and there are basic assumptions and limitations there (i.e. being somewhat coupled to WinForms). Still, there are some articles on CodeProject explaining how to use it as the foundation for an application.

    System.AddIns
    It appears that System.AddIns is meant to provide a robust add-in loading framework, with some sophisticated options for loading assemblies with varying levels of trust and even running them out of process. It appears to be primarily code-based, and pretty code-heavy, with lots of assemblies that serve to insulate against versioning issues, using Guidance Automation to generate a good deal of code. So far I haven't found many System.AddIns articles that illustrate how it could be used to build something like an Eclipse RCP, and many people seem to be wringing their hands about its complexity.

    Mono.Addins
    It appears that Mono.Addins was influenced by System.AddIns, SharpDevelop, and MonoDevelop. It seems to provide the basics from System.AddIns, with less sophisticated options for plugin loading, but more simplicity, with attribute-based registration, XML manifests, and the infrastructure for online plugin repositories. It has a pretty good FAQ and documentation, as well as a fairly robust set of examples that really help paint a picture of how to develop an architecture like that of SharpDevelop or Eclipse. The examples use GTK for UI, but the framework itself is not coupled to GTK. So it appears to do #1 (add-in loading) pretty well and points the way to #2 (workbench framework). It appears that Mono.Addins was derived from MonoDevelop, but I haven't actually looked at whether MonoDevelop provides a good core workbench framework.

    Managed Extensibility Framework
    This is what everyone's talking about at the moment, and it's slowly getting clearer what it does, but I'm still pretty fuzzy, even after reading several posts on SO. The official word is that it "can live side-by-side" with System.AddIns. However, it doesn't reference it, and it appears to reproduce some of its functionality. It seems to me, then, that it is a simpler, more accessible alternative to System.AddIns. It appears to be more like Mono.Addins in that it provides attribute-based wiring. It provides "catalogs" that can be attribute-based or directory-based. It does not seem to provide any XML or manifest-based wiring. So far I haven't found much documentation, and the examples seem kind of "magical" and more reminiscent of attribute-based DI, despite the clarifications that MEF is not a DI container. Its license just got opened up, but it does reference WindowsBase -- not sure if that means it's coupled to Windows.

    Acropolis
    I'm not sure what this is. Is it MEF, or something that is still coming?

    Composite Application Blocks
    There are WPF and WinForms Composite Application Blocks that seem to provide much more of a workbench framework. I have very little experience with these, but they appear to rely on Guidance Automation quite a bit and are obviously coupled with the UI layers. There are a few examples of combining MEF with these application blocks.

    I've done the best I could to answer my own question here, but I'm really only scratching the surface, and I don't have experience with any of these frameworks. Hopefully some of you can add more detail about the frameworks you have experience with. It would be great if we could end up with some sort of comparison matrix.

    Read the article

  • Tools and Utilities for the .NET Developer

    - by mbcrump
    Tweet this list! Add a link to my site to your bookmarks to quickly find this page again! Add me to twitter! This is a list of the tools/utilities that I use to do my job/hobby. I wanted this page to load fast and contain only information that you care about. If I have missed a tool that you like, feel free to contact me and I will add it to the list. Also, this list took a lot of time to complete. Please do not steal my work; if you like the page, please link back to my site. I will keep the links/information updated as new tools/utilities are created.

    Windows/.NET Development – This is a list of tools that any Windows/.NET developer should have in his bag. I have used everything listed on this page at some point in my career, and below are the tools worth keeping.

    AnkhSVN – Subversion support for Visual Studio. It also works with VS2010. (Free)
    Aurora XAML Designer – One of the best XAML creation tools available. Has a ton of built-in templates that you can copy/paste into VS2010. (COST/Trial)
    BeyondCompare – Beyond Compare 3 is the ideal tool for comparing files and folders on your Windows or Linux system. Visualize changes in your code and carefully reconcile them. (COST/Trial)
    BuildIT Automated Task Tool – Its main purpose is to automate tasks, whether it is the final packaging of a product, an automated daily build, sending out a mailing list, or even backing up files. (Free)
    C Sharper for VB – Convert VB to C#. (COST)
    CLRProfiler – Analyze and improve the behavior of your .NET app. (Free)
    CodeRush – Direct competitor to ReSharper with similar features. This is one of those decide-for-yourself tools. (COST/Trial)
    Disk2VHD – A utility that creates VHD (Virtual Hard Disk, Microsoft's virtual machine disk format) versions of physical disks for use in Microsoft Virtual PC or Microsoft Hyper-V virtual machines (VMs). (Free)
    Eazfuscator.NET – A free obfuscator for .NET. The main purpose is to protect the intellectual property of software. (Free)
    EQATEC Profiler – Make your .NET app run faster. No source code changes are needed: just point the profiler at your app, run the modified code, and get a visual report. (COST)
    Expression Studio 3/4 – Comes with Web, Blend, SketchFlow and more. You can create websites, produce beautiful XAML and more. (COST/Trial)
    Expresso – The award-winning Expresso editor is equally suitable as a teaching tool for the beginning user of regular expressions or as a full-featured development environment for the experienced programmer or web designer with an extensive knowledge of regular expressions. (Free)
    Fiddler – A web debugging proxy which logs all HTTP(S) traffic between your computer and the internet. (Free)
    Firebug – Powerful web development tool. If you build websites, you will need this. (Free)
    FxCop – An application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. (Free)
    GAC Browser and Remover – Easy way to remove multiple assemblies from the GAC. Assemblies registered by programs like InstallShield can also be removed. (Free)
    GAC Util – The Global Assembly Cache tool allows you to view and manipulate the contents of the global assembly cache and download cache. (Free)
    HelpScribble – A full-featured, easy-to-use help authoring tool for creating help files from start to finish. You can create WinHelp (.hlp) files, HTML Help (.chm) files, a printed manual and online documentation (on a web site), all from the same Help Scribble project. (COST/Trial)
    IETester – A free web browser that gives you the rendering and JavaScript engines of IE9 preview, IE8, IE7, IE6 and IE5.5 on Windows 7, Vista and XP, as well as the installed IE, in the same process. (Free)
    iTextSharp – iText# (iTextSharp) is a port of the iText open source Java library for PDF generation, written entirely in C# for the .NET platform. Use the iText mailing list to get support. (Free)
    Kaxaml – A lightweight XAML editor that gives you a "split view" so you can see both your XAML and your rendered content. (Free)
    LINQPad – Lets you interactively query databases in LINQ. (Free)
    Linquer – Many programmers are familiar with SQL and will need help in the transition to LINQ. Sometimes there are complicated queries to be written, and Linqer can help by converting SQL scripts to LINQ. (COST/Trial)
    LiquidXML – Liquid XML Studio 2010 is an advanced XML developer's toolkit and IDE, containing all the tools needed for designing and developing XML schemas and applications. (COST/Trial)
    Log4Net – A tool to help the programmer output log statements to a variety of output targets. log4net is a port of the excellent log4j framework to the .NET runtime, kept similar in spirit to the original log4j while taking advantage of new features in the .NET runtime. For more information on log4net, see the features document. (Free)
    Microsoft Web Platform Installer – The Microsoft Web Platform Installer 2.0 (Web PI) is a free tool that makes it easy to get the latest components of the Microsoft Web Platform, including Internet Information Services (IIS), SQL Server Express, .NET Framework and Visual Web Developer. (Free)
    Mono Development – Don't have Visual Studio? No problem! This is an open source C# and .NET development environment for Linux, Windows, and Mac OS X. (Free)
    Net Mass Downloader – While it's great that Microsoft has released the .NET Reference Source Code, you can only get it one file at a time while you're debugging. If you'd like to batch download it for reading or to populate the cache, you'd have to write a program that instantiated and called each method in the Framework Class Library. Fortunately, .NET Mass Downloader comes to the rescue! (Free)
    nMap – Nmap ("Network Mapper") is a free and open source utility for network exploration or security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. (Free)
    NoScript (Firefox add-in) – The NoScript Firefox extension provides extra protection for Firefox, Flock, Seamonkey and other Mozilla-based browsers: this free, open source add-on allows JavaScript, Java, Flash and other plug-ins to be executed only by trusted web sites of your choice (e.g. your online bank), and provides the most powerful anti-XSS protection available in a browser. (Free)
    NotePad 2 – Notepad2, a fast and lightweight Notepad-like text editor with syntax highlighting. This program can be run out of the box without installation, and does not touch your system's registry. (Free)
    PageSpy – A small add-on for Internet Explorer that allows you to select any element within a webpage, select an option in the context menu, and view detailed information about both the coding behind the page and the element you selected. (Free)
    Phrase Express – PhraseExpress manages your frequently used text snippets in customizable categories for quick access. (Free)
    PowerGui – A free community for PowerGUI, a graphical user interface and script editor for Microsoft Windows PowerShell. (Free)
    Powershell – Comes with Win7; you can automate tasks using the .NET Framework. Great for network admins. (Free)
    Process Explorer – Ever wondered which program has a particular file or directory open? Now you can find out. Process Explorer shows you information about which handles and DLLs processes have opened or loaded. Also included in the Sysinternals Suite. (Free)
    Process Monitor – An advanced monitoring tool for Windows that shows real-time file system, Registry and process/thread activity. (Free)
    Reflector – Explore and analyze compiled .NET assemblies, viewing them in C#, Visual Basic, and IL. An essential for any .NET developer. (Free)
    Regular Expression Library – Stuck on a regular expression but think someone has already figured it out? Chances are they have. (Free)
    Regulator – Makes regular expressions easy. A must-have for a .NET developer. (Free)
    RenameMaestro – Probably the easiest batch file renamer you'll find to instantly rename multiple files. (COST)
    ReSharper – The one program that I cannot live without. Supports VS2010 and offers simple refactoring, code analysis/assistance/cleanup/templates. One of the few applications that is worth the $$$. (COST/Trial)
    ScrewTurn Wiki – Allows you to create, manage and share wikis. A wiki is a collaboratively edited, information-centered website: the most famous is Wikipedia. (Free)
    SharpDevelop – What is #develop? SharpDevelop is a free IDE for C# and VB.NET projects on Microsoft's .NET platform. (Free)
    Show Me The Template – A tool for exploring the templates, be their data, control or items panel, that come with the controls built into WPF, for all 6 themes. (Free)
    SnippetCompiler – Compiles code snippets without opening Visual Studio. It does not support .NET 4. (Free)
    SQL Prompt – A plug-in that increases how fast you can work with SQL. It provides code completion for SQL Server, reformatting, DB schema information and snippets. Awesome! (COST/Trial)
    SQLinForm – An automatic SQL code formatter for all major databases, including Oracle, SQL Server, DB2, UDB, Sybase, Informix, PostgreSQL, Teradata, MySQL, MS Access etc., with over 70 formatting options. (COST/free online)
    SSMS Tools – SSMS Tools Pack is an add-in for Microsoft SQL Server Management Studio (SSMS), including SSMS Express. (Free)
    Storm – STORM is a free and open source tool for testing web services. (Free)
    Telerik Code Convertor – Convert code from VB to C# and vice versa. (Free)
    TortoiseSVN – A really easy-to-use revision control / version control / source control software for Windows. Since it's not an integration for a specific IDE, you can use it with whatever development tools you like. (Free)
    UltraEdit – The ideal text, HTML and hex editor, and an advanced PHP, Perl, Java and JavaScript editor for programmers. UltraEdit is also an XML editor, including a tree-style XML parser. An industry award winner, UltraEdit supports disk-based 64-bit file handling (standard) on 32-bit Windows platforms (Windows 2000 and later). (COST/Trial)
    Virtual Windows XP – Comes with some W7 versions and allows you to run WinXP alongside W7. (Free)
    VirtualBox – Virtualization by Sun Microsystems. You can virtualize Windows, Linux and more. (Free)
    Visual Log Parser – SQL queries against a variety of log files and other system data sources. (Free)
    WinMerge – An open source differencing and merging tool for Windows. WinMerge can compare both folders and files, presenting differences in a visual text format that is easy to understand and handle. (Free)
    Wireshark – One of the best network protocol analyzers for Unix and Windows. This has gotten me out of a bind several times. (Free)
    XML Notepad 07 – Old, but still one of my favorite XML viewers. (Free)

    Productivity Tools – This is the list of tools that I use to save time or quickly navigate around Windows.

    AutoHotKey – Automate almost anything by sending keystrokes and mouse clicks. You can write a mouse or keyboard macro by hand or use the macro recorder. (Free)
    CLCL – A clipboard caching utility. (Free)
    Ditto – An extension to the standard Windows clipboard. It saves each item placed on the clipboard, allowing you access to any of those items at a later time, and lets you save any type of information that can be put on the clipboard: text, images, HTML, custom formats... (Free)
    Evernote – Remember everything from notes to photos. It will sync between computers/devices. (Free)
    InfoRapid – A search tool that displays all your search results in an HTML-like browser. Clicking a word in that browser starts another search for the word you clicked on; handy if you want to track something back to its true origin. The word you looked for is highlighted in red, clicking the red word opens the containing file in a text-based viewer, and clicking any word in the opened document starts another search on that word. (Free)
    KatMouse – The prime purpose of the KatMouse utility is to enhance the functionality of mice with a scroll wheel, offering 'universal' scrolling: moving the mouse wheel scrolls the window directly beneath the mouse cursor (not the one with the keyboard focus, which is the default on Windows OSes). This is a major increase in the usefulness of the mouse wheel. (Free)
    ScreenR – Instant screencasts with nothing to download. Works with Mac or PC, and free. (Free)
    Start++ – An enhancement for the Start Menu in Windows Vista. It also extends the Run box and the command line with customizable commands. For example, typing "w Windows Vista" will take you to the Windows Vista page on Wikipedia! (Free)
    Synergy – Lets you easily share a single mouse and keyboard between multiple computers with different operating systems, each with its own display, without special hardware. It's intended for users with multiple computers on their desk, since each system uses its own monitor(s). (Free)
    Texter – Lets you define text-substitution hot strings that, when triggered, replace the hotstring with a larger piece of text. By entering your most commonly typed snippets of text into Texter, you can save countless keystrokes in the course of the day. (Free)
    Total Commander – File handling, FTP, archive handling and much more. Even works with Win 3.11. (COST/Trial available)
    Wizmouse – A mouse enhancement utility that makes your mouse wheel work on the window currently under the mouse pointer, instead of the currently focused window. This means you no longer have to click on a window before being able to scroll it with the mouse wheel, which is a far more comfortable and practical way to use the mouse wheel. (Free)
    Xmarks – Bookmark sync and search between computers. (Free)

    General Utilities – This is a list for power users or anyone who wants more out of Windows. I usually install a majority of these whenever I get a new system.

    µTorrent – A lightweight and efficient BitTorrent client for Windows or Mac with many features. I use this for downloading LEGAL media. (Free)
    Audacity – Free, open source software for recording and editing sounds. It is available for Mac OS X, Microsoft Windows, GNU/Linux, and other operating systems. Check the Audacity wiki and forum for more information. (Free)
    AVast Free – Free antivirus. (Free)
    CD Burner XP Pro – CDBurnerXP is a free application to burn CDs and DVDs, including Blu-Ray and HD-DVDs. It also includes the feature to burn and create ISOs, as well as a multilanguage interface. (Free)
    CDEX – Extract digital audio CDs into mp3/wav. (Free)
    Combofix – Freeware (a legitimate spyware remover created by sUBs). Combofix was designed to scan a computer for known malware and spyware (SurfSideKick, QooLogic, and Look2Me, as well as any combination of the mentioned spyware applications) and remove them. (Free)
    Cpu-Z – Provides information about some of the main devices of your system. (Free)
    Cropper – A screen capture utility written in C#. It makes it fast and easy to grab parts of your screen. Use it to easily crop out sections of vector graphic files such as Fireworks without having to flatten the files or open them in a new editor, or to capture parts of a web site, including text and images. It's also great for writing documentation that needs images of your application or web site. (Free)
    DropBox – Drag and drop files to sync between computers. (Free)
    DVD-Fab – Converts/copies DVDs/Blu-Ray to different formats (like mp4, mkv, avi). (COST/Trial available)
    FastStone Capture – A powerful, lightweight, yet full-featured screen capture tool that allows you to easily capture and annotate anything on the screen, including windows, objects, menus, full screen, rectangular/freehand regions and even scrolling windows/web pages. (Free)
    ffdshow – FFDShow is a DirectShow decoding filter for decompressing DivX, XviD, H.264, FLV1, WMV, MPEG-1, MPEG-2 and MPEG-4 movies. (Free)
    Filezilla – FileZilla Client is a fast and reliable cross-platform FTP, FTPS and SFTP client with lots of useful features and an intuitive graphical user interface. You can also download a server version. (Free)
    FireFox – Web browser; do you really need an explanation? (Free)
    FireGestures – A customizable mouse gestures extension which enables you to execute various commands and user scripts with five types of gestures. (Free)
    FoxIt Reader – Lightweight PDF viewer. You should install this with the advanced setting or it will install a toolbar and set up some shortcuts. (Free)
    gSynchIt – Sync Gmail and Outlook. Even supports Outlook 2010 32/64-bit. (COST/Trial available)
    Hulu Desktop – At home or in a hotel, this has replaced my cable/satellite subscription. (Free)
    ImgBurn – A lightweight CD / DVD / HD-DVD / Blu-ray burning application that everyone should have in their toolkit! (Free)
    Infrarecorder – InfraRecorder is a free CD/DVD burning solution for Microsoft Windows. It offers a wide range of powerful features, all through an easy-to-use application interface and Windows Explorer integration. (Free)
    KeePass – A free open source password manager, which helps you to manage your passwords in a secure way. (Free)
    LastPass – Another password manager; synchronizes between browsers, with automatic form filling and more. (Free)
    Live Essentials – One download and lots of programs, including Mail, Live Writer, Movie Maker and more! (Free)
    Monitores – MonitorES is a small Windows utility that turns off the monitor display when you lock your machine. Also, when you lock your machine, it pauses all your running media programs, sets your IM status message to "Away" or a custom message (via options), and restores everything back to normal when you return. (Free)
    mRemote – A full-featured, multi-tab remote connections manager. (Free)
    Open Office – OpenOffice.org 3 is the leading open source office software suite for word processing, spreadsheets, presentations, graphics, databases and more. It is available in many languages, works on all common computers, stores all your data in an international open standard format, and can also read and write files from other common office software packages. It can be downloaded and used completely free of charge for any purpose. (Free)
    Paint.NET – Simple, intuitive, and innovative user interface for editing photos. (Free)
    Picasa – Free photo editing software from Google that makes your pictures look great. (Free)
    Pidgin – An easy to use and free chat client used by millions. Connect to AIM, MSN, Yahoo, and more chat networks all at once. (Free)
    PING – A live Linux ISO based on the excellent Linux From Scratch (LFS) documentation. It can be burnt on a CD and booted, or integrated into a PXE / RIS environment. (Free)
    Putty – PuTTY is an SSH and telnet client, developed originally by Simon Tatham for the Windows platform. (Free)
    Revo Uninstaller – Helps you uninstall software and remove unwanted programs easily, even when you have problems uninstalling them from the "Windows Add or Remove Programs" control panel applet. A much faster and more powerful alternative to that applet, with very powerful features for uninstalling and removing programs. (Free)
    Security Essentials – Microsoft Security Essentials is a free consumer anti-malware solution for your computer. (Free)
    SetupVirtualCloneDrive – Virtual CloneDrive works and behaves just like a physical CD/DVD drive, but exists only virtually. Point it to an .ISO file and it appears in Windows Explorer as a drive. (Free)
    Shark 007 Codec Pack – Play just about any file format with this download. Also includes my W7 Media Playlist Generator. (Free)
    Snagit 9 – Screen capture on steroids. Add arrows, captions, etc. to any screenshot. (COST/Trial available)
    SysinternalsSuite – Go ahead and download the entire Sysinternals suite; I have mentioned multiple programs from this suite already. (Free)
    TeraCopy – A compact program designed to copy and move files at the maximum possible speed, providing the user with a lot of features. (Free for home)
    TrueCrypt – Free open-source disk encryption software for Windows 7/Vista/XP, Mac OS X, and Linux. (Free)
    TweetDeck – Fully featured Twitter client. (Free)
    UltraVNC – A powerful, easy to use and free program that can display the screen of another computer (via internet or network) on your own screen. It allows you to use your mouse and keyboard to control the other PC remotely, so you can work on a remote computer as if you were sitting in front of it, right from your current location. (Free)
    Unlocker – Unlocks locked files. Pretty simple, right? (Free)
    VLC Media Player – A highly portable multimedia player and multimedia framework capable of reading most audio and video formats. (Free)
    Windows 7 Media Playlist – This program is special to my heart because I wrote it. It has been mentioned on podcasts and various websites. It allows you to quickly create WVX video playlists for Windows Media Center. (Free)
    WinRAR – A powerful archive manager. It can back up your data and reduce the size of email attachments, decompress RAR, ZIP and other files downloaded from the Internet, and create new archives in RAR and ZIP file format. (COST/Trial available)

    Blogging – I use the following for my blog.

    Insert Code for Windows Live Writer – Formats a snippet of text in a number of programming languages such as C#, HTML, MSH, JavaScript, Visual Basic and TSQL. (Free)
    LiveWriter – Included in Live Essentials, but the ultimate in Windows blogging. (Free)
    PasteAsVSCode – Plug-in for Windows Live Writer that pastes clipboard content as Visual Studio code, preserving syntax highlighting, indentation and background color. Converts the RTF output by Visual Studio into HTML. (Free)

    Desktop Management – The list below represents the best in Windows desktop management.

    7 Stacks – Allows users to have "stacks" of icons in their taskbar. (Free)
    Executor – A multi-purpose launcher and a more advanced and customizable version of Windows Run. (Free)
    Fences – A program that helps you organize your desktop and can hide your icons when they are not in use. (Free)
    RocketDock – A smoothly animated, alpha-blended application launcher. It provides a nice clean interface to drop shortcuts on for easy access and organization. With each item completely customizable, there is no end to what you can add and launch from the dock. (Free)
    WindowsTab – Tabbing is an essential feature of modern web browsers. Window Tabs brings the productivity of tabbed window management to all of your desktop applications. (Free)

    Read the article
