Search Results

Search found 6328 results on 254 pages for 'anonymous person'.


  • Setup and Use SpecFlow BDD with DevExpress XAF

    - by Patrick Liekhus
    Let’s get started using the SpecFlow BDD syntax to write tests with the DevExpress XAF EasyTest scripting syntax. In order for this to work you will need to download and install the prerequisites listed below. Once they are installed, follow the steps outlined below and enjoy.

    Prerequisites

    Install the following items:
    - DevExpress eXpress Application Framework (XAF) found here
    - SpecFlow found here
    - Liekhus BDD/XAF Testing library found here

    Assumptions

    I am going to assume at this point that you have created your XAF application and have your Module, Win.Module and Win projects ready for usage. You should have also set any attributes and/or settings as you see fit.

    Setup

    So, where to start?

    1. Create a new testing project within your solution. I typically name this with a similar naming convention as used by XAF: my project name plus .FunctionalTests (i.e. AlbumManager.FunctionalTests).
    2. Add the following references to your project. It should look like the reference list below.
       - DevExpress.Data.v11.x
       - DevExpress.Persistent.Base.v11.x
       - DevExpress.Persistent.BaseImpl.v11.x
       - DevExpress.Xpo.v11.2
       - Liekhus.Testing.BDD.Core
       - Liekhus.Testing.BDD.DevExpress
       - TechTalk.SpecFlow
       - TestExecutor.v11.x (found in %Program Files%\DevExpress 2011.x\eXpressApp Framework\Tools\EasyTest)
    3. Right-click the TestExecutor reference and set the “Copy Local” setting to True. This forces the TestExecutor executable to be available in the bin directory, which is where the EasyTest script will be executed further down in the process.
    4. Add an Application Configuration File (app.config) to your test project. You will need to make a few modifications to have SpecFlow generate Microsoft-style unit tests. First add the section handler for SpecFlow and then set your choice of testing framework. I prefer MS Test for my projects.
    5. Add the EasyTest configuration file to your project: add a new XML file and call it Config.xml.
    6. Open the properties window for the Config.xml file and set “Copy to Output Directory” to “Copy Always”.
    7. Set up the Config file according to the specifications of the EasyTest library by mapping to your executable and other settings. You can find the details for the configuration of EasyTest here. My file looks like this.
    8. Create a new folder in your test project called “StepDefinitions”.
    9. Add a new SpecFlow Step Definition file item under the StepDefinitions folder. I typically call this class StepDefinition.cs.
    10. Have your step definition inherit from the Liekhus.Testing.BDD.DevExpress.StepDefinition class. This will give you the default behaviors for your test in the next section (a rough sketch of what such a step class looks like appears at the end of this post).

    OK. Now that we have done this series of steps, we will work on simplifying this. This is an early preview of this new project and is not fully ready for consumption. If you would like to experiment with it, please feel free. Our goal is to make this an installable project on its own, with its own project templates and default settings. This will be coming in later versions. Currently this project is in Alpha release.

    Let’s write our first test

    1. Remove the basic test that is created for you. We will not use the default test but rather create our own SpecFlow “Feature” files.
    2. Add a new item to your project and select the SpecFlow Feature file under C#.
    3. Name your feature file, as you do your class files, after the test it is performing.

    Writing a feature file uses the Cucumber syntax of Given… When… Then. Think of it in these terms: Givens are the pre-conditions for the test, Whens are the actual steps for the test being performed, and Thens are the verification steps that confirm your test either passed or failed. All of these steps are generated into an EasyTest format and executed against your XAF project. You can find more on the Cucumber syntax by using the Secret Ninja Cucumber Scrolls. This document has several good styles of tests, plus you can get your fill of Chuck Norris vs Ninjas. Pretty humorous document, but full of great content.

    My first test is going to test the entry of a new Album into the application and is outlined below. The Feature section at the top is more for your documentation purposes. Try to be descriptive of the test so that it makes sense to the next person behind you. The Scenario Outline is described in the Ninja Scrolls, but think of it as a test template: you can write one test outline and have multiple datasets (Scenarios) executed against that test. Here are the steps of my test and their descriptions:
    - Given I am starting a new test – tells our test to create a new EasyTest file
    - And (Given) the application is open – tells EasyTest to open our application defined in the Config.xml
    - When I am at the “Albums” screen – tells XAF to navigate to the Albums list view
    - And (When) I click the “New:Album” button – tells XAF to click the New Album button on the ribbon
    - And (When) I enter the following information – tells XAF to find the field on the screen and put the value in that field
    - And (When) I click the “Save and Close” button – tells XAF to click the “Save and Close” button on the detail window
    - Then I verify results as “user” – tells the testing framework to execute the EasyTest as your configured user

    Once you compile and prepare your tests you should see the following in your Test View: for each of the CreateNewAlbum lines in your scenarios, you will see a new test ready to execute. From here you will use your testing framework of choice to execute the test. This in turn will execute the EasyTest framework to call back into your XAF application and test your business application.

    Again, please remember that this is an early preview and we are still working out the details. Please let us know if you have any comments/questions/concerns. Thanks and happy testing.
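    For readers who want to see the shape of what sits behind those Given/When/Then lines, here is a minimal, hypothetical sketch of a plain SpecFlow binding class for the steps described above. It uses only the standard TechTalk.SpecFlow attributes; in the setup described in this post these behaviors actually come from the Liekhus.Testing.BDD.DevExpress.StepDefinition base class, so treat this purely as an illustration of a step class, not as the library's implementation. The step wordings and method names are assumptions made for the sketch.

        using TechTalk.SpecFlow;

        [Binding]
        public class AlbumStepsSketch
        {
            // Illustrative only: the real project inherits these behaviors from
            // Liekhus.Testing.BDD.DevExpress.StepDefinition instead of hand-writing them.

            [Given(@"I am starting a new test")]
            public void GivenIAmStartingANewTest()
            {
                // Would begin building a new EasyTest script file here.
            }

            [Given(@"the application is open")]
            public void GivenTheApplicationIsOpen()
            {
                // Would launch the application defined in Config.xml via EasyTest.
            }

            [When(@"I am at the ""(.*)"" screen")]
            public void WhenIAmAtTheScreen(string screenName)
            {
                // Would navigate XAF to the named list view, e.g. "Albums".
            }

            [When(@"I click the ""(.*)"" button")]
            public void WhenIClickTheButton(string buttonName)
            {
                // Would click the named ribbon or detail-view button.
            }

            [When(@"I enter the following information")]
            public void WhenIEnterTheFollowingInformation(Table fields)
            {
                // Would map each table row to a field on the current detail view.
            }

            [Then(@"I verify results as ""(.*)""")]
            public void ThenIVerifyResultsAs(string user)
            {
                // Would hand the generated EasyTest script to TestExecutor as the given user.
            }
        }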

    Read the article

  • Life Technologies: Making Life Easier to Manage

    - by Michael Snow
    When we’re thinking about customer engagement, we’re acutely aware of all the forces at play competing for our customer’s attention. Solutions that make life easier for our customers draw attention to themselves. We tend to engage more when there is a distinct benefit and we can take a deep breath and accept that there is hope in the world and everything isn’t designed to frustrate us and make our lives miserable. (sigh…) When products are designed to automate processes that were consuming hours of our time with no relief in sight, they deserve to be recognized.

    One of our recent Oracle Fusion Middleware Innovation Award Winners in the WebCenter category, Life Technologies, has recently posted a video promoting their “award winning” solution. The Oracle Innovation Awards are part of the overall Oracle Excellence awards given to customers for innovation with Oracle products. More info here. Their award nomination included this description:

    Life Technologies delivered the My Life Service Portal as part of a larger Digital Hub strategy. This Portal is the first of its kind in the biotechnology service providing industry. The Portal provides access to Life Technologies cloud based service monitoring system where all customer deployed instruments can be remotely monitored and proactively repaired. The portal provides alerts from these cloud based monitoring services directly to the customer and to Life Technologies Field Engineers. The Portal provides insight into the instruments and services customers purchased for the purpose of analyzing and anticipating future customer needs and creating targeted sales and service programs. This portal not only provides benefits for Life Technologies internal sales and service teams but provides customers a central place to track all pertinent instrument information including:
    - instrument service history
    - instrument status and previous activities
    - instrument performance analytics
    - planned service visits
    - warranty/contract information
    - discussion forums
    - social networks for lab management and collaboration
    - alerts and notifications on all of the above
    - team scheduling for instrument usage
    - promote optional reagents required to keep instruments performing

    From their website: “The Life Technologies Instruments & Services Portal Helps You Save Time and Gain Peace of Mind. Introducing the new, award-winning, free online tool that enables easier management of your instrument use and care, faster response to requests for service or service quotes, and instant sharing of key instrument and service information with your colleagues.”

    Now – this unto itself is obviously beneficial for their customers who were previously burdened with having to do all of these tasks separately, manually and inconsistently by nature. Now – all in one place and free to their customers – a portal that ties it all together.
They now have built the platform to give their customers yet another reason to do business with them – Their headline on their product page says it all: “Life is now easier to manage - All your instrument use and care in one place – the no-cost, no-hassle Instruments and Services Portal.” Of course – it’s very convenient that the company name includes “Life” and now can also promote to their clients and prospects that doing business with them is easy and their sophisticated lab equipment is easy to manage. In an industry full of PhD’s – “Easy” isn’t usually the first word that comes to mind, but Life Technologies has now tied the word to their brand in a very eloquent way. Between our work lives and family or personal lives, getting any mono-focused minutes of dedicated attention has become such a rare occurrence in our current era of multi-tasking that those moments of focus are highly prized. So – when something is done really well – so well that it becomes captivating and urges sharing impulses – I take notice and dig deeper and most of the time I discover other gems not so hidden below the surface. And then I share with those I know would enjoy and understand. In the spirit of full disclosure, I must admit here that the first person I shared the videos below with was my daughter. She’s in her senior year of high school in the midst of her college search. She’s passionate about her academics and has already decided that she wants to study Neuroscience in college and like her mother will be in for the long haul to a PhD eventually. In a summer science program at Smith College 2 summers ago – she sent the family famous text to me – “I just dissected a sheep’s brain – wicked cool!” – This was followed by an equally memorable text this past summer in a research mentorship in Neuroscience at UConn – “Just sliced up some rat brain. Reminded me of a deli slicer at the supermarket… sorry I forgot to call last night…” So… needless to say – I knew I had an audience that would enjoy and understand these videos below and are now being shared among her science classmates and faculty. And evidently - so does Life Technologies! They’ve done a great job on these making them fun and something that will easily be shared among their customers social networks. They’ve created a neuro-archetypal character, “Ph.Diddy” and know that their world of clients in academics, research, and other institutions would understand and enjoy the “edutainment” value in this series of videos on their YouTube channel that pokes fun at the stereotypes while also promoting their products at the same time. They use their Facebook page for additional engagement with their clients and as another venue to promote these videos. Enjoy this one as well! More to be found here: http://www.youtube.com/lifetechnologies Stay tuned to this Oracle WebCenter blog channel. Tomorrow we'll be taking a look at another winner of the Innovation Awards, LADWP - helping to keep the citizens of Los Angeles engaged with their Water and Power provider.

    Read the article

  • Using Find/Replace with regular expressions inside a SSIS package

    - by jamiet
    Another one of those might-be-useful-again-one-day-so-I’ll-share-it-in-a-blog-post blog posts.

    I am currently working on a SQL Server Integration Services (SSIS) 2012 implementation where each package contains a parameter called ETLIfcHist_ID. During normal execution this will get altered when the package is executed from the Execute Package Task; however, we want to make sure that at deployment-time they all have a default value of –1. Of course, they tend to get changed during development so I wanted a way of easily changing them all back to the default value. Opening up each package in turn and editing them was an option but, given that we have over 40 packages and we might want to carry out this reset fairly frequently, I needed a more automated method, so I turned to Visual Studio’s Find/Replace… feature.

    Of course, we don’t know what value will be in that parameter so I can’t simply search for a particular value; hence I opted to use a regular expression to identify the value to be changed. In the rest of this blog post I’ll explain how to do that.

    For demonstration purposes I have taken the contents of a .dtsx file and stripped out everything except the element containing the parameters (<DTS:PackageParameters>). If you want to play along at home you can copy-paste the XML document below into a new XML file and open it up in Visual Studio:

        <?xml version="1.0"?>
        <DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">
          <DTS:PackageParameters>
            <DTS:PackageParameter
              DTS:CreationName=""
              DTS:DataType="3"
              DTS:Description="InterfaceHistory_ID: used for Lineage"
              DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
              DTS:ObjectName="ETLIfcHist_ID">
              <DTS:Property
                DTS:DataType="3"
                DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
            </DTS:PackageParameter>
            <DTS:PackageParameter
              DTS:CreationName=""
              DTS:DataType="3"
              DTS:Description="Some other description"
              DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25845C7E}"
              DTS:ObjectName="SomeOtherObjectName">
              <DTS:Property
                DTS:DataType="3"
                DTS:Name="ParameterValue">SomeOtherValue</DTS:Property>
            </DTS:PackageParameter>
          </DTS:PackageParameters>
        </DTS:Executable>

    We are trying to identify the value of the parameter whose name is ETLIfcHist_ID – notice that in the XML document above that value is VALUE_TO_BE_CHANGED. The following regular expression will find the appropriate portion of the XML document:

        {\<DTS\:PackageParameter[\n ]*DTS\:CreationName="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Description="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:DTSID="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:ObjectName="ETLIfcHist_ID"\>[\n ]*\<DTS\:Property[\n ]*DTS\:DataType="[A-Za-z0-9\:_\{\}- ]*"[\n ]*DTS\:Name="ParameterValue"\>}[A-Za-z0-9\:_\{\}- ]*{\<\/DTS\:Property\>}

    I have highlighted the name of the parameter that we’re looking for.
    I have also highlighted two portions identified by pairs of curly braces “{…}”; these are important because they pick out the two portions on either side of the value I want to replace, in other words the portions highlighted here:

        <DTS:PackageParameters>
            <DTS:PackageParameter
              DTS:CreationName=""
              DTS:DataType="3"
              DTS:Description="InterfaceHistory_ID: used for Lineage"
              DTS:DTSID="{635616DB-EEEE-45C8-89AA-713E25846C7E}"
              DTS:ObjectName="ETLIfcHist_ID">
              <DTS:Property
                DTS:DataType="3"
                DTS:Name="ParameterValue">VALUE_TO_BE_CHANGED</DTS:Property>
            </DTS:PackageParameter>

    Those sections in the curly braces are termed tag expressions and can be identified in the replace expression using a backslash and a number identifying which tag expression you’re referring to according to its ordinal position. Hence, our replace expression is simply:

        \1-1\2

    We’re saying the portion of our file identified by the regular expression should be replaced by the first curly brace section, then the literal –1, then the second curly brace section. Make sense? Give it a go yourself by plugging those two expressions into Visual Studio’s Find and Replace dialog. If you set it to look in “All Open Documents” then you can open up the code-behind of all your packages and change all of them at once. The Find and Replace dialog will look like this:

    That’s it! I realise that not everyone will be looking to change the value of a parameter but hopefully I have shown you a technique that you can modify to work for your own scenario. Given that this blog post is, y’know, on the web I have no doubt that someone is going to find a fault with my find regex and if that person is you… that’s OK. Let me know about it in the comments below and perhaps we can work together to come up with something better! Note that some parameters may have a different set of properties (for example some, but not all, of my parameters have a DTS:Required attribute) so your find regular expression may have to change accordingly. When researching this I found the following article to be invaluable: Visual Studio Find/Replace Regular Expression Usage

    @Jamiet
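    As an addendum to the approach above: if you would rather script the reset than drive the Find and Replace dialog by hand, the same idea translates to the .NET regex engine, where parentheses play the role of the curly-brace tag expressions and $1/$2 play the role of \1/\2. The console sketch below is only an illustration of that translation, not part of the original approach: the folder path is made up, and the pattern is a simplified one that anchors on the ETLIfcHist_ID parameter as laid out in the sample XML, so packages with extra attributes (such as DTS:Required) may need a looser pattern.

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        class ResetParameterDefaults
        {
            static void Main()
            {
                // Hypothetical folder containing the .dtsx files; adjust to your project.
                var packageFolder = @"C:\MySsisProject\Packages";

                // Group 1 captures everything up to the parameter value, group 2 the closing tag,
                // mirroring the two tag expressions in the Visual Studio find expression.
                var pattern = new Regex(
                    "(DTS:ObjectName=\"ETLIfcHist_ID\">\\s*<DTS:Property\\s+DTS:DataType=\"3\"\\s+DTS:Name=\"ParameterValue\">)" +
                    "[^<]*" +
                    "(</DTS:Property>)",
                    RegexOptions.Singleline);

                foreach (var file in Directory.GetFiles(packageFolder, "*.dtsx"))
                {
                    var xml = File.ReadAllText(file);
                    // ${1}-1${2} = first captured section, the literal -1, then the second captured section.
                    var updated = pattern.Replace(xml, "${1}-1${2}");
                    if (updated != xml)
                    {
                        File.WriteAllText(file, updated);
                        Console.WriteLine("Reset ETLIfcHist_ID default in " + Path.GetFileName(file));
                    }
                }
            }
        }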

    Read the article

  • API Message Localization

    - by Jesse Taber
    In my post, “Keep Localizable Strings Close To Your Users” I talked about the internationalization and localization difficulties that can arise when you sprinkle static localizable strings throughout the different logical layers of an application. The main point of that post is that you should have your localizable strings reside as close to the user-facing modules of your application as possible. For example, if you’re developing an ASP.NET web forms application, all of the localizable strings should be kept in .resx files that are associated with the .aspx views of the application. In this post I want to talk about how this same concept can be applied when designing and developing APIs.

    An API Facilitates Machine-to-Machine Interaction

    You can typically think about a web, desktop, or mobile application as a collection of “views” or “screens” through which users interact with the underlying logic and data. The application can be designed based on the assumption that there will be a human being on the other end of the screen working the controls. You are designing a machine-to-person interaction and the application should be built in a way that facilitates the user’s clear understanding of what is going on. Dates should be formatted in a way that the user will be familiar with, messages should be presented in the user’s preferred language, etc.

    When building an API, however, there are no screens and you can’t make assumptions about who or what is on the other end of each call. An API is, by definition, a machine-to-machine interaction. A machine-to-machine interaction should be built in a way that facilitates a clear and unambiguous understanding of what is going on. Dates and numbers should be formatted in predictable and standard ways (e.g. ISO 8601 dates) and messages should be presented in machine-parseable formats. For example, consider an API for a time tracking system that exposes a resource for creating a new time entry. The JSON for creating a new time entry for a user might look like:

        {
          "userId": 4532,
          "startDateUtc": "2012-10-22T14:01:54.98432Z",
          "endDateUtc": "2012-10-22T11:34:45.29321Z"
        }

    Note how the parameters for start and end date are both expressed as ISO 8601 compliant dates in UTC. Using a date format like this in our API leaves little room for ambiguity. It’s also important to note that using ISO 8601 dates is a much, much saner thing than the \/Date(<milliseconds since epoch>)\/ nonsense that is sometimes used in JSON serialization. Probably the most important thing to note about the JSON snippet above is the fact that the end date comes before the start date! The API should recognize that and disallow the time entry from being created, returning an error to the caller. You might be inclined to send a response that looks something like this:

        {
          "errors": [ {"message" : "The end date must come after the start date"} ]
        }

    While this may seem like an appropriate thing to do, there are a few problems with this approach:
    - What if there is a user somewhere on the other end of the API call that doesn’t speak English?
    - What if the message provided here won’t fit properly within the UI of the application that made the API call?
    - What if the verbiage of the message isn’t consistent with the rest of the application that made the API call?
    - What if there is no user directly on the other end of the API call (e.g. this is a batch job uploading time entries once per night unattended)?

    The API knows nothing about the context from which the call was made.
    There are steps you could take to give the API some context (e.g. allow the caller to send along a language code indicating the language that the end user speaks), but that will only get you so far. As the designer of the API you could make some assumptions about how the API will be called, but if we start making assumptions we could very easily make the wrong assumptions. In this situation it’s best to make no assumptions and simply design the API in such a way that the caller has the responsibility to convey error messages in a manner that is appropriate for the context in which the error was raised. You could work around some of these problems by allowing callers to add metadata to each request describing the context from which the call is being made (e.g. accepting a ‘locale’ parameter denoting the desired language), but that would add needless clutter and complexity. It’s better to keep the API simple and push those context-specific concerns down to the caller whenever possible. For our very simple time entry example, this can be done by simply changing our error message response to look like this:

        {
          "errors": [ {"code": 100} ]
        }

    By changing our error from exposing a string to a numeric code that is easily parseable by another application, we’ve placed all of the responsibility for conveying the actual meaning of the error message on the caller. It’s best to have the caller be responsible for conveying this meaning because the caller understands the context much better than the API does. Now the caller can see error code 100, know that it means that the end date submitted falls before the start date, and take appropriate action. All of the problems listed out above are now non-issues because the caller can simply translate the error code of ‘100’ into the proper action and message for the current context. The numeric code representation of the error is a much better way to facilitate the machine-to-machine interaction that the API is meant to facilitate.

    An API Does Have Human Users

    While APIs should be built for machine-to-machine interaction, people still need to wire these interactions together. As a programmer building a client application that will consume the time entry API, I would find it frustrating to have to go dig through the API documentation every time I encounter a new error code (assuming the documentation exists and is accurate). The numeric error code approach hurts the discoverability of the API and makes it painful to integrate with. We can help ease this pain by merging our two approaches:

        {
          "errors": [ {"code": 100, "message" : "The end date must come after the start date"} ]
        }

    Now we have an easily parseable numeric error code for the machine-to-machine interaction that the API is meant to facilitate and a human-readable message for programmers working with the API. The human-readable message here is not intended to be viewed by end-users of the API and as such is not really a “localizable string” in my opinion. We could opt to expose a locale parameter for all API methods and store translations for all error messages, but that’s a lot of extra effort and overhead that doesn’t add a lot of real value to the API. I might be a bit of an “ugly American”, but I think it’s probably fine to have the API return English messages when the target for those messages is a programmer.
When resources are limited (which they always are), I’d argue that you’re better off hard-coding these messages in English and putting more effort into building more useful features, improving security, tweaking performance, etc.
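    Putting the pieces above together, here is a minimal sketch of what the merged “code plus message” contract could look like on the server side. It is written against ASP.NET Web API purely for illustration; the TimeEntry model, the controller name, and the specific error code value are assumptions for the sketch, not something prescribed by the post.

        using System;
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;

        // Machine-readable code for callers, English message for the programmers wiring them up.
        public class ApiError
        {
            public int Code { get; set; }
            public string Message { get; set; }
        }

        public class TimeEntry
        {
            public int UserId { get; set; }
            public DateTime StartDateUtc { get; set; }
            public DateTime EndDateUtc { get; set; }
        }

        public class TimeEntriesController : ApiController
        {
            public HttpResponseMessage Post(TimeEntry entry)
            {
                if (entry.EndDateUtc <= entry.StartDateUtc)
                {
                    var error = new ApiError
                    {
                        Code = 100, // documented as "end date precedes start date"
                        Message = "The end date must come after the start date"
                    };
                    return Request.CreateResponse(
                        HttpStatusCode.BadRequest,
                        new { errors = new[] { error } });
                }

                // ... persist the time entry ...
                return Request.CreateResponse(HttpStatusCode.OK);
            }
        }

    The caller can branch on errors[0].code and decide for itself how, or whether, to surface the English text.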

    Read the article

  • SQL SERVER – 3 Online SQL Courses at Pluralsight and Free Learning Resources

    - by pinaldave
    Usain Bolt is an inspiration for all. He broke his own record multiple times because he wanted to do better! Read more about him on Wikipedia. He is great and indeed the fastest man on the planet.

    Usain Bolt – World’s Fastest Man

    “Can you teach me SQL Server Performance Tuning?” This is one of the most popular questions which I receive all the time. The answer is YES. I would love to do performance tuning training for anyone, anywhere. It is my favorite thing to do, and it is my favorite thing to train others in. If possible, I would love to do training 24 hours a day, 7 days a week, 365 days a year. To me, it doesn’t feel like a job.

    Of course, as much as I would love to do performance tuning 24/7/365, obviously I am just one human being and can only be in one place at one time. It is also very difficult to train more than one person at a time, especially when the students are at different levels. I am also limited by geography. I live in India, and adjust to my own time zone. Trying to teach a live course from India to someone whose time zone is 12 or more hours off of mine is very difficult. If I am trying to teach at 2 am, I am sure I am not at my best!

    There was only one solution to scale – online training. I have built 3 different courses on SQL Server Performance Tuning with Pluralsight. Now I have no problem – I am 100% scalable and available 24/7 and 365. You can make me say the same things again and again until you get it right. I am on your mobile and your PC, as well as on Xbox. This is why I am such a big fan of online courses. I have recorded many performance tuning classes and you can easily access them online, at your own time. And don’t think that just because these aren’t live classes you won’t be able to get any feedback from me. I encourage all my viewers to go ahead and ask me questions by e-mail, Twitter, Facebook, or whatever way you can get a hold of me.

    Here are details of three of my courses with Pluralsight. I suggest you go over the description of each course. As an author of the courses, I have a few FREE codes for watching them. Please leave a comment with your valid email address and I will send a few of them to random winners.

    SQL Server Performance: Introduction to Query Tuning

    SQL Server performance tuning is an art to master – for developers and DBAs alike. This course takes a systematic approach to planning, analyzing, debugging and troubleshooting common query-related performance problems. This includes an introduction to understanding execution plans inside SQL Server. In this almost four-hour course we cover the following important concepts:
    - Introduction 10:22
    - Execution Plan Basics 45:59
    - Essential Indexing Techniques 20:19
    - Query Design for Performance 50:16
    - Performance Tuning Tools 01:15:14
    - Tips and Tricks 25:53
    - Checklist: Performance Tuning 07:13
    The duration of each module is mentioned beside the name of the module.

    SQL Server Performance: Indexing Basics

    This course teaches you how to master the art of performance tuning SQL Server by better understanding indexes. In this almost two-hour course we cover the following important concepts:
    - Introduction 02:03
    - Fundamentals of Indexing 22:21
    - Practical Indexing Implementation Techniques 37:25
    - Index Maintenance 16:33
    - Introduction to Columnstore Index 08:06
    - Indexing Practical Performance Tips and Tricks 24:56
    - Checklist: Index and Performance 07:29
    The duration of each module is mentioned beside the name of the module.
    SQL Server Questions and Answers

    This course is designed to help you better understand how to use SQL Server effectively. The course presents many of the common misconceptions about SQL Server, and then carefully debunks those misconceptions with clear explanations and short but compelling demos, showing you how SQL Server really works. In this almost 2-hour-and-15-minute course we cover the following important concepts:
    - Introduction 00:54
    - Retrieving IDENTITY value using @@IDENTITY 08:38
    - Concepts Related to Identity Values 04:15
    - Difference between WHERE and HAVING 05:52
    - Order in WHERE clause 07:29
    - Concepts Around Temporary Tables and Table Variables 09:03
    - Are stored procedures pre-compiled? 05:09
    - UNIQUE INDEX and NULLs problem 06:40
    - DELETE VS TRUNCATE 06:07
    - Locks and Duration of Transactions 15:11
    - Nested Transaction and Rollback 09:16
    - Understanding Date/Time Datatypes 07:40
    - Differences between VARCHAR and NVARCHAR datatypes 06:38
    - Precedence of DENY and GRANT security permissions 05:29
    - Identify Blocking Process 06:37
    - NULLS usage with Dynamic SQL 08:03
    - Appendix Tips and Tricks with Tools 20:44
    The duration of each module is mentioned beside the name of the module.

    SQL in Sixty Seconds

    You will have to log in and subscribe to the courses to view them. Here are my free video learning resources, SQL in Sixty Seconds. These are 60-second videos which I have built on various subjects related to SQL Server. Do let me know what you think about them. Here are three of my latest videos:
    - Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028
    - Copy Column Headers from Resultset – SQL in Sixty Seconds #027
    - Effect of Collation on Resultset – SQL in Sixty Seconds #026

    You can watch and learn at your own pace. Then you can easily ask me any questions you have. E-mail is easiest, but for really tough questions I’m willing to talk on Skype, Gtalk, or even Facebook chat. Please do watch and then talk with me, I am always available on the internet!

    Here is the video of the world’s fastest man. Usain St. Leo Bolt inspires us that we can all do better than our best. We can take our own record to the next level. We all can improve if we have the will and dedication. Watch the video from the 5:00 mark.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology, Video

    Read the article

  • CodePlex Daily Summary for Sunday, May 16, 2010

    CodePlex Daily Summary for Sunday, May 16, 2010

    New Projects
    - 3D Calculator: 3D Calc is a simple calculator application for Windows Phone 7, the purpose of this project is to demo the 3D animations capabilities of WP7 and sh...
    - azaleas: Azaleas
    - Blueset Studio Opensource Projects: Only for Opensource projects form Blueset Studio.
    - Breck: A Phoenix and Jumper Moneky Production: Breck is a first person non-violent shooter developed in C++ and Dark GDK. After the main game is developed we are looking into making a sequel or...
    - Discuz! Forum SDK: This project is use to login in and post or reply topic on discuz forum.
    - Dominion.NET: Evolving Dominion source code originally written in VB6 and posted by "jatill" on Collectible Card Game Headquarters. Migration of the design and s...
    - EkspSys2010-ITR: A mini project for the course Experimental System devolopment in spring 2010
    - Facebook Graph Toolkit: This project is a .Net implementation of the Facebook Graph API. The aim of this project is to be a replacement to the existing Facebook Toolkit (h...
    - iFree: This is a solution for Vietnamese network social
    - InfoPath Editor for Developer: InfoPath Editor for developer allows user to modify the html text directly inside InfoPath designer or filler and push the change back to InfoPath ...
    - iZeit: Run your own online calendar, with blog integration, recurrence, todo list and categories.
    - machgos dotNet Tests: Just some little test-projects for learning
    - mim: TBA
    - MinePost: MinePost is a game made for the first 48 hour Reddit Game Jam.
    - Mockina: Mockina is a mock framework. Expression tree syntax is used to specify which members to mock, both public and non-public. The code is easy to under...
    - MSBuild Launch Pad (mPad): This is just another shell extension for MSBuild to enable quick execution of MSBuild scripts via Windows Explorer context menu. (C) 2010 Lex Li
    - Peacock: A browser like tabbed application
    - PrimeCalculation: PrimeCalculation is a .NET app to calculate primes in a given range. Speed on Core2Duo 2,4GHZ: Found all primes from 0 to 1 billion in 35 seconds (...
    - Slightly Silverlight: A Framework that leverages Silverlight for processing, business logic but standard HTML for the presentation layer.
    - Stopwatch: Stopwatch is a tool for measuring the time. To start and pause stopwatch you only need to press a key on the keyboard. An additional context menu a...
    - YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib is an XML Serialization library which helps you structure freely the XML result, choose among private and public fields to be serialized, an...

    New Releases
    - Activate Your Glutes: v1.0.3.0: This release is a migration to VS2010, .Net 4, MVC2 and Entity Framework 4. The code has also been considerably cleaned up - taking advantage of E...
    - AnyCAD: AnyCAD.Free.ENU.v1.1: http://www.anycad.net Modeling •2D: Line, Rectangle, Arc, Arch, Circle, Spline, Polygon •Feature: Extrude, Loft, Chamfer, Sweep, Revol •Boolean: ...
    - Blueset Studio Opensource Projects: 多功能计算器 3.5: Stable version.
    - Code for Rapid C# Windows Development eBook: LLBLGen LINQPad Data Context Driver Ver 1.0.0.0: First release of a Static LLBLGen Pro Data Context Driver for LINQPad I recommend LINQPad 4 as it seems more stable with this driver than LINQPad 2.
    - DSQLT - Dynamic SQL Templates: Release 1.2. Some behaviour has changed!!: Attention. Some behaviour has changed! Now its necessary to use WildCards in the pattern-parameter for DSQLT.AllSourceContains DSQLT.Databases DSQ...
    - FDS AutoCAD plug-in: FDS to AutoCAD plug-in: Basic functionality was implemented.
      Some routines like setting fds executable location are still not automated.
    - Feature Builder Guidance Extensions: FBGX 2 - Standalone FX: Background: The Feature Builder Guidance is extensible and displays guidance content supplied by all the Feature Builder Guidance Extensions (FBGX...
    - Floe IRC Client: Floe IRC Client 2010-05 R3: - You can now right click on the input box to get options for toggling bold, underline, colors, etc. - The size of the nickname column is now saved...
    - Floe IRC Client: Floe IRC Client 2010-05 R4: - A user's channel status now appears next to their nick when they talk (e.g. @Nick or +Nick) - Fixed an error where certain kinds of network probl...
    - HD-Trailers.NET Downloader: HD-Trailers.NET Downloader v1.0: Version 1.0 Thanks to Wolfgang for all his help. I let this project languish for too long while focusing on other things, but his involvement has ...
    - InfoPath Editor for Developer: InfoPath Editor Beta 1: Intial Release: Can load InfoPath inner html. Can edit InfoPath inner html. InfoPath 2007 only.
    - LinkSharp: LinkSharp 0.1.0: First release of LinkSharp. Set up iis, and use the sql script to create a new database.
    - PowerAuras: PowerAuras V3.0.0F: This version adds better integration with GTFO New Flags Added PvP flag In 5-Man Instance In Raid Instance In Battleground In Arena
    - Rx Contrib: V1.4: Add the ability to catch internal exception and the ability to publish error by queue adapters
    - SEO SiteMap: SEO SiteMap RC1: -
    - SevenZipLib Library: v9.13.2: Stable release associated with 7z.dll 9.13 beta. Ability to create and update archives not implemente yet.
    - Silverlight / WPF Controls: Upload, FlipPanel, DeepZoom, Animation, Encryption: Code Camp Demonstration: This code example demonstrates MVVM/MEF with WPF with attached properties, security and custom ICommand class.
    - SQL Data Capture - Black Box Application Testing: SQLDataCapture V1.2: Added Entity Framework Support to CRUD generator (Insert Stored Procedure) and switched to VS 2010 for development.
    - Stopwatch: Stopwatch 0.1: Stopwatch Release 0.1
    - VCC: Latest build, v2.1.30515.0: Automatic drop of latest build
    - Yet another developer blog - Examples: Asynchronous TreeView in ASP.NET MVC: This sample application shows how to use jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET MVC. This application is accompani...

    Most Popular Projects
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - Microsoft SQL Server Community & Samples
    - PHPExcel
    - ASP.NET

    Most Active Projects
    - patterns & practices – Enterprise Library
    - Rawr
    - PHPExcel
    - BlogEngine.NET
    - Microsoft Biology Foundation
    - Customer Portal Accelerator for Microsoft Dynamics CRM
    - Windows Azure Command-line Tools for PHP Developers
    - Mirror Testing System
    - N2 CMS
    - StyleCop

    Read the article

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts. To start this out, I want to talk about how I use VCS in a team environment. These come in a series of tips or best practices that I try to follow. Note: this list is subject to change in the future.

    Always use some form of version control for all aspects of software development.
    - Development is an evolution. Looking back at where we were is an invaluable asset in that process. This includes data schemas and documentation.
    - Reverting / reapplying changes is absolutely critical for efficient development.
    - The tools I use: Code: Hg (preferred), SVN. Database: TSqlMigrations. Documents: sometimes in the code repository, also SharePoint with versioning.

    Always tag a commit (changeset) with comments.
    - This is a quick way to describe to someone else (or your future self) what the changeset entails.
    - Be brief but courteous. One or two sentences about the task, not the actual changes.
    - Use precommit hooks or set up the central repository to reject changes without comments.

    Link changesets to documentation.
    - If your project management system integrates with version control, or has a way to externally reference stories, tasks, etc., then leave a reference in the commit. This helps locate more information about the commit and/or related changesets.
    - It’s best to have a precommit hook or system that requires this information, otherwise it’s easy to forget.

    Ability to work offline is required, including commits and history.
    - Yes, this requires a DVCS locally but doesn’t require the central repository to be a DVCS. I prefer to use either Git or Hg, but if it isn’t possible to migrate the central repository, it’s still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.

    Never lock resources (files) in a central repository… Rude!
    - We have merge tools for a reason; merging sucked a long time ago, it doesn’t anymore… stop locking files! This is unproductive, rude and annoying to other team members.

    Always review everything in your commit.
    - Never ever commit a set of files without reviewing the changes in each.
    - Never add a file without asking yourself, deep down inside, does this belong?
    - If you leave to make changes during a review, start the review over when you come back. Never assume you didn’t touch a file, double check.
    - This is another reason why you want to avoid large, infrequent commits.

    Requirements for tools
    - Quickly show pending changes for the entire repository.
    - Default action for a resource with pending changes is a diff.
    - Pluggable diff & merge tool.
    - Produce a unified diff or a diff of all changes. This is helpful to bulk review changes instead of opening each file.

    The central repository is not your own personal dump yard. Breaking this rule is a sure fire way to get the F bomb dropped in front of your name, multiple times.
    - If you turn on Visual Studio’s commit on closing studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.

    Commit (integrate) to the central repository / branch frequently.
    - I try to do this before leaving each day, especially without a DVCS. One never knows when they might need to work from remote the following day.

    Never commit commented out code.
    - If it isn’t needed anymore, delete it!
    - If you aren’t sure if it might be useful in the future, delete it! This is why we have history.
    - If you don’t know why it’s commented out, figure it out and then either uncomment it or delete it.

    Don’t commit build artifacts, user preferences and temporary files.
    - Build artifacts do not belong in VCS; everything in them is present in the code. (i.e. bin\*, obj\*, *.dll, *.exe)
    - User preferences are your settings, stop overriding my preferences files! (i.e. *.suo and *.user files)
    - Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file. Set this up as a first step when creating a new repository!

    Be polite when merging unresolved conflicts.
    - Count to 10, cuss, grab a stress ball and realize it’s not a big deal. Actually, it’s an opportunity to let you know that someone else is working in the same area and you might want to communicate with them.
    - Following the other rules, especially committing frequently, will reduce the likelihood of this.
    - Suck it up, we all have to deal with this unintended consequence at times. Just be careful and GET FAMILIAR with your merge tool. It’s really not as scary as you think. I personally prefer KDiff3 as its merging capabilities rock.
    - Don’t blindly merge and then blindly commit your changes; this is rude and unprofessional. Make sure you understand why the conflict occurred and which parts of the code you want to keep. Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).

    Become intimate with your version control system and the tools you use with it.
    - Avoid trial and error as much as possible; sit down and test the tool out, read some tutorials, etc. Create test repositories and walk through common scenarios.
    - Find the most efficient way to do your work. These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both TortoiseHg and the hg CLI to get the job done efficiently.

    Always tag releases.
    - Create a way to find a given release, whether this be in comments or an explicit tag / branch. This should be readily discoverable.
    - Create release branches to patch bugs and then merge the changes back to other development branch(es).

    If using feature branches, strive for periodic integrations.
    - Feature branches often cause forked code that becomes irreconcilable. Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into. This will avoid merge conflicts in the future.
    - Feature branches are best when they are mutually exclusive of active development in other branches.

    Use and abuse local commits, at least one per task in a story.
    - This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.

    Never commit a broken build or failing tests to the central repository.
    - It’s ok for a local commit to break the build and/or tests. In fact, I encourage this if it helps group the changes more logically. This is one of the main reasons I got excited about DVCS, when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes).

    If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved.
    - There are exceptions when maintaining code bases that require shotgun surgery; in that case, it’s a design smell :)

    Don’t version sensitive information.
    - Especially usernames / passwords.

    There is one area where I haven’t found a solution I like yet: versioning 3rd party libraries and/or code. I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries. Please feel free to share your ideas about this below.

    -Wes

    Read the article

  • Security Issues with Single Page Apps

    - by Stephen.Walther
    Last week, I was asked to do a code review of a Single Page App built using the ASP.NET Web API, Durandal, and Knockout (good stuff!). In particular, I was asked to investigate whether there are any special security issues associated with building a Single Page App which are not present in the case of a traditional server-side ASP.NET application. In this blog entry, I discuss two areas in which you need to exercise extra caution when building a Single Page App. I discuss how Single Page Apps are extra vulnerable to both Cross-Site Scripting (XSS) attacks and Cross-Site Request Forgery (CSRF) attacks. The goal of this blog post is NOT to persuade you to avoid writing Single Page Apps. I’m a big fan of Single Page Apps. Instead, the goal is to ensure that you are fully aware of some of the security issues related to Single Page Apps and ensure that you know how to guard against them.

    Cross-Site Scripting (XSS) Attacks

    According to WhiteHat Security, over 65% of public websites are open to XSS attacks. That’s bad. By taking advantage of XSS holes in a website, a hacker can steal your credit cards, passwords, or bank account information. Any website that redisplays untrusted information is open to XSS attacks. Let me give you a simple example. Imagine that you want to display the name of the current user on a page. To do this, you create the following server-side ASP.NET page located at http://MajorBank.com/SomePage.aspx: <%@Page Language="C#" %> <html> <head> <title>Some Page</title> </head> <body> Welcome <%= Request["username"] %> </body> </html> Nothing fancy here. Notice that the page displays the current username by using Request[“username”]. Using Request[“username”] displays the username regardless of whether the username is present in a cookie, a form field, or a query string variable. Unfortunately, by using Request[“username”] to redisplay untrusted information, you have now opened your website to XSS attacks. Here’s how. Imagine that an evil hacker creates the following link on another website (hackers.com): <a href="/SomePage.aspx?username=<script src=Evil.js></script>">Visit MajorBank</a> Notice that the link includes a query string variable named username and the value of the username variable is an HTML <SCRIPT> tag which points to a JavaScript file named Evil.js. When anyone clicks on the link, the <SCRIPT> tag will be injected into SomePage.aspx and the Evil.js script will be loaded and executed. What can a hacker do in the Evil.js script? Anything the hacker wants. For example, the hacker could display a popup dialog on the MajorBank.com site which asks the user to enter their password. The script could then post the password back to hackers.com and now the evil hacker has your secret password. ASP.NET Web Forms and ASP.NET MVC have two automatic safeguards against this type of attack: Request Validation and Automatic HTML Encoding.

    Protecting Coming In (Request Validation)

    In a server-side ASP.NET app, you are protected against the XSS attack described above by a feature named Request Validation. If you attempt to submit “potentially dangerous” content — such as a JavaScript <SCRIPT> tag — in a form field or query string variable then you get an exception. Unfortunately, Request Validation only applies to server-side apps. Request Validation does not help in the case of a Single Page App. In particular, the ASP.NET Web API does not pay attention to Request Validation. You can post any content you want – including <SCRIPT> tags – to an ASP.NET Web API action.
For example, the following HTML page contains a form. When you submit the form, the form data is submitted to an ASP.NET Web API controller on the server using an Ajax request: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title></title> </head> <body> <form data-bind="submit:submit"> <div> <label> User Name: <input data-bind="value:user.userName" /> </label> </div> <div> <label> Email: <input data-bind="value:user.email" /> </label> </div> <div> <input type="submit" value="Submit" /> </div> </form> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { user: { userName: ko.observable(), email: ko.observable() }, submit: function () { $.post("/api/users", ko.toJS(this.user)); } }; ko.applyBindings(viewModel); </script> </body> </html> The form above is using Knockout to bind the form fields to a view model. When you submit the form, the view model is submitted to an ASP.NET Web API action on the server. Here’s the server-side ASP.NET Web API controller and model class: public class UsersController : ApiController { public HttpResponseMessage Post(UserViewModel user) { var userName = user.UserName; return Request.CreateResponse(HttpStatusCode.OK); } } public class UserViewModel { public string UserName { get; set; } public string Email { get; set; } } If you submit the HTML form, you don’t get an error. The “potentially dangerous” content is passed to the server without any exception being thrown. In the screenshot below, you can see that I was able to post a username form field with the value “<script>alert(‘boo’)</script”. So what this means is that you do not get automatic Request Validation in the case of a Single Page App. You need to be extra careful in a Single Page App about ensuring that you do not display untrusted content because you don’t have the Request Validation safety net which you have in a traditional server-side ASP.NET app. Protecting Going Out (Automatic HTML Encoding) Server-side ASP.NET also protects you from XSS attacks when you render content. By default, all content rendered by the razor view engine is HTML encoded. For example, the following razor view displays the text “<b>Hello!</b>” instead of the text “Hello!” in bold: @{ var message = "<b>Hello!</b>"; } @message   If you don’t want to render content as HTML encoded in razor then you need to take the extra step of using the @Html.Raw() helper. In a Web Form page, if you use <%: %> instead of <%= %> then you get automatic HTML Encoding: <%@ Page Language="C#" %> <% var message = "<b>Hello!</b>"; %> <%: message %> This automatic HTML Encoding will prevent many types of XSS attacks. It prevents <script> tags from being rendered and only allows &lt;script&gt; tags to be rendered which are useless for executing JavaScript. (This automatic HTML encoding does not protect you from all forms of XSS attacks. For example, you can assign the value “javascript:alert(‘evil’)” to the Hyperlink control’s NavigateUrl property and execute the JavaScript). The situation with Knockout is more complicated. If you use the Knockout TEXT binding then you get HTML encoded content. 
    On the other hand, if you use the HTML binding then you do not: <!-- This JavaScript DOES NOT execute --> <div data-bind="text:someProp"></div> <!-- This JavaScript DOES execute --> <div data-bind="html:someProp"></div> <script src="Scripts/jquery-1.7.1.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { someProp : "<script>alert('Evil!')<" + "/script>" }; ko.applyBindings(viewModel); </script> So, in the page above, the DIV element which uses the TEXT binding is safe from XSS attacks. According to the Knockout documentation: “Since this binding sets your text value using a text node, it’s safe to set any string value without risking HTML or script injection.” Just like server-side HTML encoding, Knockout does not protect you from all types of XSS attacks. For example, there is nothing in Knockout which prevents you from binding JavaScript to a hyperlink like this: <a data-bind="attr:{href:homePageUrl}">Go</a> <script src="Scripts/jquery-1.7.1.min.js"></script> <script src="Scripts/knockout-2.1.0.js"></script> <script> var viewModel = { homePageUrl: "javascript:alert('evil!')" }; ko.applyBindings(viewModel); </script> In the page above, the value “javascript:alert(‘evil’)” is bound to the HREF attribute using Knockout. When you click the link, the JavaScript executes.

    Cross-Site Request Forgery (CSRF) Attacks

    Cross-Site Request Forgery (CSRF) attacks rely on the fact that a session cookie does not expire until you close your browser. In particular, if you visit and log in to MajorBank.com and then you navigate to Hackers.com then you will still be authenticated against MajorBank.com even after you navigate to Hackers.com. Because MajorBank.com cannot tell whether a request is coming from MajorBank.com or Hackers.com, Hackers.com can submit requests to MajorBank.com pretending to be you. For example, Hackers.com can post an HTML form from Hackers.com to MajorBank.com and change your email address at MajorBank.com. Hackers.com can post a form to MajorBank.com using your authentication cookie. After your email address has been changed, by using a password reset page at MajorBank.com, a hacker can access your bank account. To prevent CSRF attacks, you need some mechanism for detecting whether a request is coming from a page loaded from your website or whether the request is coming from some other website. The recommended way of preventing Cross-Site Request Forgery attacks is to use the “Synchronizer Token Pattern” as described here: https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29_Prevention_Cheat_Sheet When using the Synchronizer Token Pattern, you include a hidden input field which contains a random token whenever you display an HTML form. When the user opens the form, you add a cookie to the user’s browser with the same random token. When the user posts the form, you verify that the hidden form token and the cookie token match.

    Preventing Cross-Site Request Forgery Attacks with ASP.NET MVC

    ASP.NET gives you a helper and an action filter which you can use to thwart Cross-Site Request Forgery attacks.
For example, the following razor form for creating a product shows how you use the @Html.AntiForgeryToken() helper: @model MvcApplication2.Models.Product <h2>Create Product</h2> @using (Html.BeginForm()) { @Html.AntiForgeryToken(); <div> @Html.LabelFor( p => p.Name, "Product Name:") @Html.TextBoxFor( p => p.Name) </div> <div> @Html.LabelFor( p => p.Price, "Product Price:") @Html.TextBoxFor( p => p.Price) </div> <input type="submit" /> } The @Html.AntiForgeryToken() helper generates a random token and assigns a serialized version of the same random token to both a cookie and a hidden form field. (Actually, if you dive into the source code, the AntiForgeryToken() does something a little more complex because it takes advantage of a user’s identity when generating the token). Here’s what the hidden form field looks like: <input name=”__RequestVerificationToken” type=”hidden” value=”NqqZGAmlDHh6fPTNR_mti3nYGUDgpIkCiJHnEEL59S7FNToyyeSo7v4AfzF2i67Cv0qTB1TgmZcqiVtgdkW2NnXgEcBc-iBts0x6WAIShtM1″ /> And here’s what the cookie looks like using the Google Chrome developer toolbar: You use the [ValidateAntiForgeryToken] action filter on the controller action which is the recipient of the form post to validate that the token in the hidden form field matches the token in the cookie. If the tokens don’t match then validation fails and you can’t post the form: public ActionResult Create() { return View(); } [ValidateAntiForgeryToken] [HttpPost] public ActionResult Create(Product productToCreate) { if (ModelState.IsValid) { // save product to db return RedirectToAction("Index"); } return View(); } How does this all work? Let’s imagine that a hacker has copied the Create Product page from MajorBank.com to Hackers.com – the hacker grabs the HTML source and places it at Hackers.com. Now, imagine that the hacker trick you into submitting the Create Product form from Hackers.com to MajorBank.com. You’ll get the following exception: The Cross-Site Request Forgery attack is blocked because the anti-forgery token included in the Create Product form at Hackers.com won’t match the anti-forgery token stored in the cookie in your browser. The tokens were generated at different times for different users so the attack fails. Preventing Cross-Site Request Forgery Attacks with a Single Page App In a Single Page App, you can’t prevent Cross-Site Request Forgery attacks using the same method as a server-side ASP.NET MVC app. In a Single Page App, HTML forms are not generated on the server. Instead, in a Single Page App, forms are loaded dynamically in the browser. Phil Haack has a blog post on this topic where he discusses passing the anti-forgery token in an Ajax header instead of a hidden form field. He also describes how you can create a custom anti-forgery token attribute to compare the token in the Ajax header and the token in the cookie. See: http://haacked.com/archive/2011/10/10/preventing-csrf-with-ajax.aspx Also, take a look at Johan’s update to Phil Haack’s original post: http://johan.driessen.se/posts/Updated-Anti-XSRF-Validation-for-ASP.NET-MVC-4-RC (Other server frameworks such as Rails and Django do something similar. For example, Rails uses an X-CSRF-Token to prevent CSRF attacks which you generate on the server – see http://excid3.com/blog/rails-tip-2-include-csrf-token-with-every-ajax-request/#.UTFtgDDkvL8 ). 
For example, if you are creating a Durandal app, then you can use the following razor view for your one and only server-side page: @{ Layout = null; } <!DOCTYPE html> <html> <head> <title>Index</title> </head> <body> @Html.AntiForgeryToken() <div id="applicationHost"> Loading app.... </div> @Scripts.Render("~/scripts/vendor") <script type="text/javascript" src="~/App/durandal/amd/require.js" data-main="/App/main"></script> </body> </html> Notice that this page includes a call to @Html.AntiForgeryToken() to generate the anti-forgery token. Then, whenever you make an Ajax request in the Durandal app, you can retrieve the anti-forgery token from the razor view and pass the token as a header: var csrfToken = $("input[name='__RequestVerificationToken']").val(); $.ajax({ headers: { __RequestVerificationToken: csrfToken }, type: "POST", dataType: "json", contentType: 'application/json; charset=utf-8', url: "/api/products", data: JSON.stringify({ name: "Milk", price: 2.33 }), statusCode: { 200: function () { alert("Success!"); } } }); Use the following code to create an action filter which you can use to match the header and cookie tokens: using System.Linq; using System.Net.Http; using System.Web.Helpers; using System.Web.Http.Controllers; namespace MvcApplication2.Infrastructure { public class ValidateAjaxAntiForgeryToken : System.Web.Http.AuthorizeAttribute { protected override bool IsAuthorized(HttpActionContext actionContext) { var headerToken = actionContext .Request .Headers .GetValues("__RequestVerificationToken") .FirstOrDefault(); ; var cookieToken = actionContext .Request .Headers .GetCookies() .Select(c => c[AntiForgeryConfig.CookieName]) .FirstOrDefault(); // check for missing cookie or header if (cookieToken == null || headerToken == null) { return false; } // ensure that the cookie matches the header try { AntiForgery.Validate(cookieToken.Value, headerToken); } catch { return false; } return base.IsAuthorized(actionContext); } } } Notice that the action filter derives from the base AuthorizeAttribute. The ValidateAjaxAntiForgeryToken only works when the user is authenticated and it will not work for anonymous requests. Add the action filter to your ASP.NET Web API controller actions like this: [ValidateAjaxAntiForgeryToken] public HttpResponseMessage PostProduct(Product productToCreate) { // add product to db return Request.CreateResponse(HttpStatusCode.OK); } After you complete these steps, it won’t be possible for a hacker to pretend to be you at Hackers.com and submit a form to MajorBank.com. The header token used in the Ajax request won’t travel to Hackers.com. This approach works, but I am not entirely happy with it. The one thing that I don’t like about this approach is that it creates a hard dependency on using razor. Your single page in your Single Page App must be generated from a server-side razor view. A better solution would be to generate the anti-forgery token in JavaScript. Unfortunately, until all browsers support a way to generate cryptographically strong random numbers – for example, by supporting the window.crypto.getRandomValues() method — there is no good way to generate anti-forgery tokens in JavaScript. So, at least right now, the best solution for generating the tokens is the server-side solution with the (regrettable) dependency on razor. Conclusion The goal of this blog entry was to explore some ways in which you need to handle security differently in the case of a Single Page App than in the case of a traditional server app. 
In particular, I focused on how to prevent Cross-Site Scripting and Cross-Site Request Forgery attacks in the case of a Single Page App. I want to emphasize that I am not suggesting that Single Page Apps are inherently less secure than server-side apps. Whatever type of web application you build – regardless of whether it is a Single Page App, an ASP.NET MVC app, an ASP.NET Web Forms app, or a Rails app – you must constantly guard against security vulnerabilities.
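As a side note on the hyperlink example earlier in this article: one simple server-side mitigation is to validate any user-supplied URL before it is ever handed to the client-side view model. The following C# sketch is only an illustration of that idea (the UrlValidator class and IsSafeHttpUrl method are hypothetical names, not part of ASP.NET or Knockout); it accepts only absolute http/https URLs, so a value such as "javascript:alert('evil!')" would be rejected before it could be bound to an href attribute.

using System;

public static class UrlValidator
{
    // Accept only absolute http:// or https:// URLs.
    // Anything else (javascript:, data:, relative paths, malformed input) is rejected.
    public static bool IsSafeHttpUrl(string candidate)
    {
        Uri uri;
        if (!Uri.TryCreate(candidate, UriKind.Absolute, out uri))
        {
            return false;
        }

        return uri.Scheme == Uri.UriSchemeHttp || uri.Scheme == Uri.UriSchemeHttps;
    }
}

// Example usage when building the data sent to the browser (hypothetical property names):
// viewModelData.HomePageUrl = UrlValidator.IsSafeHttpUrl(submittedUrl) ? submittedUrl : "/";

A check like this complements, rather than replaces, the output-encoding behavior of the text binding discussed above.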

    Read the article

  • Countdown to Oracle Open World 2012: Mark Your Calendars

    - by Eric Bezille
    Now less than a month away from Oracle's major event, held as it is every year in San Francisco at the end of September and beginning of October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development), to give you a taste of what is coming. Oracle Strategy and Roadmaps Of course, beyond the plenary sessions, which will give you a precise view of the strategy, and for those of you who will be on site, I encourage you not to miss the deep-dive sessions taking place during the week. Here is a selection: "Accelerate your Business with the Oracle Hardware Advantage" with John Fowler, Monday, October 1, 3:15pm-4:15pm "Why Oracle Softwares Runs Best on Oracle Hardware", with Bradley Carlile, head of Benchmarks, Monday, October 1, 12:15pm-1:15pm "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday, October 1, 1:45pm-2:45pm "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, a manager in the hardware and software integration teams, Monday, October 1, 4:45pm-5:45pm "Oracle's SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday, October 2, 10:15am-11:15am "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flier, head of Solaris development, Wednesday, October 3, 10:15am-11:15am "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday, October 1, 12:15pm-1:15pm "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday, October 1, 3:15pm-4:15pm "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday, October 1, 10:45am-11:45am Customer Experiences and Testimonials While Oracle Open World is an opportunity to talk directly with Oracle's development teams, it is also a chance to meet customers and experts who have implemented our technologies and to benefit from their feedback, for example: "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Multichannel & CRM Director, and Pascal Massenet, VP Loyalty & CRM systems, on the business as well as the project and IT benefits, Wednesday, October 3, 1:15pm-2:15pm "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday, October 2, 11:45am-12:45pm "Creating a Maximum Availability Architecture with SPARC SuperCluster", with a testimonial from Carte Wright, Database Engineer at CKI, Wednesday, October 3, 11:45am-12:45pm "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with a testimonial from Stephen Schleiger, Manager Systems Engineering at Navis, Monday, October 1, 1:45pm-2:45pm "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts involved in an implementation for a major banking customer, Monday, October 1, 4:45pm-5:45pm "Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday, October 2, 1:15pm-2:15pm "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with a testimonial from Alan Powers, CTO of CSC, Thursday, October 4, 12:45pm-1:45pm "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, at US Cellular, Tuesday, October 2, 10:15am-11:15am "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global, among others, Tuesday, October 2, 11:45am-12:45pm "Data Warehouse and Big Data Customers' View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday, October 1, 4:45pm-5:45pm "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", a testimonial from Oracle's internal IT on the transformation and migration of our entire storage infrastructure, Tuesday, October 2, 1:15pm-2:15pm Meetings with User Groups and Oracle Development Teams If you plan to arrive early enough, you can also meet the user groups as early as Sunday, or the Oracle development teams every evening, on topics such as: "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz - Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday, September 30, 2:30pm-3:30pm "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman, of Rittman Mead Consulting Ltd, Sunday, September 30, 10:30am-11:30am "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus - IT infrastructure managers at Telenet, Sunday, September 30, 1:15pm-2:00pm "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday, September 30, 9:00am-10:00am "Oracle's Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday, October 1, 6:15pm-7:00pm Test and Evaluate the Solutions Finally, you can even try out the technologies at the Oracle DemoGrounds (1133 Moscone South for the Oracle Systems, OS, and Virtualization areas) and in the Hands-on Labs, such as: "Deploying an IaaS Environment with Oracle VM", Tuesday, October 2, 10:15am-11:15am "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday, October 2, 11:45am-12:45pm (it is strongly recommended to have completed the previous Hands-on Lab before taking this one) "x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday, October 3, 5:00pm-6:00pm "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday, October 3, 1:15pm-2:15pm "Oracle's Pillar Axiom 600 Storage System: Power and Ease", Monday, October 1, 12:15pm-1:15pm "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday, October 1, 1:45pm-2:45pm "Managing Storage in the Cloud", Tuesday, October 2, 5:00pm-6:00pm "Learn How to Write MapReduce on Oracle's Big Data Platform", Monday, October 1, 12:15pm-1:15pm "Oracle Big Data Analytics and R", Tuesday, October 2, 1:15pm-2:15pm "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday, October 1, 10:45am-11:45am "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday, October 1, 4:45pm-5:45pm "Virtualizing Your Oracle Solaris 11 Environment", Tuesday, October 2, 1:15pm-2:15pm "Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday, October 3, 3:30pm-4:30pm In conclusion, a very full week ahead, one that will let you cover every topic at the heart of your concerns, from strategy to implementation... A week that needs preparing for: tailor your agenda from the more than 2,000 sessions, of which I have given you only a sample and all of which you can find online.

    Read the article

  • Building KPIs to monitor your business: It's not really about the Technology

    When I have discussions with people about Business Intelligence, one of the questions that inevitably comes up is about building KPIs and how to accomplish that. At a technical level the concept of a KPI is very simple, almost too simple, in that it is like the tip of an iceberg floating above the water. The key to that iceberg is not really the tip, but the mass of the iceberg that is hidden beneath the surface, upon which the tip sits. The analogy of the iceberg is not meant to indicate that the foundation of the KPI is overly difficult or complex. The disparity in size is meant to indicate that the larger thing that needs to be defined is not the technical tip, but the underlying business definition of what the KPI means. From a technical perspective the KPI consists primarily of the following items: Actual Value: This is the actual data point that is being measured. An example would be something like the amount of sales. Target Value: This is the target goal for the KPI, a number that the Actual Value can be measured against. An example would be $10,000 in monthly sales. Target Indicator Range: This is the definition of the ranges that determine what type of indicator the user will see when comparing the Actual Value to the Target Value. Most often this is presented as a stoplight, but it can be any indicator that shows a status to the user in a quick fashion. Typically this would be something like: Red Light = Actual Value more than 5% below target; Yellow Light = within 5% of target in either direction; Green Light = more than 5% higher than Target Value. Status\Trend Indicator: This is an optional attribute of a KPI that is typically used to show some kind of trend. The vast majority of these indicators are used to show some type of progress against a previous period. As an example, the status indicator might be used to show how the monthly sales compare to last month. With this type of indicator there needs to be not only a definition of what the ranges are for your status indicator, but also what value the number needs to be compared against. (These data points are sketched in a short code example at the end of this article.) Now that we have an idea of what data points a KPI consists of from a technical perspective, let's talk a bit about tools. As you can see, technically there is not a whole lot to them, and the choice of technology is not as important as the definition of the KPIs, which we will get to in a minute. There are many different types of tools in the Microsoft BI stack that you can use to expose your KPIs to the business. These include PerformancePoint, SharePoint, Excel, and SQL Reporting Services. There are pluses and minuses to each technology, and the right technology depends a lot on your goals and how you want to deliver the information to the users. Additionally, there are other non-Microsoft tools that can be used to expose KPI indicators to your business users. Regardless of the technology used as your front end, the heavy lifting of a KPI is in the business definition of the values and benchmarks for that KPI. The discussion about KPIs is very dependent on the history of an organization and how much it has been exposed to the attributes of a KPI. Oftentimes, when discussing KPIs with a business contact who has not been exposed to them, the discussion tends to also be a session educating the business user about what a KPI is and what goes into its definition. Most of the time the business user has an idea of what their actual values are and they have been tracking those numbers for some time, generally in Excel and all manually. 
So they will know the amount of sales last month along with sales two years ago in the same month. Where the conversation tends to get stuck is when you start discussing what the target value should be. The actual value answers the What and How much questions. When you are talking about the Target Value you are asking the question, Is this number good or bad? Typically, the user will know whether or not the value is good or bad, but most of the time they are not able to quantify what is good or bad. Their response is usually something like, "I just know." Because they have been watching the sales quantity for years now, they can tell you that a 5% decrease in sales this month might actually be a good thing, maybe because the salespeople are all waiting until next month when the new versions come out. It can sometimes be very hard to break business people of this habit. One of the general fears is that the status indicator is purely objective and leaves no room for that kind of context. Thus, in the scenario above, the business user is going to be fearful that their boss, just looking at a negative red indicator, is going to haul them out to the woodshed for a bad month. But, on the flip side, if all you are displaying is the amount of sales, only a person with knowledge of last month's sales and the target amount for this month would have any idea whether $10,000 in sales is good or not. Here is where a key point about KPIs needs to be communicated to both the business user and any user who might be viewing the results of that KPI. The KPI is just one tool that is used to report on business performance. The KPI is meant as a quick indicator of one business statistic. It is not meant to tell the entire story. It does not answer the question Why. Its primary purpose is to objectively and quickly expose an area of the business that might warrant more review. There is always going to be the need to do further analysis on any potentially negative or neutral KPI. So, hopefully, once you have convinced your business user to come up with some target numbers and ranges for status indicators, you then need to take the next step and help them answer the Why question. The main question to ask here is, "Okay, you see the indicator and you need to discover why the number is what it is. Where do you go?" The answer is usually a combination of sources. A sales manager might have some of the following items at their disposal: a Marketing report showing a decrease in the promotional discounts for the month, a Pricing report showing the reduction of prices on older models, an Inventory report showing the discontinuation of a particular product line, or a memo showing the ending of a large affiliate partnership. The answers to the question Why are never as simple as a single indicator value. Being able to quickly get to this information is all about designing how a user accesses the KPIs and also how easily they can get to the additional information they need. This is where a dashboard mentality can come in handy. For example, the business user can have a dashboard that shows their KPIs but also has links to some of the common reports that they run regarding sales data. The user's boss may have the same KPIs on their dashboard, but instead of links to individual reports they are going to have a link to a status report, created by the user, that pulls together all the data about the KPI in a summary format the user's boss can review. 
So here are some of the key things to think about when building or evaluating KPIs for your organization: Technology should not be the driving factor. KPIs are of little value without some indicator of whether a value is good, bad, or neutral. KPIs only answer the "Is this number good or bad?" question. Make sure the ability to drill into the Why of a KPI is close at hand and relevant to the user who is viewing the KPI. When defined properly, the KPI is a key business tool that helps monitor business performance across the enterprise in an objective and consistent manner. While the process of defining the business aspects of a KPI can at times feel arduous, the payoff in the end can far outweigh the costs. Some of the benefits of going through this process are a better understanding of the key metrics for an organization and how those metrics are measured, and a consistent snapshot of business performance that can be utilized across the organization. And I think that these are benefits to any organization regardless of the technology or the implementation.
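To make the technical data points described above concrete, here is a minimal C# sketch of a KPI with a stoplight indicator using the ±5% ranges from this article. The class and enum names (Kpi, KpiStatus) are illustrative only and are not part of any Microsoft BI product; a real implementation in PerformancePoint, SQL Reporting Services, or Excel would express the same rules through that tool's own KPI definitions.

public enum KpiStatus { Red, Yellow, Green }

public class Kpi
{
    public string Name { get; set; }
    public decimal ActualValue { get; set; }  // e.g. this month's sales
    public decimal TargetValue { get; set; }  // e.g. the $10,000 monthly sales goal

    // Stoplight indicator based on the ranges described above:
    // Red = more than 5% below target, Green = more than 5% above target,
    // Yellow = within 5% of target in either direction.
    public KpiStatus Status
    {
        get
        {
            if (TargetValue == 0m)
            {
                return KpiStatus.Yellow; // no meaningful target to compare against
            }

            decimal variance = (ActualValue - TargetValue) / TargetValue;

            if (variance < -0.05m) return KpiStatus.Red;
            if (variance > 0.05m) return KpiStatus.Green;
            return KpiStatus.Yellow;
        }
    }
}

// Usage: new Kpi { Name = "Monthly Sales", ActualValue = 9800m, TargetValue = 10000m }.Status
// evaluates to KpiStatus.Yellow because the actual value is within 5% of the target.

Keep in mind, as the article stresses, that the hard part is agreeing on the target value and the ranges with the business, not writing this code.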

    Read the article

  • CodePlex Daily Summary for Wednesday, November 23, 2011

    CodePlex Daily Summary for Wednesday, November 23, 2011Popular ReleasesVisual Leak Detector for Visual C++ 2008/2010: v2.2.1: Enhancements: * strdup and _wcsdup functions support added. * Preliminary support for VS 11 added. Bugs Fixed: * Low performance after upgrading from VLD v2.1. * Memory leaks with static linking fixed (disabled calloc support). * Runtime error R6002 fixed because of wrong memory dump format. * version.h fixed in installer. * Some PVS studio warning fixed.NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.10: 3.6.0.10 22-Nov-2011 Update: Removed PreEmptive Platform integration (PreEmptive analytics) Removed all PreEmptive attributes Removed PreEmptive.dll assembly references from all projects Added first support to ADAM/AD LDS Thanks to PatBea. Work Item 9775: http://netsqlazman.codeplex.com/workitem/9775Developer Team Article System Management: DTASM v1.3: ?? ??? ???? 3 ????? ???? ???? ????? ??? : - ????? ?????? ????? ???? ?? ??? ???? ????? ?? ??? ? ?? ???? ?????? ???? ?? ???? ????? ?? . - ??? ?? ???? ????? ???? ????? ???? ???? ?? ????? , ?????? ????? ????? ?? ??? . - ??? ??????? ??? ??? ???? ?? ????? ????? ????? .SharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323) FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue # 447) The FBA Pack will now work in a zone that does not have FBA enabled (Another zone must have FBA enabled, and the zone must contain the me...SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 6: 1. improved support for schema 2. added find reference when right click on object list 3. added object rename supportBugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework and is a requirement. Because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...Anno 2070 Assistant: v0.1.0 (STABLE): Version 0.1.0 Features Production Chains Eco Production Chains (Complete) Tycoon Production Chains (Disabled - Incomplete) Tech Production Chains (Disabled - Incomplete) Supply (Disabled - Incomplete) Calculator (Disabled - Incomplete) Building Layouts Eco Building Layouts (Complete) Tycoon Building Layouts (Disabled - Incomplete) Tech Building Layouts (Disabled - Incomplete) Credits (Complete)Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloadedVsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 30 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 30 (beta)New: Support for TortoiseSVN 1.7 added. 
(the download contains both setups, for TortoiseSVN 1.6 and 1.7) New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows to group items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: The installer didn...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role) To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).WPF Converters: WPF Converters V1.2.0.0: support for enumerations, value types, and reference types in the expression converter's equality operators the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062) StyleCop conformance (more or less)Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default Change - KeyValurPairConverter no longer cares about the order of the key and value properties Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone Fix - Fixed boolean values sometimes being capitalized when converting to XML Fix - Fixed error when deserializing ConcurrentDictionary Fix - Fixed serializing some Uris returning the incorrect value Fix - Fixed occasional error when...Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Replaced 'Rebuild' with 'Refresh' throughout entire code. Rebuild will now be known as Refresh. mc_com.exe has been fully updated TV Show Resolutions... Resolved issue #206 - having to hit save twice when updating runtime manually Shrunk cache size and lowered loading times f...Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.ASP.net Awesome Samples (Web-Forms): 1.0 samples: Demos and Tutorials for ASP.net Awesome VS2008 are in .NET 3.5 VS2010 are in .NET 4.0 (demos for the ASP.net Awesome jQuery Ajax Controls)SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblyAJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release Version 51116November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). 
Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.36: Fix for issue #16908: string literals containing ASP.NET replacement syntax fail if the ASP.NET code contains the same character as the string literal delimiter. Also, we shouldn't be changing the delimiter for those literals or combining them with other literals; the developer may have specifically chosen the delimiter used because of possible content inserted by ASP.NET code. This logic is normally off; turn it on via the -aspnet command-line flag (or the Code.Settings.AllowEmbeddedAspNetBl...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...New ProjectsActiveWorlds World Server Admin PowerShell SnapIn: The purpose of this PowerShell SnapIn is to provide a set of tools to administer the world server from PowerShell. It leverages the ActiveWorlds SDK .NET Wrapper to provide this functionality.Aigu: Enter special characters like you would on your mobile phone. For instance, if you want to type 'é', you just hold down 'e' and a menu will appear. Selected the desired character using the arrow keys and press 'enter'. Simple but powerful.Are you workaholic?: Are you a workaholic? Did your Doctor advice you not to stare at the computer monitor for a long time? Then this app is perfectly made for you. It runs in the background, and alerts you to take periodic rests for your eyes and body. What's more, It's open source (MS-PL).ATDIS PoC: privateAuto Version Web Assets: The AVWA project is an HTTP Module written in C# that is designed to allow for versioning of various web assets such as .CSS and .JS files. This allows you to publish new versions of these files without having to force the server or the client browsers to expire cache.Bachelor Thesis Algorithm Test Bed: Algorithm Test Bed for my Bachelor ThesisBase64: Simple application helps converting strings and files from or to Base64 string. You can use any encoding to convert while a sidebar previews decoded string for all other encodings.BoracayExpress: BoracayExpressC++ Framework for Test Driven Development: A testing framework for C++ written in C++.Class2Table: Class2Table aka Entity2Table. Easy tool that allows creation of SQL tables from .Net types.Code for Demos & Experiments: This is where I will post code from demos and presentationsCodeMaker: CodeMaker?????????: 1、?????????? 2、???? 3、????? 4、??Python????????? ConsoleCommand: ConsoleCommand provides certain .Net commands for access from javascript console engines. Included are commands to set the text and background colors, as well as list and extract resources compiled in a .Net dll. Converter: Character code conversion tools ???????? CryptoInator - self contained, self-encrypting, self-decrypting image viewer: Original developed to encrypt and store NemID images in Denmark. 
DAiBears: Something, something, botDelicious Notify Plugin: Lets you push a blog post straight to Delicious from Live WriterDeveloperFile: Compresses Javascripts using the YUI .NET project. Loops through the root folder and subfolders for files matching the debug extension and creates new files using the release extension. (File extensions must match exactly).DotNetNuke SharePoint File Explorer: A DotNetNuke SharePoint File ExplorerDouban FM: WP7 Douban FM appGame Lib: Game Library is a open-source game library to allow focusing on the fun part of a game. It is developed in C#, but will be ported to C++ and VB.net.Google reader notes to Delicious Export tool (WPF): Google reader discontinued note in reader features. Current google reader allows to export users old notes in JSON format, This App will parse the JSON file & upload it to it delicious , delicious is a good alternative for note in readerHtml Source Transmitter Control: This web control allows getting a source of a web page, that will displayed before submit. So, developer can store a view of the html page, that was before server exception. It helps to reproduce bugs and can be used with other logging systems.Ideopuzzle: A puzzle gameImageShack-Uploader: This project demonstrates how to upload files automated to imageshack.us and other image hosters with C#.Insert Acronym Tags: Lets you insert <acronym> and <abbr> tags into your blog entry more easily.Insert Quick Link: Allows you to paste a link into the Writer window and have the a window similar to the one in Writer where you can change what text is to appear, open in new window, etc.Insert Video Plugin: Allows you to insert a video into a blog entry from a multitude of different sitesIoCWrap: Provides a wrapper to the various IoC container implementations so that it is possible to switch to a different provider without changing any application code.kaveepoj: sharepoint projectKinect Quiz Engine: Fun quiz game for the Kinect.Klaverjas: Test application for testing different new technologies in .NET (WCF, DataServices, C# stuff, Entity...etc.)Man In The Middle: A cyberpunk themed action with puzzle and strategy elements. Made with XNA as part of a game development course at the IT University of Copenhagen by Bo Bendtsen, Jonas Flensbak, Daniel Kromand, Jess Rahbek & Darryl Woodford.MediaSelektor: Simple tool to select mediasMicajah Mindtouch Deki Wiki Copier: Small C# application to move data between 2 Deki Wiki installs or, more importantly, from a wik.is account to a locally installed systemMineFlagger: MineFlagger is a mine clearing game modeled after Microsoft’s Minesweeper. In addition to standard play, MineFlagger incorporates an AI for fun and training.myXbyqwrhjadsfasfhgf: myXbyqwrhjadsfasfhgfnatoop: natoopNauplius.KeyStore: Provides secure application key storage backed by SQL 2008 and Active Directory.ObjectDB: An object database written using C# 4 and Mono.Cecil.PaceR: PaceR is an attempt to encapsulate a lot of the common code functionality I use on different projects. Instead of recreating functionality from memory or worse, copying from older projects, I'd like to have a central location to maintain this common code. Parseq: Parseq is a Parser Combinator library written in C# (version 2.0).PowerShell Network Adapter Configuration module: PowerShell Network Adapter Configuration module is a PowerShell module which provides functions for managing network adapters using WMI.public traffic tracker: This is a university project for a .net course. 
We develop a public traffic tracker applications for Windows Phone 7 devices, that can give information about the actual positions of the nearest vehicle on a given line. The speciality is that we use only the GPS information of the users' WP7 devices, so this is a completely software solution without any hardware investment. The disatvantage is that for the real operation we would need a lot of active WP7 user.puyo: puyoRadioTroll: Projeto web Radio TrollRead Feed Community: Read Feed CommunityReviewer: Reviewer.dk - Dansk spil og anmeldelsessite.Rollout Sharepoint Solutions - ROSS: ROSS performs the following actions: - Delete sitecollection and restart services - 'Get Latest Version' from SourceSafe - Rebuild Solution - Install all wsp solutions - Create SiteCollections - Check for build en provisioning errors - Send email to developers if errors occurredSchool Management: school managementSQL File Executer: This project is a class library written in c# which is used for executing *.sql files in remote server. Simply one dll file. You include it in your web project, add using statement at the top of your page, pass the parameters inside. Rest, it will do.Startup Manager: Startup Manager launches all startup programs at a managed rate therefore meaning that your computer doesn't crash everytime it starts up and you can use it immediately.stetic: ...Test Infrastructure Guidance: The purpose of this project is to provide guidance to testers in using TFS effectively as an ALM solution. TFS is much more than a simple code repository. Used with Visual Studio it can form a powerful testing solution and remove a lot of pain in dealing with test infrastructure overhead.Tête-à-tête: Tete-a-tete is an address book with a built-in function to send electronic mail over the Internet.Tipeysh! - Add-in that helps you creating C/C++ header files on a single click: Are you also feel miserable when you need to create a new header file in your Visual Studio C/C++ project? Repeatedly choosing "new header file", then writing the annoying (but needed) "#ifndef" section, then writing the class name with it's "private", "protected" and "public" access modifiers... too much clicks and typewriting! Well, there is a solution: Tipeysh! is a simple, easy to use, very handy and configurable Visual Studio Add-In, compatible for both the 2005 and 2008 versions. Once ...UMN Dashboard Project: academic projUsersMOSS: UsersMOSS est une petite application permettant de consulter sur un serveur MOSS les sites web (SPWeb) les users (SPUser), et les groupes (SPGroup). Cette application utilise le modèle objet de MOSS pour inspecter le contenu des objets d'un serveur MOSS. Cette application est loin d'être professionnelle, ou même terminée, mais elle me rend très souvent service. Surtout ne l'utilisez pas sur un serveur de production car le gestion du GC n'est pas faite, ce qui peut provoquer des plantages de v...UtilityLibrary.Win32: UtilityLibrary.Win32UW iLearn: The iLearn activity inference platform is a suite of desktop and mobile tools for logging, modeling, and classifying sensor data for mobile devices. 
It was created at the University of Washington.VsDocGen: Dynamic javascript documentation generation directly from xml comment documented source code.Windows Live Spaces Photo Album plugin: This is going to be a plugin for Windows Live Writer that will allow you to browse a Windows Live Space Photo Album.Windows Live Writer Plugin for Amazon Books using CueCat: This Windows Live Writer Plugin is for users who use WLW and wish to use their CueCat to scan books. ItemLookups are run against Amazon via its AWS and book image, title, author, and publisher is returned. This project was first created by Scott Hanselman on MSDN's Coding4Fun! X7: X7 makes it easier for win7user to clean the system. You'll no longer have to delete useless stuff in your win7. It's developed in bat.xDT - Commander: Using this application, the user can assign shortcuts (short texts) for various links/URLs. These short texts will be typed into a Textbox to then launch/go to the target (similar to the "Run" program in Windows).

    Read the article

  • Scripting Language Sessions at Oracle OpenWorld and MySQL Connect, 2012

    - by cj
    This posts highlights some great scripting language sessions coming up at the Oracle OpenWorld and MySQL Connect conferences. These events are happening in San Francisco from the end of September. You can search for other interesting conference sessions in the Content Catalog. Also check out what is happening at JavaOne in that event's Content Catalog (I haven't included sessions from it in this post.) To find the timeslots and locations of each session, click their respective link and check the "Session Schedule" box on the top right. GEN8431 - General Session: What’s New in Oracle Database Application Development This general session takes a look at what’s been new in the last year in Oracle Database application development tools using the latest generation of database technology. Topics range from Oracle SQL Developer and Oracle Application Express to Java and PHP. (Thomas Kyte - Architect, Oracle) BOF9858 - Meet the Developers of Database Access Services (OCI, ODBC, DRCP, PHP, Python) This session is your opportunity to meet in person the Oracle developers who have built Oracle Database access tools and products such as the Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), and Open Database Connectivity (ODBC) drivers; Transparent Application Failover (TAF); Oracle Database Instant Client; Database Resident Connection Pool (DRCP); Oracle Net Services, and so on. The team also works with those who develop the PHP, Ruby, Python, and Perl adapters for Oracle Database. Come discuss with them the features you like, your pains, and new product enhancements in the latest database technology. CON8506 - Syndication and Consolidation: Oracle Database Driver for MySQL Applications This technical session presents a new Oracle Database driver that enables you to run MySQL applications (written in PHP, Perl, C, C++, and so on) against Oracle Database with almost no code change. Use cases for such a driver include application syndication such as interoperability across a relationship database management system, application migration, and database consolidation. In addition, the session covers enhancements in database technology that enable and simplify the migration of third-party databases and applications to and consolidation with Oracle Database. Attend this session to learn more and see a live demo. (Srinath Krishnaswamy - Director, Software Development, Oracle. Kuassi Mensah - Director Product Management, Oracle. Mohammad Lari - Principal Technical Staff, Oracle ) CON9167 - Current State of PHP and MySQL Together, PHP and MySQL power large parts of the Web. The developers of both technologies continue to enhance their software to ensure that developers can be satisfied despite all their changing and growing needs. This session presents an overview of changes in PHP 5.4, which was released earlier this year and shows you various new MySQL-related features available for PHP, from transparent client-side caching to direct support for scaling and high-availability needs. (Johannes Schlüter - SoftwareDeveloper, Oracle) CON8983 - Sharding with PHP and MySQL In deploying MySQL, scale-out techniques can be used to scale out reads, but for scaling out writes, other techniques have to be used. To distribute writes over a cluster, it is necessary to shard the database and store the shards on separate servers. 
This session provides a brief introduction to traditional MySQL scale-out techniques in preparation for a discussion on the different sharding techniques that can be used with MySQL server and how they can be implemented with PHP. You will learn about static and dynamic sharding schemes, their advantages and drawbacks, techniques for locating and moving shards, and techniques for resharding. (Mats Kindahl - Senior Principal Software Developer, Oracle) CON9268 - Developing Python Applications with MySQL Utilities and MySQL Connector/Python This session discusses MySQL Connector/Python and the MySQL Utilities component of MySQL Workbench and explains how to write MySQL applications in Python. It includes in-depth explanations of the features of MySQL Connector/Python and the MySQL Utilities library, along with example code to illustrate the concepts. Those interested in learning how to expand or build their own utilities and connector features will benefit from the tips and tricks from the experts. This session also provides an opportunity to meet directly with the engineers and provide feedback on your issues and priorities. You can learn what exists today and influence future developments. (Geert Vanderkelen - Software Developer, Oracle) BOF9141 - MySQL Utilities and MySQL Connector/Python: Python Developers, Unite! Come to this lively discussion of the MySQL Utilities component of MySQL Workbench and MySQL Connector/Python. It includes in-depth explanations of the features and dives into the code for those interested in learning how to expand or build their own utilities and connector features. This is an audience-driven session, so put on your best Python shirt and let’s talk about MySQL Utilities and MySQL Connector/Python. (Geert Vanderkelen - Software Developer, Oracle. Charles Bell - Senior Software Developer, Oracle) CON3290 - Integrating Oracle Database with a Social Network Facebook, Flickr, YouTube, Google Maps. There are many social network sites, each with their own APIs for sharing data with them. Most developers do not realize that Oracle Database has base tools for communicating with these sites, enabling all manner of information, including multimedia, to be passed back and forth between the sites. This technical presentation goes through the methods in PL/SQL for connecting to, and then sending and retrieving, all types of data between these sites. (Marcelle Kratochvil - CTO, Piction) CON3291 - Storing and Tuning Unstructured Data and Multimedia in Oracle Database Database administrators need to learn new skills and techniques when the decision is made in their organization to let Oracle Database manage its unstructured data. They will face new scalability challenges. A single row in a table can become larger than a whole database. This presentation covers the techniques a DBA needs for managing the large volume of data in a standard Oracle Database instance. (Marcelle Kratochvil - CTO, Piction) CON3292 - Using PHP, Perl, Visual Basic, Ruby, and Python for Multimedia in Oracle Database These five programming languages are just some of the most popular ones in use at the moment in the marketplace. This presentation details how you can use them to access and retrieve multimedia from Oracle Database. It covers programming techniques and methods for achieving faster development against Oracle Database. 
(Marcelle Kratochvil - CTO, Piction) UGF5181 - Building Real-World Oracle DBA Tools in Perl Perl is not normally associated with building mission-critical application or DBA tools. Learn why Perl could be a good choice for building your next killer DBA app. This session draws on real-world experience of building DBA tools in Perl, showing the framework and architecture needed to deal with portability, efficiency, and maintainability. Topics include Perl frameworks; Which Comprehensive Perl Archive Network (CPAN) modules are good to use; Perl and CPAN module licensing; Perl and Oracle connectivity; Compiling and deploying your app; An example of what is possible with Perl. (Arjen Visser - CEO & CTO, Dbvisit Software Limited) CON3153 - Perl: A DBA’s and Developer’s Best (Forgotten) Friend This session reintroduces Perl as a language of choice for many solutions for DBAs and developers. Discover what makes Perl so successful and why it is so versatile in our day-to-day lives. Perl can automate all those manual tasks and is truly platform-independent. Perl may not be in the limelight the way other languages are, but it is a remarkable language, it is still very current with ongoing development, and it has amazing online resources. Learn what makes Perl so great (including CPAN), get an introduction to Perl language syntax, find out what you can use Perl for, hear how Oracle uses Perl, discover the best way to learn Perl, and take away a small Perl project challenge. (Arjen Visser - CEO & CTO, Dbvisit Software Limited) CON10332 - Oracle RightNow CX Cloud Service’s Connect PHP API: Intro, What’s New, and Roadmap Connect PHP is a public API that enables developers to build solutions with the Oracle RightNow CX Cloud Service platform. This API is used primarily by developers working within the Oracle RightNow Customer Portal Cloud Service framework who are looking to gain access to data and services hosted by the Oracle RightNow CX Cloud Service platform through a backward-compatible API. Connect for PHP leverages the same data model and services as the Connect Web Services for SOAP API. Come to this session to get an introduction and learn what’s new and what’s coming up. (Mark Rhoads - Senior Principal Applications Engineer, Oracle. Mark Ericson - Sr. Principle Product Manager, Oracle) CON10330 - Oracle RightNow CX Cloud Service APIs and Frameworks Overview Oracle RightNow CX Cloud Service APIs are available in the following areas: desktop UI, Web services, customer portal, PHP, and knowledge. These frameworks provide access to Oracle RightNow CX Cloud Service’s Connect Common Object Model and custom objects. This session provides a broad overview of capabilities in all these areas. (Mark Ericson - Sr. Principle Product Manager, Oracle)

    Read the article

  • C# Extension Methods - To Extend or Not To Extend...

    - by James Michael Hare
    I've been thinking a lot about extension methods lately, and I must admit I both love them and hate them. They are a lot like sugar: they taste so nice and sweet, but they'll rot your teeth if you eat them too much. I can't deny that they are useful and very handy. One of the major components of the Shared Component library where I work is a set of useful extension methods. But I also can't deny that they tend to be overused and abused to willy-nilly extend every living type. So what constitutes a good extension method? Obviously, you can write an extension method for nearly anything, whether it is a good idea or not. Many times, in fact, an idea seems like a good extension method but in retrospect really doesn't fit. So what's the litmus test? To me, an extension method should be like in the movies when a person runs into their twin, separated at birth: you just know you're related. Obviously, that's hard to quantify, so let's try to put a few rules of thumb around them. A good extension method should: (1) apply to any possible instance of the type it extends; (2) simplify logic and improve readability/maintainability; (3) apply to the most specific type or interface applicable; and (4) be isolated in a namespace so that it does not pollute IntelliSense. So let's look at a few examples in relation to these rules. The first rule, to me, is the most important of all. Once again, it bears repeating: a good extension method should apply to all possible instances of the type it extends. It should feel like the long-lost relative that should have been included in the original class but somehow was missing from the family tree. Take this nifty little int extension. I saw it once in a blog and at first I really thought it was pretty cool, but then I started noticing a code smell I couldn't quite put my finger on. So let's look:       public static class IntExtensions     {         public static int Seconds(this int num)         {             return num * 1000;         }           public static int Minutes(this int num)         {             return num * 60000;         }     }     This is so you could do things like:       ...     Thread.Sleep(5.Seconds());     ...     proxy.Timeout = 1.Minutes();     ...     Awww, you say, that's cute! Well, that's the problem: it's kitschy and it doesn't always apply (and incidentally you could achieve the same thing with TimeSpan.FromSeconds(5)). It's syntactical candy that looks cool, but tends to rot and pollute the code. It would allow things like:       total += numberOfTodaysOrders.Seconds();     which makes no sense and should never be allowed. The problem is you're applying an extension method to a logical domain, not a type domain. That is, the extension method Seconds() doesn't really apply to ALL ints, it applies to ints that are representative of time that you want to convert to milliseconds.    Do you see what I mean? The two problems, in a nutshell, are that a) Seconds() called off a non-time value makes no sense and b) calling Seconds() off something to pass to something that does not take milliseconds will be off by a factor of 1000 or worse.   Thus, in my mind, you should only ever have an extension method that applies to the whole domain of that type.   
For example, this is one of my personal favorites:       public static bool IsBetween<T>(this T value, T low, T high)         where T : IComparable<T>     {         return value.CompareTo(low) >= 0 && value.CompareTo(high) <= 0;     }   This allows you to check if any IComparable<T> is within an upper and lower bound. Think of how many times you type something like:       if (response.Employee.Address.YearsAt >= 2         && response.Employee.Address.YearsAt <= 10)     {     ...     }     Now, you can instead type:       if(response.Employee.Address.YearsAt.IsBetween(2, 10))     {     ...     }     Note that this applies to all IComparable<T> -- that's ints, chars, strings, DateTime, etc -- and does not depend on any logical domain. In addition, it satisfies the second point and actually makes the code more readable and maintainable.   Let's look at the third point. In it we said that an extension method should fit the most specific interface or type possible. Now, I'm not saying if you have something that applies to enumerables, you create an extension for List, Array, Dictionary, etc (though you may have reasons for doing so), but that you should beware of making things TOO general.   For example, let's say we had an extension method like this:       public static T ConvertTo<T>(this object value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This lets you do more fluent conversions like:       double d = "5.0".ConvertTo<double>();     However, if you dig into Reflector (LOVE that tool) you will see that if the type you are calling on does not implement IConvertible, what you convert to MUST be the exact type or it will throw an InvalidCastException. Now this may or may not be what you want in this situation, and I leave that up to you. Things like this would fail:       object value = new Employee();     ...     // class cast exception because typeof(IEmployee) != typeof(Employee)     IEmployee emp = value.ConvertTo<IEmployee>();       Yes, that's a downfall of working with Convertible in general, but if you wanted your fluent interface to be more type-safe so that ConvertTo were only callable on IConvertibles (and let casting be a manual task), you could easily make it:         public static T ConvertTo<T>(this IConvertible value)     {         return (T)Convert.ChangeType(value, typeof(T));     }         This is what I mean by choosing the best type to extend. Consider that if we used the previous (object) version, every time we typed a dot ('.') on an instance we'd pull up ConvertTo() whether it was applicable or not. By filtering our extension method down to only valid types (those that implement IConvertible) we greatly reduce our IntelliSense pollution and apply a good level of compile-time correctness.   Now my fourth rule is just my general rule-of-thumb. Obviously, you can make extension methods as in-your-face as you want. I included all mine in my work libraries in its own sub-namespace, something akin to:       namespace Shared.Core.Extensions { ... }     This is in a library called Shared.Core, so just referencing the Core library doesn't pollute your IntelliSense, you have to actually do a using on Shared.Core.Extensions to bring the methods in. This is very similar to the way Microsoft puts its extension methods in System.Linq. This way, if you want 'em, you use the appropriate namespace. If you don't want 'em, they won't pollute your namespace.   
To really make this work, however, that namespace should only include extension methods and subordinate types those extensions themselves may use. If you plant other useful classes in those namespaces, once a user includes it, they get all the extensions too.   Also, just as a personal preference, extension methods that aren't simply syntactical shortcuts, I like to put in a static utility class and then have extension methods for syntactical candy. For instance, I think it imaginable that any object could be converted to XML:       namespace Shared.Core     {         // A collection of XML Utility classes         public static class XmlUtility         {             ...             // Serialize an object into an xml string             public static string ToXml(object input)             {                 var xs = new XmlSerializer(input.GetType());                   // use new UTF8Encoding here, not Encoding.UTF8. The later includes                 // the BOM which screws up subsequent reads, the former does not.                 using (var memoryStream = new MemoryStream())                 using (var xmlTextWriter = new XmlTextWriter(memoryStream, new UTF8Encoding()))                 {                     xs.Serialize(xmlTextWriter, input);                     return Encoding.UTF8.GetString(memoryStream.ToArray());                 }             }             ...         }     }   I also wanted to be able to call this from an object like:       value.ToXml();     But here's the problem, if i made this an extension method from the start with that one little keyword "this", it would pop into IntelliSense for all objects which could be very polluting. Instead, I put the logic into a utility class so that users have the choice of whether or not they want to use it as just a class and not pollute IntelliSense, then in my extensions namespace, I add the syntactical candy:       namespace Shared.Core.Extensions     {         public static class XmlExtensions         {             public static string ToXml(this object value)             {                 return XmlUtility.ToXml(value);             }         }     }   So now it's the best of both worlds. On one hand, they can use the utility class if they don't want to pollute IntelliSense, and on the other hand they can include the Extensions namespace and use as an extension if they want. The neat thing is it also adheres to the Single Responsibility Principle. The XmlUtility is responsible for converting objects to XML, and the XmlExtensions is responsible for extending object's interface for ToXml().

    Read the article

  • Security in Software

    The term security has many meanings based on the context and perspective in which it is used. Security from the perspective of software/system development is the continuous process of maintaining the confidentiality, integrity, and availability of a system, its sub-systems, and its data. At a very high level, this definition can be restated as follows: computer security is a continuous process dealing with confidentiality, integrity, and availability on multiple layers of a system. The key aspects of software security are integrity, confidentiality, and availability. Integrity within a system is the concept of ensuring that only authorized users can manipulate information, and only through authorized methods and procedures. An example of this can be seen in a simple lead management application. If the business decides that each sales member may update only their own leads while sales managers may update all leads, then an integrity violation occurs when a sales member attempts to update a lead entered by someone else, because this violates the business rule that a lead may be updated only by the originating sales member. Confidentiality within a system is the concept of preventing unauthorized access to specific information or tools. In a perfect world, the very existence of confidential information or tools would be unknown to those who do not have access. When this concept is applied within the context of an application, only the authorized information and tools are made available. If we look at the sales lead management system again, leads can only be updated by the originating sales members. Given this rule, we can say that each sales lead is confidential between the system and the sales person who entered the lead into the system; the other sales team members do not need to know about the lead, let alone access it. Availability within a system is the concept of authorized users being able to access the system. A real-world example can again be seen in the lead management system. If that system is hosted on a web server, an IP restriction can be put in place to limit access to the system based on the requesting IP address. If, in this example, all of the sales members were accessing the system from the 192.168.1.23 IP address, then removing access from all other IPs would be needed to ensure that improper access to the system is prevented while approved users can still reach it from an authorized location. In essence, if the requesting user is not coming from an authorized IP address, the system appears unavailable to them; this is one way of controlling where a system is accessed from. Through the years, several design principles have been identified as beneficial when integrating security aspects into a system. In various combinations, these principles allow a system to achieve the previously defined aspects of security based on generic architectural models. The commonly cited security design principles are Least Privilege, Fail-Safe Defaults, Economy of Mechanism, Complete Mediation, Open Design, Separation of Privilege, Least Common Mechanism, Psychological Acceptability, and Defense in Depth. Least Privilege Design Principle: The Least Privilege design principle requires a minimalistic approach to granting user access rights to specific information and tools.
    Additionally, access rights should be time-based, so that access to a resource is limited to the time needed to complete the necessary tasks. Granting access beyond this scope allows for unnecessary access and the potential for data to be updated outside the approved context. Assigning minimal access rights limits system-damaging attacks from users, whether intentional or not, because reducing the number of potential interactions with a resource limits data changes and prevents damage from occurring by accident or error. Fail-Safe Defaults Design Principle: The Fail-Safe Defaults design principle bases access to resources on explicit grants rather than exclusion. Resources may be accessed only if explicit access has been granted to a user; by default, users have no access to any resource until access is given. This approach prevents unauthorized users from gaining access to a resource before access is granted. Economy of Mechanism Design Principle: The Economy of Mechanism design principle requires that systems be designed as simply and as small as possible, because design and implementation errors can result in unauthorized access to resources that would not be noticed during normal use. Complete Mediation Design Principle: The Complete Mediation design principle states that every access to every resource must be validated for authorization. Open Design Design Principle: The Open Design design principle holds that the security of a system and its algorithms should not depend on the secrecy of its design or implementation. Separation of Privilege Design Principle: The Separation of Privilege design principle requires that approved resource access attempts be granted based on more than a single condition; for example, a user should be validated for active status and also have access to the specific resource. Least Common Mechanism Design Principle: The Least Common Mechanism design principle declares that mechanisms used to access resources should not be shared. Psychological Acceptability Design Principle: The Psychological Acceptability design principle requires that security mechanisms not make resources more difficult to access than they would be if the mechanisms were not present. Defense in Depth Design Principle: The Defense in Depth design principle is the concept that layering resource access authorization verification within a system reduces the chance of a successful attack, because this layered approach requires unauthorized users to circumvent each authorization check to gain access to a resource. When designing a system that must meet a security quality attribute, architects need to consider the scope of the security needs and the minimum required security qualities. Not every system needs to use all of the basic security design principles; most use one or more in combination, based on the company’s and architect’s threshold for system security, because security adds an additional layer to the overall system and can affect performance. That is why the minimum acceptable level of security needs to be defined when a system is designed: this quality attribute has to be weighed against the other system quality attributes so that the system adheres to all of them according to their priorities. Resources: Barnum, Sean. Gegick, Michael. (2005). Least Privilege.
Retrieved on August 28, 2011 from https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/351-BSI.html Saltzer, Jerry. (2011). BASIC PRINCIPLES OF INFORMATION PROTECTION. Retrieved on August 28, 2011 from  http://web.mit.edu/Saltzer/www/publications/protection/Basic.html Barnum, Sean. Gegick, Michael. (2005). Defense in Depth. Retrieved on August 28, 2011 from  https://buildsecurityin.us-cert.gov/bsi/articles/knowledge/principles/347-BSI.html Bertino, Elisa. (2005). Design Principles for Security. Retrieved on August 28, 2011 from  http://homes.cerias.purdue.edu/~bhargav/cs526/security-9.pdf
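    As a rough, hypothetical illustration of the Fail-Safe Defaults and Least Privilege principles discussed above (none of this code comes from the referenced sources), the sketch below denies everything that has not been explicitly granted and makes each grant time-bound; the LeadAuthorizer type and the sales-lead scenario are assumptions borrowed from the article's own example.

      using System;
      using System.Collections.Generic;

      // Deny-by-default authorization sketch for the lead management example:
      // a user may update a lead only if an explicit, unexpired grant exists
      // (fail-safe defaults, least privilege, and complete mediation on each check).
      public class LeadAuthorizer
      {
          // Key is "userId:leadId"; no entry means no access.
          private readonly Dictionary<string, DateTime> _grants = new Dictionary<string, DateTime>();

          public void Grant(string userId, int leadId, TimeSpan validFor)
          {
              // Time-bound right: the grant expires when the task window closes.
              _grants[userId + ":" + leadId] = DateTime.UtcNow + validFor;
          }

          public bool CanUpdate(string userId, int leadId)
          {
              DateTime expires;
              // Fail-safe default: anything not explicitly granted is denied.
              return _grants.TryGetValue(userId + ":" + leadId, out expires)
                     && DateTime.UtcNow <= expires;
          }
      }

      public class Demo
      {
          public static void Main()
          {
              var authorizer = new LeadAuthorizer();
              authorizer.Grant("sales.amy", 42, TimeSpan.FromHours(8));

              Console.WriteLine(authorizer.CanUpdate("sales.amy", 42));   // True: explicit grant
              Console.WriteLine(authorizer.CanUpdate("sales.bob", 42));   // False: denied by default
          }
      }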

    Read the article

  • SQL SERVER – Thinking about Deprecated, Discontinued Features and Breaking Changes while Upgrading to SQL Server 2012 – Guest Post by Nakul Vachhrajani

    - by pinaldave
    Nakul Vachhrajani is a Technical Specialist and systems development professional with iGATE having a total IT experience of more than 7 years. Nakul is an active blogger with BeyondRelational.com (150+ blogs), and can also be found on forums at SQLServerCentral and BeyondRelational.com. Nakul has also been a guest columnist for SQLAuthority.com and SQLServerCentral.com. Nakul presented a webcast on the “Underappreciated Features of Microsoft SQL Server” at the Microsoft Virtual Tech Days Exclusive Webcast series (May 02-06, 2011) on May 06, 2011. He is also the author of a research paper on Database upgrade methodologies, which was published in a CSI journal, published nationwide. In addition to his passion about SQL Server, Nakul also contributes to the academia out of personal interest. He visits various colleges and universities as an external faculty to judge project activities being carried out by the students. Disclaimer: The opinions expressed herein are his own personal opinions and do not represent his employer’s view in anyway. Blog | LinkedIn | Twitter | Google+ Let us hear the thoughts of Nakul in first person - Those who have been following my blogs would be aware that I am recently running a series on the database engine features that have been deprecated in Microsoft SQL Server 2012. Based on the response that I have received, I was quite surprised to know that most of the audience found these to be breaking changes, when in fact, they were not! It was then that I decided to write a little piece on how to plan your database upgrade such that it works with the next version of Microsoft SQL Server. Please note that the recommendations made in this article are high-level markers and are intended to help you think over the specific steps that you would need to take to upgrade your database. Refer the documentation – Understand the terms Change is the only constant in this world. Therefore, whenever customer requirements, newer architectures and designs require software vendors to make a change to the keywords, functions, etc; they ensure that they provide their end users sufficient time to migrate over to the new standards before dropping off the old ones. Microsoft does that too with it’s Microsoft SQL Server product. Whenever a new SQL Server release is announced, it comes with a list of the following features: Breaking changes These are changes that would break your currently running applications, scripts or functionalities that are based on earlier version of Microsoft SQL Server These are mostly features whose behavior has been changed keeping in mind the newer architectures and designs Lesson: These are the changes that you need to be most worried about! Discontinued features These features are no longer available in the associated version of Microsoft SQL Server These features used to be “deprecated” in the prior release Lesson: Without these changes, your database would not be compliant/may not work with the version of Microsoft SQL Server under consideration Deprecated features These features are those that are still available in the current version of Microsoft SQL Server, but are scheduled for removal in a future version. 
These may be removed in either the next version or any other future version of Microsoft SQL Server The features listed for deprecation will compose the list of discontinued features in the next version of SQL Server Lesson: Plan to make necessary changes required to remove/replace usage of the deprecated features with the latest recommended replacements Once a feature appears on the list, it moves from bottom to the top, i.e. it is first marked as “Deprecated” and then “Discontinued”. We know of “Breaking change” comes later on in the product life cycle. What this means is that if you want to know what features would not work with SQL Server 2012 (and you are currently using SQL Server 2008 R2), you need to refer the list of breaking changes and discontinued features in SQL Server 2012. Use the tools! There are a lot of tools and technologies around us, but it is rarely that I find teams using these tools religiously and to the best of their potential. Below are the top two tools, from Microsoft, that I use every time I plan a database upgrade. The SQL Server Upgrade Advisor Ever since SQL Server 2005 was announced, Microsoft provides a small, very light-weight tool called the “SQL Server upgrade advisor”. The upgrade advisor analyzes installed components from earlier versions of SQL Server, and then generates a report that identifies issues to fix either before or after you upgrade. The analysis examines objects that can be accessed, such as scripts, stored procedures, triggers, and trace files. Upgrade Advisor cannot analyze desktop applications or encrypted stored procedures. Refer the links towards the end of the post to know how to get the Upgrade Advisor. The SQL Server Profiler Another great tool that you can use is the one most SQL Server developers & administrators use often – the SQL Server profiler. SQL Server Profiler provides functionality to monitor the “Deprecation” event, which contains: Deprecation announcement – equivalent to features to be deprecated in a future release of SQL Server Deprecation final support – equivalent to features to be deprecated in the next release of SQL Server You can learn more using the links towards the end of the post. A basic checklist There are a lot of finer points that need to be taken care of when upgrading your database. But, it would be worth-while to identify a few basic steps in order to make your database compliant with the next version of SQL Server: Monitor the current application workload (on a test bed) via the Profiler in order to identify usage of features marked as Deprecated If none appear, you are all set! (This almost never happens) Note down all the offending queries and feature usages Run analysis sessions using the SQL Server upgrade advisor on your database Based on the inputs from the analysis report and Profiler trace sessions, Incorporate solutions for the breaking changes first Next, incorporate solutions for the discontinued features Revisit and document the upgrade strategy for your deployment scenarios Revisit the fall-back, i.e. 
rollback strategies in case the upgrades fail Because some programming changes are dependent upon the SQL server version, this may need to be done in consultation with the development teams Before any other enhancements are incorporated by the development team, send out the database changes into QA QA strategy should involve a comparison between an environment running the old version of SQL Server against the new one Because minimal application changes have gone in (essential changes for SQL Server version compliance only), this would be possible As an ongoing activity, keep incorporating changes recommended as per the deprecated features list As a DBA, update your coding standards to ensure that the developers are using ANSI compliant code – this code will require a change only if the ANSI standard changes Remember this: Change management is a continuous process. Keep revisiting the product release notes and incorporate recommended changes to stay prepared for the next release of SQL Server. May the power of SQL Server be with you! Links Referenced in this post Breaking changes in SQL Server 2012: Link Discontinued features in SQL Server 2012: Link Get the upgrade advisor from the Microsoft Download Center at: Link Upgrade Advisor page on MSDN: Link Profiler: Review T-SQL code to identify objects no longer supported by Microsoft: Link Upgrading to SQL Server 2012 by Vinod Kumar: Link Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Upgrade
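    One practical companion to the Profiler approach above: SQL Server also counts deprecated-feature usage in the Deprecated Features performance counter object, which is visible through sys.dm_os_performance_counters. The C# sketch below is my own illustration, not part of Nakul's checklist; the connection string is a placeholder, the exact object_name prefix differs for named instances, and the query requires VIEW SERVER STATE permission.

      using System;
      using System.Data.SqlClient;

      // Lists deprecated features that have actually been used since the instance
      // last restarted, using the Deprecated Features performance counters.
      public class DeprecatedFeatureReport
      {
          public static void Main()
          {
              // Placeholder connection string - point it at the instance being assessed.
              const string connectionString =
                  "Data Source=.;Initial Catalog=master;Integrated Security=SSPI";

              const string query = @"
                  SELECT instance_name, cntr_value
                  FROM   sys.dm_os_performance_counters
                  WHERE  object_name LIKE '%Deprecated Features%'
                    AND  cntr_value > 0
                  ORDER BY cntr_value DESC;";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(query, connection))
              {
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          // instance_name is the deprecated feature, cntr_value its usage count.
                          Console.WriteLine("{0} : used {1} time(s) since restart",
                              reader.GetString(0).TrimEnd(), reader.GetInt64(1));
                      }
                  }
              }
          }
      }

    Anything this report surfaces can then be cross-checked against the breaking changes and discontinued features lists linked below before the upgrade is attempted.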

    Read the article

  • Creating an SMF service for mercurial web server

    - by Chris W Beal
    I'm working on a project at the moment, which has a number of contributers. We're managing the project gate (which is stand alone) with mercurial. We want to have an easy way of seeing the changelog, so we can show management what is going on.  Luckily mercurial provides a basic web server which allows you to see the changes, and drill in to change sets. This can be run as a daemon, but as it was running on our build server, every time it was rebooted, someone needed to remember to start the process again. This is of course a classic usage of SMF. Now I'm not an experienced person at writing SMF services, so it took me 1/2 an hour or so to figure it out the first time. But going forward I should know what I'm doing a bit better. I did reference this doc extensively. Taking a step back, the command to start the mercurial web server is $ hg serve -p <port number> -d So we somehow need to get SMF to run that command for us. In the simplest form, SMF services are really made up of two components. The manifest Usually lives in /var/svc/manifest somewhere Can be imported from any location The method Usually live in /lib/svc/method I simply put the script straight in that directory. Not very repeatable, but it worked Can take an argument of start, stop, or refresh Lets start with the manifest. This looks pretty complex, but all it's doing is describing the service name, the dependencies, the start and stop methods, and some properties. The properties can be by instance, that is to say I could have multiple hg serve processes handling different mercurial projects, on different ports simultaneously Here is the manifest I wrote. I stole extensively from the examples in the Documentation. So my manifest looks like this $ cat hg-serve.xml <?xml version="1.0"?> <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1"> <service_bundle type='manifest' name='hg-serve'> <service name='application/network/hg-serve' type='service' version='1'> <dependency name='network' grouping='require_all' restart_on='none' type='service'> <service_fmri value='svc:/milestone/network:default' /> </dependency> <exec_method type='method' name='start' exec='/lib/svc/method/hg-serve %m' timeout_seconds='2' /> <exec_method type='method' name='stop' exec=':kill' timeout_seconds='2'> </exec_method> <instance name='project-gate' enabled='true'> <method_context> <method_credential user='root' group='root' /> </method_context> <property_group name='hg-serve' type='application'> <propval name='path' type='astring' value='/src/project-gate'/> <propval name='port' type='astring' value='9998' /> </property_group> </instance> <stability value='Evolving' /> <template> <common_name> <loctext xml:lang='C'>hg-serve</loctext> </common_name> <documentation> <manpage title='hg' section='1' /> </documentation> </template> </service> </service_bundle> So the only things I had to decide on in this are the service name "application/network/hg-serve" the start and stop methods (more of which later) and the properties. This is the information I need to pass to the start method script. In my case the port I want to start the web server on "9998", and the path to the source gate "/src/project-gate". These can be read in to the start method. So now lets look at the method scripts $ cat /lib/svc/method/hg-serve #!/sbin/sh # # # Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved. # # Standard prolog # . /lib/svc/share/smf_include.sh if [ -z $SMF_FMRI ]; then echo "SMF framework variables are not initialized." 
exit $SMF_EXIT_ERR fi # # Build the command line flags # # Get the port and directory from the SMF properties port=`svcprop -c -p hg-serve/port $SMF_FMRI` dir=`svcprop -c -p hg-serve/path $SMF_FMRI` echo "$1" case "$1" in 'start') cd $dir /usr/bin/hg serve -d -p $port ;; *) echo "Usage: $0 {start|refresh|stop}" exit 1 ;; esac exit $SMF_EXIT_OK This is all pretty self explanatory, we read the port and directory using svcprop, and use those simply to run a command in the start case. We don't need to implement a stop case, as the manifest says to use "exec=':kill'for the stop method. Now all we need to do is import the manifest and start the service, but first verify the manifest # svccfg verify /path/to/hg-serve.xml If that doesn't give an error try importing it # svccfg import /path/to/hg-serve.xml If like me you originally put the hg-serve.xml file in /var/svc/manifest somewhere you'll get an error and told to restart the import service svccfg: Restarting svc:/system/manifest-import The manifest being imported is from a standard location and should be imported with the command : svcadm restart svc:/system/manifest-import # svcadm restart svc:/system/manifest-import and you're nearly done. You can look at the service using svcs -l # svcs -l hg-serve fmri svc:/application/network/hg-serve:project-gate name hg-serve enabled false state disabled next_state none state_time Thu May 31 16:11:47 2012 logfile /var/svc/log/application-network-hg-serve:project-gate.log restarter svc:/system/svc/restarter:default contract_id 15749 manifest /var/svc/manifest/network/hg/hg-serve.xml dependency require_all/none svc:/milestone/network:default (online) And look at the interesting properties # svcprop hg-serve hg-serve/path astring /src/project-gate hg-serve/port astring 9998 ...stuff deleted.... Then simply enable the service and if every things gone right, you can point your browser at http://server:9998 and get a nice graphical log of project activity. # svcadm enable hg-serve # svcs -l hg-serve fmri svc:/application/network/hg-serve:project-gate name hg-serve enabled true state online next_state none state_time Thu May 31 16:18:11 2012 logfile /var/svc/log/application-network-hg-serve:project-gate.log restarter svc:/system/svc/restarter:default contract_id 15858 manifest /var/svc/manifest/network/hg/hg-serve.xml dependency require_all/none svc:/milestone/network:default (online) None of this is rocket science, but a bit fiddly. Hence I thought I'd blog it. It might just be you see this in google and it clicks with you more than one of the many other blogs or how tos about it. Plus I can always refer back to it myself in 3 weeks, when I want to add another project to the server, and I've forgotten how to do it.

    Read the article

  • Workshops, online content show how Oracle infuses simplicity, mobility, extensibility into user experience

    - by mvaughan
    By Kathy Miedema & Misha Vaughan, Oracle Applications User Experience Oracle has made a huge investment into the user experience of its many different software product families, and recent releases showcase big changes and features that aim to promote end user engagement and efficiency by streamlining navigation and simplifying the user interface. But making Oracle’s enterprise software great-looking and usable doesn’t stop when Oracle products go out the door. The Applications User Experience (UX) team recognizes that our customers may need to customize software to fit their work processes. And that’s why we provide tools such as user experience design patterns to help you maintain the Oracle user experience as you tailor your application to fit your business needs. Often, however, customers may need some context around user experience. How has the Oracle user experience been designed and constructed? Why is a good user experience important for users? How does understanding what goes into the user experience benefit the people who purchase the software for users? There’s a short answer to these questions, and you can read about it on Usable Apps. But truly understanding Oracle’s investment and seeing how it applies across product families occasionally requires a deeper dive into the Oracle user experience, especially if you’re an influencer or decision-maker about Oracle products. To help frame these decisions, the Communications & Outreach team has developed several targeted workshops that explore what Oracle means when it talks about user experience, and provides a roadmap into where the Oracle user experience is going. These workshops require non-disclosure agreements, and have been delivered to Oracle sales folks, Oracle partners, Oracle ACE Directors and ACEs, and a few customers. Some of these audience members have been developers or have a technical background; just as many did not. Here’s a breakdown of the kind of training you can get around the Oracle user experience from the OAUX Communications & Outreach team.For Partners: George Papazzian, Principal, Naviscent with Joyce Ohgi, Oracle Oracle Fusion Applications HCM Pre-Sales Seminar:  In concert with Worldwide Alliances  and  Channels under Applications Partner Enablement Director Jonathan Vinoskey’s guidance, the Applications User Experience team delivers a two-day workshop.  Day one focuses on Oracle Fusion Applications HCM and pre-sales strategy, and Day two focuses on positioning and leveraging Oracle’s investment in the Oracle Fusion Applications user experience.  The next workshops will occur on the following dates: December 4-5, 2013 @ Manchester, UK January 29-30, 2014 @ Reston, Virginia February 2014 @ Guadalajara, Mexico (email: Shannon Whiteman) March 11-12, 2014 @ Dubai, United Arab Emirates April 1-2, 2014 @ Chicago, Illinois Partner Advisory Board: A two-day board meeting in the U.S. and U.K. to discuss four main user experience areas for Oracle Fusion Applications: simplicity, visualization & analytics, mobility, & futures. This event is limited to Oracle Diamond Partners, UX bloggers, and key UX influencers and requires legal documentation.  We will be talking about the Oracle applications UX strategy and roadmap. 
Partner Implementation Training on User Interface: How to Build Great-Looking, Usable Apps:  In this two-day, hands-on workshop built around Oracle’s Application Development Framework, learn how to build desktop and mobile user interfaces and mobile user interfaces based on Oracle’s experience with Fusion Applications. This workshop is for partners with a technology background who are looking for ways to tailor Fusion Applications using ADF, or have built their own custom solutions using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs. Nov 5-6, 2013 @ Redwood Shores, CA, USA January 28-29th, 2014 @ Reston, Virginia, USA February 25-26, 2014 @ Guadalajara, Mexico March 9-10, 2014 @ Dubai, United Arab Emirates To register, contact [email protected] Simplified UI Customization & Extensibility:  Pilot workshop:  We will be reviewing the proposed content for communicating the user experience tool kit available with the next release of Oracle Fusion Applications.  Our core focus will be on what toolkit components our system implementors and independent software vendors will need to respond to customer demand, whether they are extending Fusion Applications, or building custom applications, that will need to leverage the simplified UI. Dec 11th, 2013 @ Reading, UK For information: contact [email protected] Private lab tour and demos: Interested in seeing what’s going on in the Apps UX Labs?  If you are headed to the San Francisco Bay Area, let us know. We can arrange a spin through our usability labs at headquarters. OAUX Expo: This open-house forum gives partners a look at what the UX team is working on, and showcases the next-generation user experiences in a demo environment where attendees can see and touch the applications. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For CustomersAngela Johnston, Gozel Aamoth, Teena Singh, and Yen Chan, Oracle Lab tours: See demos of soon-to-be-released products, and take a spin on usability research equipment such as our eye-tracker. Watch this video to get an idea of what you’ll see. Get our newsletter: Learn about newly released products and see where you can meet us at user group conferences. Participate in a feedback session: Join a focus group or customer feedback session to get an early look at user experience designs for the next generation of software, and provide your thoughts on how well it will work. Join the OUAB: The Oracle Usability Advisory Board meets several times a year to discuss trends in the workforce and provide direction on user experience designs. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For Developers (customers, partners, and consultants): Plinio Arbizu, SP Solutions, Richard Bingham, Oracle, Balaji Kamepalli, EiSTechnoogies, Praveen Pillalamarri, EiSTechnologies How to Build Great-Looking, Usable Apps: This workshop is for attendees with a strong technology background who are looking for ways to tailor customer software using ADF. It includes an introduction to UX design patterns and provides tools to build usability-tested UX designs.  See above for dates and times. 
UX design patterns web site: Cut the length of your project down by months. Use these patterns to build out the task flow you need to develop for your users. The patterns have already been usability-tested and represent the best practices that the Oracle UX research team has found in its studies. UX Direct: Use the same methods that Oracle uses to develop its own user experiences. We help you define your users and their needs, and then provide direction on how to tailor the best user experience you can for them. For Oracle Sales Mike Klein, Jeremy Ashley, Brent White, Oracle Contact your local sales person for more information about the Oracle user experience and the training available from the Applications User Experience Communications & Outreach team. See customer-friendly user experience collateral ranging from the new simplified UI in Oracle Fusion Applications Release 7, to E-Business Suite user experience highlights, to Siebel, PeopleSoft, and JD Edwards user experience highlights.   Receive access to the same pre-sales and implementation training we provide to partners. For Oracle Sales only: Oracle-only training on the Oracle Fusion Applications UX Innovation Sales Kit.

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have been recently exploring Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB where I wanted to look at Elasticity and Scalability concepts. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer following simple questions What is Elasticity? What is Scalability? How ACID properties vary from NOSQL Concepts? What are the prevailing problems in the current database system architectures? Why is NuoDB  an innovative and welcome change in database paradigm? Elasticity This word’s original form is used in many different ways and honestly it does do a decent job in holding things together over the years as a person grows and contracts. Within the tech world, and specifically related to software systems (database, application servers), it has come to mean a few things - allow stretching of resources without reaching the breaking point (on demand). What are resources in this context? Resources are the usual suspects – RAM/CPU/IO/Bandwidth in the form of a container (a process or bunch of processes combined as modules). When it is about increasing resources the simplest idea which comes to mind is the addition of another container. Another container means adding a brand new physical node. When it is about adding a new node there are two questions which comes to mind. 1) Can we add another node to our software system? 2) If yes, does adding new node cause downtime for the system? Let us assume we have added new node, let us see what the new needs of the system are when a new node is added. Balancing incoming requests to multiple nodes Synchronization of a shared state across multiple nodes Identification of “downstate” and resolution action to bring it to “upstate” Well, adding a new node has its advantages as well. Here are few of the positive points Throughput can increase nearly horizontally across the node throughout the system Response times of application will increase as in-between layer interactions will be improved Now, Let us put the above concepts in the perspective of a Database. When we mention the term “running out of resources” or “application is bound to resources” the resources can be CPU, Memory or Bandwidth. The regular approach to “gain scalability” in the database is to look around for bottlenecks and increase the bottlenecked resource. When we have memory as a bottleneck we look at the data buffers, locks, query plans or indexes. After a point even this is not enough as there needs to be an efficient way of managing such large workload on a “single machine” across memory and CPU bound (right kind of scheduling)  workload. We next move on to either read/write separation of the workload or functionality-based sharing so that we still have control of the individual. But this requires lots of planning and change in client systems in terms of knowing where to go/update/read and for reporting applications to “aggregate the data” in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without us getting into managing, monitoring and distributing the workload. Scalability In the context of database/applications, scalability means three main things Ability to handle normal loads without pressure E.g. 
X users at the Y utilization of resources (CPU, Memory, Bandwidth) on the Z kind of hardware (4 processor, 32 GB machine with 15000 RPM SATA drives and 1 GHz Network switch) with T throughput Ability to scale up to expected peak load which is greater than normal load with acceptable response times Ability to provide acceptable response times across the system E.g. Response time in S milliseconds (or agreed upon unit of measure) – 90% of the time The Issue – Need of Scale In normal cases one can plan for the load testing to test out normal, peak, and stress scenarios to ensure specific hardware meets the needs. With help from Hardware and Software partners and best practices, bottlenecks can be identified and requisite resources added to the system. Unfortunately this vertical scale is expensive and difficult to achieve and most of the operational people need the ability to scale horizontally. This helps in getting better throughput as there are physical limits in terms of adding resources (Memory, CPU, Bandwidth and Storage) indefinitely. Today we have different options to achieve scalability: Read & Write Separation The idea here is to do actual writes to one store and configure slaves receiving the latest data with acceptable delays. Slaves can be used for balancing out reads. We can also explore functional separation or sharing as well. We can separate data operations by a specific identifier (e.g. region, year, month) and consolidate it for reporting purposes. For functional separation the major disadvantage is when schema changes or workload pattern changes. As the requirement grows one still needs to deal with scale need in manual ways by providing an abstraction in the middle tier code. Using NOSQL solutions The idea is to flatten out the structures in general to keep all values which are retrieved together at the same store and provide flexible schema. The issue with the stores is that they are compromising on mostly consistency (no ACID guarantees) and one has to use NON-SQL dialect to work with the store. The other major issue is about education with NOSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve in simple SQL manner and learn other skill sets? Or for that matter give up on ACID guarantee and start dealing with consistency issues? Hybrid Deployment – Mac, Linux, Cloud, and Windows One of the challenges today that we see across On-premise vs Cloud infrastructure is a difference in abilities. Take for example SQL Azure – it is wonderful in its concepts of throttling (as it is shared deployment) of resources and ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise of the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today’s world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues. An Ideal Database Ability List A system which allows a linear scale of the system (increase in throughput with reasonable response time) with the addition of resources A system which does not compromise on the ACID guarantees and require developers to learn new paradigms A system which does not force fit a new way interacting with database by learning Non-SQL dialect A system which does not force fit its mechanisms for providing availability across its various modules. 
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
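    To make the read and write separation option above a little more concrete, here is a deliberately simple sketch of my own (it is not NuoDB code, and the server names in the connection strings are placeholders): writes go to the primary, reads are spread round-robin across replicas, and the middle tier has to live with replication lag on reads.

      using System;
      using System.Data.SqlClient;
      using System.Threading;

      // Minimal read/write separation router: the abstraction the article says
      // you otherwise end up hand-rolling in the middle tier.
      public class ReadWriteRouter
      {
          private readonly string _primary;
          private readonly string[] _replicas;
          private int _next;

          public ReadWriteRouter(string primary, params string[] replicas)
          {
              _primary = primary;
              _replicas = replicas;
          }

          public SqlConnection OpenForWrite()
          {
              // All writes hit the primary so there is a single source of truth.
              var connection = new SqlConnection(_primary);
              connection.Open();
              return connection;
          }

          public SqlConnection OpenForRead()
          {
              // Round-robin across replicas; reads may be slightly stale.
              int index = (int)((uint)Interlocked.Increment(ref _next) % (uint)_replicas.Length);
              var connection = new SqlConnection(_replicas[index]);
              connection.Open();
              return connection;
          }
      }

    A middle tier would call OpenForWrite for updates and OpenForRead for reporting queries, which is exactly the kind of plumbing an elastically scalable database is meant to take off your hands.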

    Read the article

  • A Good Developer is So Hard to Find

    - by James Michael Hare
    Let me start out by saying I want to damn the writers of the Toughest Developer Puzzle Ever – 2. It is eating every last shred of my free time! But as I've been churning through each puzzle and marvelling at the brain teasers and trivia within, I began to think about interviewing developers and why it seems to be so hard to find good ones.  The problem is, it seems like no matter how hard we try to find the perfect way to separate the chaff from the wheat, inevitably someone will get hired who falls far short of expectations or someone will get passed over for missing a piece of trivia or a tricky brain teaser that could have been an excellent team member.   In shops that are primarily software-producing businesses or other heavily IT-oriented businesses (Microsoft, Amazon, etc) there often exists a much tighter bond between HR and the hiring development staff because development is their life-blood. Unfortunately, many of us work in places where IT is viewed as a cost or just a means to an end. In these shops, too often, HR and development staff may work against each other due to differences in opinion as to what a good developer is or what one is worth.  It seems that if you ask two different people what makes a good developer, often you will get three different opinions.   With the exception of those shops that are purely development-centric (you guys have it much easier!), most other shops have management who have very little knowledge about the development process.  Their view can often be that development is simply a skill that one learns and then once aquired, that developer can produce widgets as good as the next like workers on an assembly-line floor.  On the other side, you have many developers that feel that software development is an art unto itself and that the ability to create the most pure design or know the most obscure of keywords or write the shortest-possible obfuscated piece of code is a good coder.  So is it a skill?  An Art?  Or something entirely in between?   Saying that software is merely a skill and one just needs to learn the syntax and tools would be akin to saying anyone who knows English and can use Word can write a 300 page book that is accurate, meaningful, and stays true to the point.  This just isn't so.  It takes more than mere skill to take words and form a sentence, join those sentences into paragraphs, and those paragraphs into a document.  I've interviewed candidates who could answer obscure syntax and keyword questions and once they were hired could not code effectively at all.  So development must be more than a skill.   But on the other end, we have art.  Is development an art?  Is our end result to produce art?  I can marvel at a piece of code -- see it as concise and beautiful -- and yet that code most perform some stated function with accuracy and efficiency and maintainability.  None of these three things have anything to do with art, per se.  Art is beauty for its own sake and is a wonderful thing.  But if you apply that same though to development it just doesn't hold.  I've had developers tell me that all that matters is the end result and how you code it is entirely part of the art and I couldn't disagree more.  Yes, the end result, the accuracy, is the prime criteria to be met.  But if code is not maintainable and efficient, it would be just as useless as a beautiful car that breaks down once a week or that gets 2 miles to the gallon.  
Yes, it may work in that it moves you from point A to point B and is pretty as hell, but if it can't be maintained or is not efficient, it's not a good solution.  So development must be something less than art.   In the end, I think I feel like development is a matter of craftsmanship.  We use our tools and we use our skills and set about to construct something that satisfies a purpose and yet is also elegant and efficient.  There is skill involved, and there is an art, but really it boils down to being able to craft code.  Crafting code is far more than writing code.  Anyone can write code if they know the syntax, but so few people can actually craft code that solves a purpose and craft it well.  So this is what I want to find, I want to find code craftsman!  But how?   I used to ask coding-trivia questions a long time ago and many people still fall back on this.  The thought is that if you ask the candidate some piece of coding trivia and they know the answer it must follow that they can craft good code.  For example:   What C++ keyword can be applied to a class/struct field to allow it to be changed even from a const-instance of that class/struct?  (answer: mutable)   So what do we prove if a candidate can answer this?  Only that they know what mutable means.  One would hope that this would infer that they'd know how to use it, and more importantly when and if it should ever be used!  But it rarely does!  The problem with triva questions is that you will either: Approve a really good developer who knows what some obscure keyword is (good) Reject a really good developer who never needed to use that keyword or is too inexperienced to know how to use it (bad) Approve a really bad developer who googled "C++ Interview Questions" and studied like hell but can't craft (very bad) Many HR departments love these kind of tests because they are short and easy to defend if a legal issue arrises on hiring decisions.  After all it's easy to say a person wasn't hired because they scored 30 out of 100 on some trivia test.  But unfortunately, you've eliminated a large part of your potential developer pool and possibly hired a few duds.  There are times I've hired candidates who knew every trivia question I could throw out them and couldn't craft.  And then there are times I've interviewed candidates who failed all my trivia but who I took a chance on who were my best finds ever.    So if not trivia, then what?  Brain teasers?  The thought is, these type of questions measure the thinking power of a candidate.  The problem is, once again, you will either: Approve a good candidate who has never heard the problem and can solve it (good) Reject a good candidate who just happens not to see the "catch" because they're nervous or it may be really obscure (bad) Approve a candidate who has studied enough interview brain teasers (once again, you can google em) to recognize the "catch" or knows the answer already (bad). Once again, you're eliminating good candidates and possibly accepting bad candidates.  In these cases, I think testing someone with brain teasers only tests their ability to answer brain teasers, not the ability to craft code. So how do we measure someone's ability to craft code?  Here's a novel idea: have them code!  Give them a computer and a compiler, or a whiteboard and a pen, or paper and pencil and have them construct a piece of code.  It just makes sense that if we're going to hire someone to code we should actually watch them code.  
When they're done, we can judge them on several criteria: Correctness - does the candidate's solution accurately solve the problem proposed? Accuracy - is the candidate's solution reasonably syntactically correct? Efficiency - did the candidate write or use the more efficient data structures or algorithms for the job? Maintainability - was the candidate's code free of obfuscation and clever tricks that diminish readability? Persona - are they eager and willing or aloof and egotistical?  Will they work well within your team? It may sound simple, or it may sound crazy, but when I'm looking to hire a developer, I want to see them actually develop well-crafted code.

    Read the article

  • In Case You Weren’t There: Blogwell NYC

    - by Mike Stiles
    Your roving reporter roved out to another one of Socialmedia.org’s fantastic Blogwell events, this time in NYC. As Central Park and incredible weather beckoned, some of the biggest brand names in the world gathered to talk about how they’re incorporating social into marketing and CRM, as well as extending social across their entire organizations internally. Below we present a collection of the live tweets from many of the key sessions GE @generalelectricJon Lombardo, Leader of Social Media COE How GE builds and extends emotional connections with consumers around health and reaps the benefits of increased brand equity in the process. GE has a social platform around Healthyimagination to create better health for people. If you and a friend are trying to get healthy together, you’ll do better. Health is inherently. Get health challenges via Facebook and share with friends to achieve goals together. They’re creating an emotional connection around the health context. You don’t influence people at large. Your sphere of real influence is around 5-10 people. They find relevant conversations about health on Twitter and engage sounding like a friend, not a brand. Why would people share on behalf of a brand? Because you tapped into an activity and emotion they’re already having. To create better habits in health, GE gave away inexpensive, relevant gifts related to their goals. Create the context, give the relevant gift, get social acknowledgment for giving it. What you get when you get acknowledgment for your engagement and gift is user generated microcontent. GE got 12,000 unique users engaged and 1400 organic posts with the healthy gift campaign. The Dow Chemical Company @DowChemicalAbby Klanecky, Director of Digital & Social Media Learn how Dow Chemical is finding, training, and empowering their scientists to be their storytellers in social media. There are 1m jobs coming open in science. Only 200k are qualified for them. Dow Chemical wanted to use social to attract and talk to scientists. Dow Chemical decided to use real scientists as their storytellers. Scientists are incredibly passionate, the key ingredient of a great storyteller. Step 1 was getting scientists to focus on a few platforms, blog, Twitter, LinkedIn. Dow Chemical social flow is Core Digital Team - #CMs – ambassadors – advocates. The scientists were trained in social etiquette via practice scenarios. It’s not just about sales. It’s about growing influence and the business. Dow Chemical trained about 100 scientists, 55 are active and there’s a waiting list for the next sessions. In person social training produced faster results and better participation. Sometimes you have to tell pieces of the story instead of selling your execs on the whole vision. Social Media Ethics Briefing: Staying Out of TroubleAndy Sernovitz, CEO @SocialMediaOrg How do we get people to share our message for us? We have to have their trust. The difference between being honest and being sleazy is disclosure. Disclosure does not hurt the effectiveness of your marketing. 
No one will get mad if you tell them up front you’re a paid spokesperson for a company. It’s a legal requirement by the FTC, it’s the law, to disclose if you’re being paid for an endorsement. Require disclosure and truthfulness in all your social media outreach. Don’t lie to people. Monitor the conversation and correct misstatements. Create social media policies and training programs. If you want to stay safe, never pay cash for social media. Money changes everything. As soon as you pay, it’s not social media, it’s advertising. Disclosure, to the feds, means clear, conspicuous, and understandable to the average reader. This phrase will keep you in the clear, “I work for ___ and this is my personal opinion.” Who are you? Were you paid? Are you giving an honest opinion based on a real experience? You as a brand are responsible for what an agency or employee or contactor does in your behalf. SocialMedia.org makes available a Disclosure Best Practices Toolkit. Socialmedia.org/disclosure. The point is to not ethically mess up and taint social media as happened to e-mail. Not only is the FTC cracking down, so is Google and Facebook. Visa @VisaNewsLucas Mast, Senior Business Leader, Global Corporate Social Media Visa built a mobile studio for the Olympics for execs and athletes. They wanted to do postcard style real time coverage of Visa’s Olympics sponsorships, and on a shoestring. Challenges included Olympic rules, difficulty getting interviews, time zone trouble, and resourcing. Another problem was they got bogged down with their own internal approval processes. Despite all the restrictions, they created and published a variety of and fair amount of content. They amassed 1000+ views of videos posted to the Visa Communication YouTube channel. Less corporate content yields more interest from media outlets and bloggers. They did real world video demos of how their products work in the field vs. an exec doing a demo in a studio. Don’t make exec interview videos dull and corporate. Keep answers short, shoot it in an interesting place, do takes until they’re comfortable and natural. Not everything will work. Not everything will get a retweet. But like the lottery, you can’t win if you don’t play. Promoting content is as important as creating it. McGraw-Hill Companies @McGrawHillCosPatrick Durando, Senior Director of Global New Media McGraw-Hill has 26,000 employees. McGraw-Hill created a social intranet called Buzz. Intranets create operational efficiency, help product dev, facilitate crowdsourcing, and breaks down geo silos. Intranets help with talent development, acquisition, retention. They replaced the corporate directory with their own version of LinkedIn. The company intranet has really cut down on the use of email. Long email threats become organized, permanent social discussions. The intranet is particularly useful in HR for researching and getting answers surrounding benefits and policies. Using a profile on your company intranet can establish and promote your internal professional brand. If you’re going to make an intranet, it has to look great, work great, and employees are going have to want to go there. You can’t order them to like it. 

    Read the article

  • CodePlex Daily Summary for Tuesday, November 22, 2011

    CodePlex Daily Summary for Tuesday, November 22, 2011Popular ReleasesDeveloper Team Article System Management: DTASM v1.3: ?? ??? ???? 3 ????? ???? ???? ????? ??? : - ????? ?????? ????? ???? ?? ??? ???? ????? ?? ??? ? ?? ???? ?????? ???? ?? ???? ????? ?? . - ??? ?? ???? ????? ???? ????? ???? ???? ?? ????? , ?????? ????? ????? ?? ??? . - ??? ??????? ??? ??? ???? ?? ????? ????? ????? .VideoLan DotNet for WinForm, WPF & Silverlight 5: VideoLan DotNet for WinForm, WPF, SL5 - 2011.11.22: The new version contains Silverlight 5 library: Vlc.DotNet.Silverlight. A sample could be tested here The new version add and correct many features : Correction : Reinitialize some variables Deprecate : Logging API, since VLC 1.2 (08/20/2011) Add subitem in LocationMedia (for Youtube videos, ...) Update Wpf sample to use Youtube videos Many others correctionsSharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323) FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue # 447) The FBA Pack will now work in a zone that does not have FBA enabled (Another zone must have FBA enabled, and the zone must contain the me...SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 6: 1. improved support for schema 2. added find reference when right click on object list 3. added object rename supportBugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework and is a requirement. Because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...Anno 2070 Assistant: v0.1.0 (STABLE): Version 0.1.0 Features Production Chains Eco Production Chains (Complete) Tycoon Production Chains (Disabled - Incomplete) Tech Production Chains (Disabled - Incomplete) Supply (Disabled - Incomplete) Calculator (Disabled - Incomplete) Building Layouts Eco Building Layouts (Complete) Tycoon Building Layouts (Disabled - Incomplete) Tech Building Layouts (Disabled - Incomplete) Credits (Complete)Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloadedVsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 30 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 30 (beta)New: Support for TortoiseSVN 1.7 added. (the download contains both setups, for TortoiseSVN 1.6 and 1.7) New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows to group items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: The installer didn...nopCommerce. 
Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role) To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).WPF Converters: WPF Converters V1.2.0.0: support for enumerations, value types, and reference types in the expression converter's equality operators the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062) StyleCop conformance (more or less)Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default Change - KeyValurPairConverter no longer cares about the order of the key and value properties Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone Fix - Fixed boolean values sometimes being capitalized when converting to XML Fix - Fixed error when deserializing ConcurrentDictionary Fix - Fixed serializing some Uris returning the incorrect value Fix - Fixed occasional error when...Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Replaced 'Rebuild' with 'Refresh' throughout entire code. Rebuild will now be known as Refresh. mc_com.exe has been fully updated TV Show Resolutions... Resolved issue #206 - having to hit save twice when updating runtime manually Shrunk cache size and lowered loading times f...Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblyAJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release Version 51116November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.36: Fix for issue #16908: string literals containing ASP.NET replacement syntax fail if the ASP.NET code contains the same character as the string literal delimiter. 
Also, we shouldn't be changing the delimiter for those literals or combining them with other literals; the developer may have specifically chosen the delimiter used because of possible content inserted by ASP.NET code. This logic is normally off; turn it on via the -aspnet command-line flag (or the Code.Settings.AllowEmbeddedAspNetBl...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.7: Reworked API to be more consistent. See Supported formats table. Added some more helper methods - e.g. OpenEntryStream (RarArchive/RarReader does not support this) Fixed up testsSilverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new ListPicker once again works in a ScrollViewer LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored to be the public type LongListSelectorItem to provide users better access to the values in selection changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others) There is no longer a GestureListener dependency with the C...New ProjectsAndrecorder: Andrecorder???Android???????,???????????????????,????????????????,????????!Android Tree Bulletin: Android bulletin reader in tree format.Bài t?p l?p môn HCI: Name: Ph?n m?m qu?n lý thu h?c phí tru?ng d?i h?c Công Nghi?p Hà N?i Basic Grid Collision sample in XNA: This project shows how to implement a basic grid collision in XNA. The project uses the XNA 4.0 framework and C#Club Manager: Club Manager is a web site for managing sport clubs / teams.Create email with encrypt text implement TEA encryption and Web Service: RahaTEA Mail is an application to send messages in secret. These applications implement TEA encryption and web serviceCRM 2011 Layers: Several .net layers to customize CRM 2011CTEF: China Tomorrow Education Foundation websitedns?????: ??c#???dns?????。????????,???????,??????。EAF: Extensibility Application FrameworkEnergy SBA: In order to compete with large companies for Federal contracts, small business need information. This application seeks to show standard methods of using remote APIs to integrate information into a Metro interface using services provided by the Small Business Administration (SBA)EPiOptimiser - Scan your EPiServer configuration to optimise start up times: EPiScanner scans your EPiServer configuration to optimise start ups by generating a recommended exclude list of assemblies to include in EPiServer framework config. It can be used on command line, as a custom build task or integrated into Visual Studio as an external tool.FreeIDS - Free Intrusion Detection System: Don't want someone to use your computer? Don't want to use a system password? Want to see when someone accessed your computer? Time/Date? 
FreeIDS is it!FtpServerAdministrator: FtpServerAdministrator makes it easier to administer some ftp server by code, although it can only be used for FileZilla server now. It's developed in C#.GreenPoint Online: Tools and components that help you customize an Office 365 / SharePoint Online Environment.HCC C# Workshop: This project contains the code for the exercises of the HCC C# WorkshopKsigDo - Real time view model syncing across user screens: KsigDo show real time view model syncing across user screens - using ASP.NET, Knockout and SignalR. Real time data syncing across user views *was* hard, especially in web applications. Most of the time, the second user needs to refresh the screen, to see the changes made by first user, or we need to implement some long polling that fetches the data and does the update manually. Now, with SignalR and Knockout, ASP.NET developers can take advantage of view model syncing across users, that...lineseven: ???????????????。Mail Size Labeler for GMail: A small utility that labels large e-mails on your gmail account. This utility scan you gmail account, and adds labels to large e-mail so you can clean your mailbox and free space. The labels this utility adds are: Size 1M-2M Size 2M-5M Size 5M-10M Size 10M-15M Size 15M plus Note: a single e-mail thread may get multiple labels if different e-mails of the thread fit different filters.MathService: Complex digits, standart class extentions etc.MyGameProject: gamesMySQL Connect 2 ASP.NET: Example project to show how to connect MySQL database to ASP.NET web project. IDE: Visual Studio 2010 Pro Programming language: C# Detailed information in the article here: http://epavlov.net/blog/2011/11/13/connect-to-mysql-in-visual-studio/ nl: Nutri Leaf Devomr.event.js: Simple js event injecterPastebin4DotNet: This project is an example of how to consume an API, in this case I consummed the Pastebin API.Pomelo: Pomelo is a website example.QuickDevFrameWork: ????????,??,??,????,ioc ?????postsharp?aopReadable Passphrase Generator: Generates passphrases which are (mostly) grammatically correct but nonsensical. These are easy to remember but difficult to guess (for humans or computers). Developed in C# with a KeePass plugin, console app and public API.Rosyama.ru for Windows Phone 7: ?????????? Windows Phone 7 ??? ???????? ???????? ?? ???? rosyama.ru. ?????????? ??????? ?????????? ? ???????? ????????? ???????. SimpleBatch: As the name suggests, this is a simple batch framework allowing you to define batch jobs in XML format. Thus far, contains a basic selection of processors such as the following; File Email SQL (SQL Server Client) SharePoint Document Library Custom ProcessorSite de Notícias: Projeto de faculdade que consiste na criação de um site de notícias.SPWikiProvisioning: Create update and delete SharePoint wiki pages using feature activation and deactivation handlers.SVN Automated Control With C#: I Created this libaray because I need to control Tortoise SVN automactically with out an interface for my own build server and could not find any resuilts on google to achive this task so I went about creating this libaray which dos most of the task's that I needed. I round that you could control SVN by command line so using that as my basic idear I went about coding the most common commands for SVN most of the commads are done but not all. 
if you like this libaray then please use it we...TremplinCMS: TremplinCMS is a CMS framework for ASP .NET 4.vlu0206sms: SMSMaker by team0206 developingWCF DataService RequestStream Access on webInvoke HTTP POST: This library provides access to the message body request stream of a WCF Data Service (formerly ADO.NET Data Service), which is not possible with the original WCF Data Service class. You are enabled passing data (e.g. Json, files) via HTTP POST to the request body. It uses the operation context (DbContext) provided by the DataService<T> class to get access to the resquest stream.WebOS: Welcome to join us to build our os projectWp7StarterDantas: Iniciando com Wp7WpfCollaborative3D: WpfCollaborative3DXNA Content Preprocessor: The XNA Content Preprocessor allows you to compile all of your XNA assets outside of your normal XNA project. This means more time building your game or app instead of your content.

    Read the article

  • Massive Silverlight Giveaway! DevExpress, Syncfusion, Crypto Obfuscator and SL Spy!

    - by mbcrump
    Oh my, have we grown! Maybe I should change the name to Multiple Silverlight Giveaways. So far, my Silverlight giveaways have been such a success that I’m going to be able to give away more than one Silverlight product every month. Last month, we gave away 3 great products. 1) ComponentOne Silverlight Controls 2)  ComponentOne XAP Optimizer (with obfuscation) and 3) Silverlight Spy. This month, we will give away 4 great Silverlight products and have 4 different winners. This way the Silverlight community can grow with more than just one person winning all the prizes. This month we will be giving away: DevExpress Silverlight Controls – Over 50+ Silverlight Controls Syncfusion User Interface Edition - Create stunning line of business silverlight applications with a wide range of components including a high performance grid, docking manager, chart, gauge, scheduler and much more. Crypto Obfuscator – Works for all .NET including Silverlight/Windows Phone 7. Silverlight Spy – provides a license EVERY month for this giveaway. ----------------------------------------------------------------------------------------------------------------------------------------------------------- Win a FREE developer’s license of one of the products listed above! 4 winners will be announced on April 1st, 2011! To be entered into the contest do the following things: Subscribe to my feed. – Use Google Reader, email or whatever is best for you.  Leave a comment below with a valid email account (I WILL NOT share this info with anyone.) Retweet the following : I just entered to win free #Silverlight controls from @mbcrump . Register here: http://mcrump.me/fTSmB8 ! Don’t change the URL because this will allow me to track the users that Tweet this page. Don’t forget to visit each of the vendors sites because they made this possible. MichaelCrump.Net provides Silverlight Giveaways every month. You can also see the latest giveaway by bookmarking http://giveaways.michaelcrump.net . ---------------------------------------------------------------------------------------------------------------------------------------------------------- DevExpress Silverlight Controls Let’s take a quick look at some of the software that is provided in this giveaway. Before we get started with the Silverlight Controls, here is a couple of links to bookmark for the DevExpress Silverlight Controls: The Live Demos of the Silverlight Controls is located here. Great Video Tutorials of the Silverlight Controls are here. One thing that I liked about the DevExpress is how easy it was to find demos of each control. After you install the controls the following Program Group appears complete with “demos” that include full-source.   So, the first question that you may ask is, “What is included?” Here is the official list below. I wanted to show several of the controls that I think developers will use the most. The Book – Very rich animation between switching pages. Very easy to add your own images and custom text. The Menu – This is another control that just looked great. You can easily add images to the menu items with a few lines of XAML. The Window / Dialog Box – You can use this control to make a very beautiful “Wizard” to help your users navigate between pages. This is useful in setup or installation. Calculator – This would be useful for any type of Banking app. Also a first that I’ve seen from a 3rd party Control company. DatePicker – This controls feels a lot smoother than the one provided by Microsoft. 
It also provides the ability to “Clear” the selection. Overall the DevExpress Silverlight Controls feature a lot of quality controls that you should check out. You can go ahead and download a trial version of it right now by clicking here. If you win the contest you can simply enter your registration key and continue using the product without reinstalling. Syncfusion User Interface Edition Before we get started with the Syncfusion User Interface Edition, here is a couple of links to bookmark. The Live Demos can be found here. You can download a demo of it now at http://www.syncfusion.com/downloads/evalstart. After you install the Syncfusion, you can view the dashboard to run locally installed samples. You may also download the documentation to your local machine if needed. Since the name of the package is “User Interface Edition”, I decided to share several samples that struck me as “awesome”. Dashboard Gauges – I was very impressed with the various gauges they have included. The digital clock also looks very impressive. Diagram – The diagrams are also very easy to build. In the sample project below you can drag/drop the shapes onto the content pane. More complex lines like the Bezier lines are also easy to create using Syncfusion. Scheduling – Another strong component was the Scheduling with built-in support for Themes. Tools – If all of that wasn’t enough, it also comes with a nice pack of essential tools. Syncfusion has a nice variety of Silverlight Controls that you should check out. You can go ahead and download a trial version of it right now by clicking here. Crypto Obfuscator The following feature set is what is important to me in an Obfuscator since I am a Silverlight/WP7 Developer: And thankfully this is what you get in Crypto Obfuscator. You can download a trial version right now if you want to go ahead and play with it. Let’s spend a few moments taking a look at the application. After you have installed Crypto Obfuscator you will see the following screen: After you click on Assemblies you have the option to add your .XAP file in: I went ahead and loaded my .xap file from a Silverlight Application. At this point, you can simply save your project and hit “Obfuscate” and your done. You don’t have to mess with any of the other settings if you don’t want too. Of course, you can change the settings and add obfuscation rules, watermarks and signing if you wish.  After Obfuscation, it looks like this in .NET Reflector: I was trying to browse through methods and it actually crashed Reflector. This confirms the level of protection the obfuscator is providing. If this were a commercial application that my team built, I would have a huge smile on my face right now. Crypto Obfuscator is a great product and I hope you will spend the time learning more about it. Silverlight Spy Silverlight Spy is a runtime inspector tool that will tell you pretty much everything that is going on with the application. Basically, you give it a URL that contains a Silverlight application and you can explore the element tree, events, xaml and so much more. This has already been reviewed on MichaelCrump.net. _________________________________________________________________________________________ Thanks for reading and don’t forget to leave a comment below in order to win one of the four prizes available! Subscribe to my feed

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric 2D game with moderate-scale multiplayer, approximately 20-30 players connected at once to a persistent server. I've had some difficulty getting a good movement prediction implementation in place. Physics/Movement The game doesn't have a true physics implementation, but uses the basic principles to implement movement. Rather than continually polling input, state changes (ie/ mouse down/up/move events) are used to change the state of the character entity the player is controlling. The player's direction (ie/ north-east) is combined with a constant speed and turned into a true 3D vector - the entity's velocity. In the main game loop, "Update" is called before "Draw". The update logic triggers a "physics update task" that tracks all entities with a non-zero velocity uses very basic integration to change the entities position. For example: entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds) (where "Seconds" is a floating point value, but the same approach would work for millisecond integer values). The key point is that no interpolation is used for movement - the rudimentary physics engine has no concept of a "previous state" or "current state", only a position and velocity. State Change and Update Packets When the velocity of the character entity the player is controlling changes, a "move avatar" packet is sent to the server containing the entity's action type (stand, walk, run), direction (north-east), and current position. This is different from how 3D first person games work. In a 3D game the velocity (direction) can change frame to frame as the player moves around. Sending every state change would effectively transmit a packet per frame, which would be too expensive. Instead, 3D games seem to ignore state changes and send "state update" packets on a fixed interval - say, every 80-150ms. Since speed and direction updates occur much less frequently in my game, I can get away with sending every state change. Although all of the physics simulations occur at the same speed and are deterministic, latency is still an issue. For that reason, I send out routine position update packets (similar to a 3D game) but much less frequently - right now every 250ms, but I suspect with good prediction I can easily boost it towards 500ms. The biggest problem is that I've now deviated from the norm - all other documentation, guides, and samples online send routine updates and interpolate between the two states. It seems incompatible with my architecture, and I need to come up with a better movement prediction algorithm that is closer to a (very basic) "networked physics" architecture. The server then receives the packet and determines the players speed from it's movement type based on a script (Is the player able to run? Get the player's running speed). Once it has the speed, it combines it with the direction to get a vector - the entity's velocity. Some cheat detection and basic validation occurs, and the entity on the server side is updated with the current velocity, direction, and position. Basic throttling is also performed to prevent players from flooding the server with movement requests. After updating its own entity, the server broadcasts an "avatar position update" packet to all other players within range. The position update packet is used to update the client side physics simulations (world state) of the remote clients and perform prediction and lag compensation. 
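    Before getting into prediction, the update step described above is simple enough to show in code. The following is a minimal sketch only, not the game's actual engine: the Entity and Vector3 types, their Scale/Add members, and the task class name are hypothetical stand-ins, but the integration is the same basic explicit step the post describes (position += velocity * elapsed seconds, no interpolation, no stored previous state).

```csharp
using System.Collections.Generic;

// Hypothetical stand-ins for the post's engine types; names and members are assumptions.
public struct Vector3
{
    public float X, Y, Z;
    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }
    public Vector3 Scale(float s) => new Vector3(X * s, Y * s, Z * s);
    public Vector3 Add(Vector3 other) => new Vector3(X + other.X, Y + other.Y, Z + other.Z);
    public bool IsZero => X == 0f && Y == 0f && Z == 0f;
}

public class Entity
{
    public Vector3 Position;   // current position only; no "previous state" is kept
    public Vector3 Velocity;   // direction * constant speed, changed only on input state changes
}

public class PhysicsUpdateTask
{
    // Basic explicit integration, triggered from Update() before Draw().
    // Entities with zero velocity are skipped; everything else advances by v * dt.
    public void Update(IEnumerable<Entity> entities, float elapsedSeconds)
    {
        foreach (var entity in entities)
        {
            if (entity.Velocity.IsZero) continue;
            entity.Position = entity.Position.Add(entity.Velocity.Scale(elapsedSeconds));
        }
    }
}
```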
Prediction and Lag Compensation As mentioned above, clients are authoritative for their own position. Except in cases of cheating or anomalies, the client's avatar will never be repositioned by the server. No extrapolation ("move now and correct later") is required for the client's avatar - what the player sees is correct. However, some sort of extrapolation or interpolation is required for all remote entities that are moving. Some sort of prediction and/or lag-compensation is clearly required within the client's local simulation / physics engine. Problems I've been struggling with various algorithms, and have a number of questions and problems: Should I be extrapolating, interpolating, or both? My "gut feeling" is that I should be using pure extrapolation based on velocity. State change is received by the client, client computes a "predicted" velocity that compensates for lag, and the regular physics system does the rest. However, it feels at odds to all other sample code and articles - they all seem to store a number of states and perform interpolation without a physics engine. When a packet arrives, I've tried interpolating the packet's position with the packet's velocity over a fixed time period (say, 200ms). I then take the difference between the interpolated position and the current "error" position to compute a new vector and place that on the entity instead of the velocity that was sent. However, the assumption is that another packet will arrive in that time interval, and it's incredibly difficult to "guess" when the next packet will arrive - especially since they don't all arrive on fixed intervals (ie/ state changes as well). Is the concept fundamentally flawed, or is it correct but needs some fixes / adjustments? What happens when a remote player stops? I can immediately stop the entity, but it will be positioned in the "wrong" spot until it moves again. If I estimate a vector or try to interpolate, I have an issue because I don't store the previous state - the physics engine has no way to say "you need to stop after you reach position X". It simply understands a velocity, nothing more complex. I'm reluctant to add the "packet movement state" information to the entities or physics engine, since it violates basic design principles and bleeds network code across the rest of the game engine. What should happen when entities collide? There are three scenarios - the controlling player collides locally, two entities collide on the server during a position update, or a remote entity update collides on the local client. In all cases I'm uncertain how to handle the collision - aside from cheating, both states are "correct" but at different time periods. In the case of a remote entity it doesn't make sense to draw it walking through a wall, so I perform collision detection on the local client and cause it to "stop". Based on point #2 above, I might compute a "corrected vector" that continually tries to move the entity "through the wall" which will never succeed - the remote avatar is stuck there until the error gets too high and it "snaps" into position. How do games work around this?
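    One family of answers to the questions above is dead reckoning with smooth error correction: extrapolate the last reported state forward to compensate for latency, then steer the locally simulated entity toward that projection over a short convergence window instead of snapping, and snap only when the error is too large. The sketch below is a hedged illustration of that idea, not the post's implementation: it reuses the hypothetical Entity and Vector3 types from the previous snippet, and the convergence time and snap threshold are assumed tuning values.

```csharp
using System;

public class RemoteEntityPredictor
{
    private const float ConvergenceSeconds = 0.2f;   // assumed: time over which visible error is absorbed
    private const float SnapDistance = 4f;           // assumed: world units beyond which we just teleport

    // Called when an "avatar position update" packet arrives for a remote entity.
    public void OnPositionUpdate(Entity remote, Vector3 reportedPosition,
                                 Vector3 reportedVelocity, float estimatedLatencySeconds)
    {
        // 1. Lag compensation: project the reported state forward to "now".
        Vector3 projected = reportedPosition.Add(reportedVelocity.Scale(estimatedLatencySeconds));

        // 2. Where the remote entity should be once the correction window has elapsed.
        Vector3 target = projected.Add(reportedVelocity.Scale(ConvergenceSeconds));
        Vector3 toTarget = new Vector3(target.X - remote.Position.X,
                                       target.Y - remote.Position.Y,
                                       target.Z - remote.Position.Z);
        float error = (float)Math.Sqrt(toTarget.X * toTarget.X +
                                       toTarget.Y * toTarget.Y +
                                       toTarget.Z * toTarget.Z);

        if (error > SnapDistance)
        {
            // 3. Too far out (teleport, heavy packet loss): give up on smoothing and snap.
            remote.Position = projected;
            remote.Velocity = reportedVelocity;
            return;
        }

        // 4. Otherwise steer toward the target; the ordinary physics update does the rest.
        //    After ConvergenceSeconds the caller should restore remote.Velocity to
        //    reportedVelocity (or to zero if the packet reported a stop).
        remote.Velocity = toTarget.Scale(1f / ConvergenceSeconds);
    }
}
```

    The appeal of this approach for the architecture described above is that the rest of the engine still only sees a position and a velocity; the network layer never has to leak "packet state" into the physics code.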

    Read the article

  • Thoughts on the Nomination Committee Campaign 2014

    - by Testas
    Congratulations to Erin, Andy and Allen on making the Nomination Committee for 2014. As Mark Broadbent (@retracement) stated in his tweet, there’s a great set of individuals for the Nom Com, and I could not agree more. I know Erin and Allen, and I know how much value they will bring to the process. I don’t know Andy as well, but I am sure he will do a great job and I hope I can meet him at PASS soon. The final candidate appointed by the PASS board is Rick Bolesta, who brings a wealth of experience to the process. I also want to take the opportunity to thank all who have voted. Not just for me, but for all the candidates during the election. Your contribution is greatly appreciated. Would I apply for the Nom Com again?  Yes I would. My first election experience has been a learning experience in itself. So I accept the result and look forward to applying next year. Moving on from this, I do want to express my opinion about the lack of international representation in the election process. One of the tweets that I saw after the result was from Adam Machanic (@AdamMachanic) who commented on the lack of international members on the Nom Com. If truth be told, I was disappointed – when the candidate list was released -- that for the second time in recent elections there was a lack of international candidates on the candidate list. It feels that only Brits and Americans partake in such elections. This is a real shame, and I can’t help thinking why this is the case. Hugo Kornelis (@Hugo_Kornelis) wrote a blog here to express his thoughts. He did raise some valid points. I don’t know why there is an absence of international candidates. I know that the team at PASS are looking to improve the situation, so I do not want to give the impression that PASS are doing nothing. For reference please see Bill Graziano’ s article here to see how PASS are addressing the situation. There is a clear direction to change the rules within PASS to give greater inclusion of international members. In addition to this, I wanted to explore a couple of potential approaches to address the situation. I am not saying that they are the right answer, but when I see challenges, I like to bring potential solutions to the table. 1.       Use the PASS mission statement to define a tactical objective that engages community leaders into the election process. If you are not familiar with the PASS mission statement, let me provide it here as laid out on the PASS website. “Empower data professionals who leverage Microsoft technologies to connect, share, and learn through networking, knowledge sharing, and peer-based learning” PASS fulfil this mission statement regularly. Whether you attend SQL Saturday, SQLRally, SQLPASS and BA conference itself. The biggest value of PASS is the ability to bring our profession together. And the 24 hour hop allows you to learn from the comfort of your own office/home. This mission should be extended to define a tactical objectives that bring greater networking and knowledge sharing between PASS Chapter leaders/Regional Mentors and PASS HQ. It should help educate the leaders about the opportunities of elections and how leaders can become involved. I know PASS engage with Chapter leaders on a regular basis to discuss community matters for the benefit of PASS members. How could this be achieved? Perhaps PASS could perform a quarterly virtual meeting that specifically looks at helping leaders become more involved with the election process 2.       
Evolve the Global Growth Strategy into a Global Engagement Strategy. One of the remits of the PASS board over the last couple of years is the Global Growth strategy. This has been very successful as we have seen the massive growth of events across the world. For that, I congratulate the board for this success. Perhaps the time is now right to look at solidifying this success, through a Global Engagement Strategy that starts with the collaboration of Chapter Leaders, Regional Mentors and Evangelists in their respective Countries or Regions. The engagement strategy should look at increasing collaboration between community leaders for the benefit of their respective communities. It should also provide a channel for encouraging leaders to put themselves forward for the elections. How could this be achieved? In the UK, there has been a big growth in PASS Chapters and SQL Server Events that was approaching saturation point. The introduction of the Community Engagement Day -- channelled through the SQLBits conference -- has enabled Chapter Leaders to collaborate, connect and share with PASS, Sponsors and Microsoft. It also provides the ability for Chapter Leaders to speak directly to the PASS representatives from PASSHQ. This brings with it the ability for PASS community evangelists to communicate PASS objectives. It has also been the event where we have found out; and/or encouraged, Chapter Leaders to put themselves forward for elections. People like encouragement and validation when going for something like an election, and being able to discuss this with peers at a dedicated event provides a useful platform. PASS has the people in place already to facilitate such an event. Regional Mentors could potentially help organise such events on an annual basis, with PASSHQ providing support in providing a room/Lync access for the event to take place. It would be really good if a PASSHQ representative could attend in person as well.   3.       Restrict candidates to serve only a limited number of terms. A frequent comment I saw on social networking was that the elections can be seen by some as a popularity conference. Perhaps by limiting the number of terms that an individual can serve on either the Nom Com or the BOD, other candidates may be encouraged to be more actively involved within the PASS election process. I don’t think that the current byelaws deal with this particular suggestion. I also saw a couple of tweets that stated that more active community members did not apply for the Nom Com. I struggled to understand how the individuals of the tweets measured “more active”. It just also further solidified the subjective nature of elections. In the absence of how candidates are put forward for the elections. Then a restriction of terms enables the opportunity to be extended to others. How could this be achieved? Set a resolution that is put to a community vote as to the viability of such a solution. For example, the questions for the vote could be: Should individuals in the Nom Com and BoD be limited to a certain number of terms?  Yes/No. What is the maximum number of terms a candidate could serve?   It would be simple to execute such a vote, and the community will have an opportunity to have a say in an important aspect of the PASS organisation. And is the change is successful, then add it as a byelaw.   So there are some of my thoughts. I am not saying they are right or wrong. 
    But I do hope that there is a concerted effort to encourage more candidates from other parts of the globe to become involved with future elections. It would be good to hear your thoughts.

    Thanks,

    Chris

    Read the article

  • Self-signed certificates for a known community

    - by costlow
    Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions Manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users that need to use self-signed certificates without involving a public Certificate Authority. The role of self-signed certificates within a known community You may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities where people will trust that my certificate is mine, but does not scale widely where I cannot actually contact or know the systems that will need to trust my certificate. Public Certificate Authorities are widely trusted already because they abide by many different requirements and frequent checks. An example would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Managed machines help this because you can automate the rollout, but they are not required -- the major point simply that people will trust and import your certificate. How to distribute self-signed certificates for a known community There are several steps required to distribute a self-signed certificate to users so that they will properly trust it. These steps are: Creating a public/private key pair for signing. Exporting your public certificate for others Importing your certificate onto machines that should trust you Verify work on a different machine Creating a public/private key pair for signing Having a public/private key pair will give you the ability both to sign items yourself and issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs.Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here: Generate the key pair.keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember. Substitute your name of something like "mykey." The sigalg of EC (Elliptical Curve) and keysize of 571 will give your key a good strong lifetime. All keys are set to expire. Two years or 730 days is a reasonable compromise between not-long-enough and too-long. Most public Certificate Authorities will sign something for one to five years. You will be placing your keys in javakeystore_keepsecret.jks -- this file will contain private keys and therefore should not be shared. If someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems and private key stores. Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above.What is your first and last name?  [Unknown]:  First LastWhat is the name of your organizational unit?  
[Unknown]:  Line of BusinessWhat is the name of your organization?  [Unknown]:  MyCompanyWhat is the name of your City or Locality?  [Unknown]:  City NameWhat is the name of your State or Province?  [Unknown]:  CAWhat is the two-letter country code for this unit?  [Unknown]:  USIs CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?  [no]:  yesEnter key password for <erikcostlow>        (RETURN if same as keystore password): Verify your work:keytool -list -keystore javakeystore_keepsecret.jksYou should see your new key pair. Exporting your public certificate for others Public Key Infrastructure relies on two simple concepts: the public key may be made public and the private key must be private. By exporting your public certificate, you are able to share it with others who can then import the certificate to trust you. keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others. They will use this certificate to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file on an area that people within the known community should trust, such as an intranet page. Import the certificate onto machines that should trust you In order to trust the certificate, people within your known network must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done through any band: email, phone, in-person, etc. Known networks can usually do this Determine the right keystore: For an individual user looking to trust another, the correct file is within that user’s directory.e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs For system-wide installations, Java’s Certificate Authorities are in JAVA_HOMEe.g. C:\Program Files\Java\jre8\lib\security\cacerts File paths for Mac and Linux are included in the link above. Follow the instructions to import the certificate into the keystore. keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer In this case, I am still using my name for the alias because it’s easy for me to remember. You may also use an alias of your company name. Scaling distribution of the import The easiest way to apply your certificate across many machines is to just push the .certs or cacerts file onto them. When doing this, watch out for any changes that people would have made to this file on their machines. Trusted.certs: When publishing into user directories, your file will overwrite any keys that the user has added since last update. CACerts: It is best to re-run the import command with each installation rather than just overwriting the file. If you just keep the same cacerts file between upgrades, you will overwrite any CAs that have been added or removed. By re-importing, you stay up to date with changes. Verify work on a different machine Verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate the deployment rule set by: Create and sign the deployment rule set on the computer that holds the private key. 
Copy the deployment rule set on to the different machine where you have imported the signing certificate. Verify that the Java Control Panel’s security tab shows your deployment rule set. Verifying an individual JAR file or multiple JAR files You can test a certificate chain by using the jarsigner command. jarsigner -verify filename.jar If the output does not say "jar verified" then run the following command to see why: jarsigner -verify -verbose -certs filename.jar Check the output for the term “CertPath not validated.”
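    The workflow in this article is entirely keytool and jarsigner, as shown above. Since most of the rest of this page is .NET-centric, here is a purely illustrative C# sketch of the same known-community trust model: generate a self-signed certificate, hand out only the public part, and have a relying party trust it explicitly rather than through the system's default CA roots. This is an analogy, not part of the Java deployment steps; the subject name is the article's example values, and it assumes .NET Core 3.0+ for X509ChainTrustMode.CustomRootTrust.

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class KnownCommunityTrustDemo
{
    static void Main()
    {
        // Analogous to "keytool -genkeypair": a self-signed certificate with an EC key, valid ~2 years.
        using var key = ECDsa.Create(ECCurve.NamedCurves.nistP384);
        var request = new CertificateRequest(
            "CN=First Last, OU=Line of Business, O=MyCompany",
            key, HashAlgorithmName.SHA384);
        using var signerCert = request.CreateSelfSigned(
            DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddDays(730));

        // Analogous to "keytool -exportcert": export only the public certificate for distribution.
        byte[] publicBytes = signerCert.Export(X509ContentType.Cert);
        var publicOnly = new X509Certificate2(publicBytes);

        // Analogous to "keytool -importcert" on the relying machine: trust this certificate
        // explicitly via a private trust store instead of the default CA roots.
        using var chain = new X509Chain();
        chain.ChainPolicy.TrustMode = X509ChainTrustMode.CustomRootTrust;
        chain.ChainPolicy.CustomTrustStore.Add(publicOnly);
        chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck; // self-signed: no CRL/OCSP

        // Roughly what "jarsigner -verify" checks: does the chain validate against our trust store?
        bool trusted = chain.Build(publicOnly);
        Console.WriteLine(trusted ? "certificate trusted" : "CertPath not validated");
    }
}
```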

    Read the article
