Search Results

Search found 9725 results on 389 pages for 'androi developer'.


  • Forcing an External Activation with Service Broker

    - by Davide Mauri
    In these last days I’ve been working quite a lot with Service Broker, a technology I’m really happy to work with, since it can give a lot of satisfaction. The scale-out solution one can easily build is simply astonishing. I’m helping a company to build a very scalable – and yet almost inexpensive – invoicing system that has to be able to scale out using commodity hardware. To offload the work from the main server to satellite “compute nodes” (yes, I’ve borrowed this term from PDW) we’re using Service Broker and the External Activator application available in the SQL Server Feature Pack. For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code. http://msdn.microsoft.com/en-us/library/ms171617.aspx (Look for “Event-Based Activation”) To make life even easier, Microsoft released the External Activator application that saves you from writing even this code. http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/ The External Activator application can be configured to execute your own application so that each time a message – an invoice in my case – arrives in the target queue, the configured application is executed and the invoice is calculated. A very nice feature of the External Activator is that it can automatically run as many instances of the configured application as needed to process as many messages as your system can handle. This makes it a lot easier to create a scale-out solution, leaving to the developer only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker itself, since everything can be encapsulated in Stored Procedures, so that – for them – developing such a scale-out asynchronous solution is not much more complex than executing a bunch of Stored Procedures. Now, if everything works correctly, you don’t have to bother about anything else. You put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if, for some reason, your application fails to process the messages? What if, for example, it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator application, so now the question is: how do you wake that app up? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message is one of the conditions that can fire the activation process)?
    The “trick” is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

        declare @conversationHandle uniqueidentifier;
        declare @n xml = N'
        <EVENT_INSTANCE>
          <EventType>QUEUE_ACTIVATION</EventType>
          <PostTime>' + CONVERT(CHAR(24), GETDATE(), 126) + '</PostTime>
          <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
          <ServerName>[your_server_name]</ServerName>
          <LoginName>[your_login_name]</LoginName>
          <UserName>[your_user_name]</UserName>
          <DatabaseName>[your_database_name]</DatabaseName>
          <SchemaName>[your_queue_schema_name]</SchemaName>
          <ObjectName>[your_queue_name]</ObjectName>
          <ObjectType>QUEUE</ObjectType>
        </EVENT_INSTANCE>';

        begin dialog conversation @conversationHandle
            from service [<your_initiator_service_name>]
            to service '<your_event_notification_service>'
            on contract [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
            with encryption = off, lifetime = 6000;

        send on conversation @conversationHandle
            message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

        end conversation @conversationHandle;

    That’s it! Put the code in a Stored Procedure and you can add to your application a button that says “Force Queue Processing” (or something similar) to start the activation process whenever you need it (which should not be needed too frequently, but it may happen); a sketch of the application side is shown below. PS: I know that the “fire-and-forget” technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don’t see how it can hurt, so I decided to stay very close to the KISS principle.
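    For the application side of that button, all that is needed is a call to the stored procedure. Here is a minimal C# sketch; the procedure name (dbo.ForceQueueActivation) and the connection string are placeholders for whatever you choose when you wrap the script above in a procedure of your own:

        // Minimal sketch: execute the "force activation" stored procedure from the application.
        // dbo.ForceQueueActivation is a hypothetical name for a procedure containing the T-SQL above.
        using System.Data;
        using System.Data.SqlClient;

        public static class QueueActivation
        {
            public static void ForceQueueProcessing(string connectionString)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("dbo.ForceQueueActivation", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    connection.Open();
                    // Sends the QUEUE_ACTIVATION notification message, waking up the External Activator.
                    command.ExecuteNonQuery();
                }
            }
        }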

  • SSAS Compare: an intern’s journey

    - by Red Gate Software BI Tools Team
    About a month ago, David mentioned an intern working in the BI Tools Team. That intern happens to be me! In five weeks’ time, I’ll start my second year of Computer Science at the University of Cambridge and be a full-time student again, but for the past eight weeks, I’ve been living a completely different life. As Jon mentioned before, the teams here at Red Gate are small and everyone (including the interns!) is responsible for the product as a whole. I’ve attended planning sessions, UX tests, daily meetings, and everything else a full-time member of the team would; I had as much say in where we would go next with the product as anyone; I was able to see that what I was doing was an important part of the product from the feedback we got in the UX tests. All these things almost made me forget that this is just an internship and not my full-time job. First steps at Red Gate Being based in Cambridge, Red Gate has many Cambridge university graduates working for them. They also hire some Cambridge undergraduates for internships each summer. With its popularity with university graduates and its great working environment, Red Gate has managed to build up a great reputation. When I thought of doing an internship here in Cambridge, Red Gate just seemed to be the obvious choice for my first real work experience. On my first day at Red Gate, David, the lead developer for SSAS Compare, helped me settle in and explained what I’d be doing. My task was to improve the user experience of displaying differences between MDX scripts by syntax highlighting, script formatting, and improving the difference identification in the first place. David suggested how I should approach the problem, but left all the details and design decisions to me. That was when I realised how much independence and responsibility I’d have. What I’ve done If you launch the latest version of SSAS Compare and drill down to an MDX script difference, you can see the changes that have been made. In earlier versions, you could only see the scripts in plain text on both sides — either in black or grey, depending on whether they were the same or not. However, you couldn’t see exactly where the scripts were different, which was especially annoying when the two scripts were large – as they often are. Furthermore, if parts of the two scripts were formatted differently, they seemed to be different but were actually the same, which caused even more confusion and made it difficult to see where the differences were. All these issues have been fixed now. The two scripts are automatically formatted by the tool so that if two things are syntactically equivalent, they look the same – including case differences in keywords! The actual difference is highlighted in grey, which makes them easy to spot. The difference identification has been improved as well, so two scripts aren’t identified as different if there’s just a difference in meaningless whitespace characters, or when you have “select” on one side and “SELECT” on the other. We also have syntax highlighting, which makes it easier to read the scripts. How I did it In order to do the formatting properly, we decided to parse the MDX scripts. After some investigation into parser builders, I decided to go with the GOLD Parser builder and the bsn-goldparser .NET engine. GOLD Parser builder provides a fairly nice GUI to write, build, and test grammar in. We also liked the idea of separating the grammar building from parsing a text. 
The bsn-goldparser is one of many .NET engines for GOLD, and although it doesn’t support the newest features of GOLD Parser, it has “the ability to map semantic action classes to terminals or reduction rules, so that a completely functional semantic AST can be created directly without intermediate token AST representation, and without the need for glue code.” That makes it much easier for us to change the implementation in our program when we change the grammar. As bsn-goldparser is open source, and I wanted some more features in it, I contributed two new features which have now been merged to the project. Unfortunately, there wasn’t an MDX grammar written for GOLD already, so I had to write it myself. I was referencing MSDN to get the formal grammar specification, but the specification was all over the place, so it wasn’t that easy to implement and find. We’re aware that we don’t yet fully support all valid MDX, so sometimes you’ll just see the MDX script difference displayed the old way. In that case, there is some grammar construct we don’t yet recognise. If you come across something SSAS Compare doesn’t recognise, we’d love to hear about it so we can add it to our grammar. When some MDX script gets parsed, a tree is produced. That tree can then be processed into a list of inlines which deal with the correct formatting and can be outputted to the screen. Doing all this has led me to many new technologies and projects I haven’t worked with before. This was my first experience with C# and Visual Studio, although I have done things in Java before. I have learnt how to unit test with NUnit, how to do dependency injection with Ninject, how to source-control code with SVN and Mercurial, how to build with TeamCity, how to use GOLD, and many other things. What’s coming next Sadly, my internship comes to an end this week, so there will be less development on MDX difference view for a while. But the team is going to work on marking the differences better and making it consistent with difference indication in the top part of comparison window, and will keep adding support for more MDX grammar so you can see the differences easily in every comparison you make. So long! And maybe I’ll see you next summer!

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace.  In this review, I leverage SQL Server 2012 Data Quality Services, and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability. Crucial Questions For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third party knowledge base by the acknowledged USPS certified address accuracy experts at Melissa Data. Reference Data Services Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization.  These capabilities extend the data quality that DQS has natively by quite a bit. For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area. Step by Step to Guide That was all it took to hook in the subscribed provider -Melissa Data- directly to my DQS Client. The next step was for me to attach and map in my Reference Data from the newly acquired reference data provider, to a domain in my knowledge base. On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a Name and description of my new knowledge base, then proceeded to the Domain Management screen. 
Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name Address Suite City State Zip I added a Suite column in my domain because Melissa Data has the ability to return missing Suites based on last name or company. And that’s a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and under the Reference Data tab, added my third party reference data provider –Melissa Data’s Address Check- and mapped in each domain that I had to the provider’s Schema. Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes with some progress indicators indicating that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by  Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simplistic for any IT professional. Final Note As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

  • Building the Elusive Windows Phone Panorama Control

    When the Windows Phone 7 Developer SDK was released a couple of weeks ago at MIX10, many people noticed the SDK doesn’t include a template for a Panorama control. Here at Clarity we decided to build our own Panorama control for use in some of our prototypes, and I figured I would share what we came up with. There have been a couple of implementations of the Panorama control making their way through the interwebs, but I didn’t think any of them really nailed the experience that is shown in the simulation videos. One of the key design principles in the UX Guide for Windows Phone 7 is the use of motion. The WP7 OS is fairly stripped of extraneous design elements and makes heavy use of typography and motion to give users the necessary visual cues. Subtle animations and wide layouts help give the user a sense of fluidity and consistency across the phone experience. When building the panorama control I was fairly meticulous in recreating the motion as shown in the videos. The effect that is shown in the application hubs of the phone is known as a Parallax Scrolling effect. This pseudo-3D technique has been around in the computer graphics world for quite some time. In essence, the background images move slower than foreground images, creating an illusion of depth in 2D. Here is an example of the traditional use: http://www.mauriciostudio.com/. One of the animation gems I’ve learned while building interactive software is the follow animation. The premise is straightforward: instead of translating content 1:1 with the interaction point, let the content catch up to the mouse or finger. The difference is subtle, but the impact on the smoothness of the interaction is huge. That said, it became the foundation of how I achieved the effect shown below. Source Code Available HERE. Below I briefly describe the approach I took in creating this control, with the caveat that my coding skills aren’t up to snuff with the rest of my colleagues: this code is meant to be an interpretation of the WP7 panorama control and is not intended to be used in a production application.
    1. Layout the XAML. The UI consists of three main components: the background image, the Title, and the Content. You can imagine each of these UI elements existing on its own plane with a corresponding TranslateTransform to create the Parallax effect.
    2. Storyboards + Procedural Animations = Sexy. As I mentioned above, creating a fluid experience was at the top of my priorities while building this control. To recreate the smooth scroll effect shown in the video we need to add some placeholder storyboards that we can manipulate in code to simulate the inertia and snapping. Using the easing functions built into Silverlight helps create a very pleasant interaction.
    3. Handle the Manipulation Events. With Silverlight 3 we have some new touch event handlers. The new Manipulation events make handling the interactivity pretty straightforward. There are two event handlers that need to be hooked up to enable the dragging and motion effects. The first is the ManipulationDelta event (the most relevant code is highlighted in pink): here we are doing some simple math with the manipulation deltas and setting the To values of the animations appropriately. Modifying the storyboards dynamically in code helps to create a natural feel, something that can’t easily be done with storyboards alone. A rough sketch of what such a handler might look like is shown below.
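    The following is only an illustration of that idea, not the code from the original post (which is shown there as screenshots). It assumes the control’s XAML defines a storyboard (moveStoryboard) with named DoubleAnimations for the content, title and background layers; the parallax ratios are arbitrary:

        // Illustrative ManipulationDelta handler: content follows the finger 1:1,
        // title and background follow more slowly to create the parallax illusion.
        // moveStoryboard, contentAnimation, titleAnimation and backgroundAnimation are
        // hypothetical names for elements defined in the control's XAML.
        using System.Windows.Input;

        public partial class PanoramaControl
        {
            private double _targetOffset;

            private void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
            {
                _targetOffset += e.DeltaManipulation.Translation.X;

                contentAnimation.To = _targetOffset;          // 1:1 with the finger
                titleAnimation.To = _targetOffset * 0.5;      // title lags behind
                backgroundAnimation.To = _targetOffset * 0.2; // background lags even more

                moveStoryboard.Begin();
            }

            // The ManipulationCompleted handler works the same way, except it reads
            // e.FinalVelocities.LinearVelocity.X, snaps _targetOffset to the nearest panel,
            // and shortens the animation Duration for stronger flicks.
        }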
    And secondly, the ManipulationCompleted event: here we take the final velocities from the ManipulationCompleted event and apply them to the storyboards to create the snapping and scrolling effects. Most of this code is determining what the next position of the viewport will be. The interesting part (shown in pink) is determining the duration of the animation based on the calculated velocity of the flick gesture. By using velocity as a variable in determining the duration of the animation, we can produce a slow animation for a soft flick and a fast animation for a strong flick.
    Challenges to the Reader. There are a couple of things I didn’t have time to implement in this control, and I would love to see other WPF/Silverlight approaches.
    1. A good mechanism for deciphering when the user is manipulating the content within the panorama control versus the panorama itself. In other words, being able to accurately determine what is a flick and what is a click.
    2. Dynamically sizing the panorama control based on the width of its content. Right now each control panel is 400px; ideally the panel items would be measured and then the panorama control would update its size accordingly.
    3. Background and content wrapping. The WP7 UX guidelines specify that the content and background should wrap at the end of the list. In my code I restrict the drag at the ends of the list (like the iPhone). It would be interesting to see how this would affect the scroll experience.
    Well, it’s been fun building this control, and if you use it I’d love to know what you think. You can download the Source HERE or from the Expression Gallery. Erik Klimczak | [email protected] | twitter.com/eklimcz

  • SQLAuthority News – Community Tech Days – A SQL Legends in Ahmedabad – December 11, 2010

    - by pinaldave
    Ahmedabad is going to be a fortunate city again on December 11. We are going to have SQL Server Legends present at the prestigious Community Tech Days event in Ahmedabad. The venue details are as follows: H K Hall, H K College Campus, Near Handloom House, Opp. Natraj Cinema, Ashram Road, Ahmedabad – 380009. Click here to register for the event. The agenda of the event is as follows:
    10:15am – 10:30am     Welcome – Pinal Dave
    10:30am – 11:15am     SQL Tips and Tricks for .NET Developers by Jacob Sebastian
    11:15am – 11:30am     Tea Break
    11:30am – 12:15pm     Best Database Practice for SharePoint Server by Pinal Dave
    12:15pm – 01:00pm     Self Service Business Intelligence by Rushabh Mehta
    01:00pm – 02:00pm     Lunch
    02:00pm – 02:45pm     Managing your future, Managing your time by Vinod Kumar
    02:45pm – 03:30pm     Windows Azure News and Introducing Storage Services by Mahesh Devjibhai Dhola
    03:30pm – 03:45pm     Tea Break
    03:45pm – 04:30pm     Improve Silverlight application with Threads and MEF by Prabhjot Singh Bakshi
    04:30pm – 04:45pm     Thank you – Mahesh Devjibhai Dhola
    Ahmedabad considers itself extremely fortunate when there are SQL Legends presenting on various subjects in front of the community. Here is a brief introduction to them in my own words (their names are in order of the agenda). 1) Jacob Sebastian (SQL Server MVP) – This person needs no introduction. Every developer and programmer in Ahmedabad and India knows him. He is the founder of various community-related ideas like SQL Challenges, SQL Quiz and BeyondRelational. He works with me on all the community-related activities; we are extremely good friends. 2) Rushabh Mehta (SQL Server MVP) – If you use SQL Server, you know this man. He is the President of the Professional Association for SQL Server (PASS) and one of the leading Business Intelligence (BI) experts in the world. He has blessed Ahmedabad once before and is doing so once again this year. 3) Vinod Kumar (Microsoft Evangelist – SQL Server & BI) – Ahmedabad remembers him very well. During his last visit to Ahmedabad, a fight almost broke out outside the hall amidst the rush to listen to him. There were more people standing and listening to him than seated. This is one man Ahmedabad will never forget. 4) Myself. I will not rate myself in the league of the above-mentioned experts, but I must say that I am fortunate to have friends like those above. We also have two strong .NET presenters – Mahesh and Prabhjot. During this event, there will be plenty of giveaways, lots of fun, demos and pure technical talk – specifically no marketing and promotion, just pure technical talk. The most interesting part is that all the SQL Legends – Jacob, Rushabh and Vinod – are presenting on SQL Server, but with a twist. Jacob – He is going to talk about .NET and SQL – Optimization Techniques. Rushabh – He is going to talk about SQL and BI – Self Service BI. Vinod – He is going to talk about the professional development of developers – Managing Time. Pinal – Best Practices for SharePoint Database Administrators – SharePoint DBA – I have presented this session earlier. I promise this event is going to be one of the best events ever held. You can read about the earlier event over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

  • Dynamic Grouping and Columns

    - by Tim Dexter
    Some good collaboration between myself and Kan Nishida (Oracle BIP Consulting) over at bipconsulting on a question that came in yesterday to an internal mailing list. Is there a way to allow columns to be placed into a template dynamically? This would be similar to the Answers column selector. A customer has said Crystal can do this and I am trying to see how BI Pub can do the same. Example: a report has Regions as a dimension in a table; they want the user to select a parameter that will insert either Units or Dollars without having to create multiple templates. Now, whether Crystal can actually do it or not is another question: can Publisher? Yes we can! Kan took the first stab. His approach was to swap out columns in a table in the report. Some quick steps:
    1. Create a parameter from the BIP server UI.
    2. Declare the parameter in the RTF template. You can check this post to see how you can declare the parameter from the server: http://bipconsulting.blogspot.com/2010/02/how-to-pass-user-input-values-to-report.html
    3. Use the parameter value to control whether a particular column is displayed or not. You can use the <?if@column:.....?> syntax for a column-level IF condition. The if@column construct is covered in the user documentation.
    This would allow a developer to create a report with one or more parameters that let the user pick a column to be included in the report. I took a slightly different tack. With the mention of the column selector in the Answers report, I took that to mean that the user wanted to select more of a dimensional column and then have the report recalculate all its totals and subtotals based on that selected column. This is a little bit more involved and needs some smart XSL and XPATH expressions, but it is still very doable. The user can select a column as a parameter that is passed to the template rather than the query. The parameter value that is actually passed is the element name that you want to regroup the data by. Inside the template we then reference that parameter value in our for-each-group loop. That's where we need the tricky XSL/XPATH code to get the regrouping to happen. At this juncture, a hat tip to Klaus for his article on dynamic sorting that he wrote back in 2006. I basically took his sorting code and applied it to the for-each loop. You can follow both of Kan's first two steps above, i.e.: Create a parameter from the BIP server UI – this just needs to be based on a 'list' type list of values with name/value pairs, e.g. Department/DEPARTMENT_NAME, Job/JOB_TITLE, etc. The user picks the 'friendly' value and the server passes the element name to the template. Declare the parameter in the RTF template – been here before lots of times, right? <?param@begin:group1;'"DEPARTMENT_NAME"'?> I have used a default value so that I can test the functionality inside the template builder (notice the single and double quotes). The next step is to use the template builder to build a re-grouped report layout. It does not matter if it's hard-coded right now; we will add in the dynamic piece next. Once you have a functioning template that is re-grouping correctly, open up the for-each-group field and modify it to use the parameter: <?for-each-group:ROW;./*[name(.) = $group1]?> 'group1' is my grouping parameter, declared above. We need the XPATH expression to find the column in the XML structure that matches the one passed by the parameter. It's essentially looking through the data tree for a match.
    We can show the actual grouping value in the report output with a similar XPATH expression: <?./*[name(.) = $group1]?> In my example, I took things a little further so that I could have a dynamic label for the parameter value. For instance, if I am using MANAGER as the parameter, I want to show: Manager: Tim Dexter. My XML element names are readable, e.g. DEPARTMENT_NAME. It's a simple case of replacing the underscore with a space and then 'initcapping' the result: <?xdoxslt:init_cap(translate($group1,'_',' '))?> With this in place, the user can now select a grouping column in the BIP report viewer and the layout will re-group the data and any calculations based on that column. I built a group-above report, but you could equally build the group-left version to truly mimic the Answers column selector. If you are interested you can get an example report, sample data and layout template here. Of course, you can combine Klaus' dynamic sorting, Kan's conditional column approach and this dynamic grouping to build a real kick-ass report for users that will keep them happy for hours.

  • Using Web Services from an XNA 4.0 WP7 Game

    - by Michael Cummings
    Now that the Windows Phone 7 development tools have been out for a while, let’s talk about how you can use them. Windows Phone 7 (WP7) has two application types that you can create, either Silverlight or XNA, and you can’t really mix the two together. The development environment for WP7 is a special edition of Visual Studio 2010 called Visual Studio 2010 Express for Windows Phone. This edition will be installed with the WP7 tools, even if you have a full edition of VS2010 already installed. While you can use your full edition of VS2010 to do WP7 development, this astute developer has noticed that there are a few things that you can only do in the Express for Windows Phone edition. So let’s start by discussing WP7 networking. On the WP7 platform the only networking available is through web services using WCF or, if you’re really masochistic, the WebClient to do HTTP. In Silverlight, it’s fairly easy to wire up a WCF proxy to call a web service and get some data. In XNA projects, not so much.
    Create the WCF Service. First, we’ll create the service that will return some information that we need in our game. Open Visual Studio 2010 and create a new WCF Web Service project. We’ll use the default implementation, as we only need to see how to use a service; we are not interested in creating a really cool service at this point. However, you may want to follow the instructions in the comments of Service1.svc.cs to change the name to something better; I used DataService, and IDataService for the interface. You should now be able to run the project, and the WCF Test Client will load and properly enumerate your service. At this point we have a functional service that can be consumed by our XNA game.
    Consume the WCF Service. Open Visual Studio 2010 Express for Windows Phone and create a new XNA Game Studio 4.0 Windows Phone Game project. Now if you try to add a service reference to the project, you’ll notice that the option is not available. However, if you add a Silverlight application to your solution, you’ll notice that you can create a service reference there. So, using the Silverlight project, we can create the service reference. Unfortunately you can’t reference the Silverlight project from the XNA Game project, so using Windows Explorer copy the Service References folder from the Silverlight project directory to the XNA Game project directory, then add the folder to your XNA Game project. You’ll need to set the Build Action property to None for all the files, except for Reference.cs, which should be Compile. Truly, we only need Reference.cs, but I find it easier to copy the whole folder. If you try to compile at this point, you’ll notice that we are missing a couple of references: System.Runtime.Serialization, System.Net and System.ServiceModel. Add these to the XNA Game project and you should build successfully. You’ll also need to copy the ServiceReference.ClientConfig file and add it to your project. The WCF infrastructure looks for this file and will complain if it can’t find it. You’ll need to set the Copy to Output Directory property to Copy if Newer. We now need to add the code to call the service and display the results on the screen. Go ahead and add a SpriteFont resource to the Content project and load it in the Game project. There’s nothing here that’s changed much from 3.1, other than that your Content project is now under the Solution node and not the Project node. While you’re at it, add a string field to store the result of the service call, and initialize it to string.Empty.
    Then in the Draw method, write the string out to the screen, but only if it does not equal string.Empty. Now, to wrap this up, let’s create a new field of type DataServiceClient. In the Initialize method, create a new instance of this type using its default constructor; then in LoadContent we can call the service. Since we can only call the GetData method of our service asynchronously, we need to set up a Completed event handler first. Thankfully, Visual Studio helps out a lot here: just create, using the Tab key, whatever VS suggests. In the GetDataCompleted event handler, assign the service result (e.Result) to your string field. If you run your game, you should get something like this. A rough end-to-end sketch of the wiring is shown below. Enjoy!
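    Putting the steps above together, a minimal sketch of the game class might look like the following. The default WCF service template exposes GetData(int value) returning a string, so the generated proxy is assumed to expose GetDataAsync(int) and a GetDataCompleted event; adjust the names to match your own service and sprite font asset:

        // Sketch only: wiring a WCF proxy (DataServiceClient, generated via the Silverlight
        // project and copied into the game) into an XNA 4.0 WP7 game.
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        public class Game1 : Game
        {
            private readonly GraphicsDeviceManager _graphics;
            private SpriteBatch _spriteBatch;
            private SpriteFont _font;
            private DataServiceClient _client;
            private string _serviceResult = string.Empty;

            public Game1()
            {
                _graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void Initialize()
            {
                _client = new DataServiceClient(); // default constructor reads the copied ClientConfig file
                base.Initialize();
            }

            protected override void LoadContent()
            {
                _spriteBatch = new SpriteBatch(GraphicsDevice);
                _font = Content.Load<SpriteFont>("GameFont"); // hypothetical SpriteFont asset name

                // WP7 only allows asynchronous service calls: hook the completed event first,
                // then fire the request.
                _client.GetDataCompleted += (s, e) => _serviceResult = e.Result;
                _client.GetDataAsync(42);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                if (_serviceResult != string.Empty)
                {
                    _spriteBatch.Begin();
                    _spriteBatch.DrawString(_font, _serviceResult, new Vector2(20, 20), Color.White);
                    _spriteBatch.End();
                }
                base.Draw(gameTime);
            }
        }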

  • SharePoint Saturday Michigan 2010 Recap, Slides, and Photos

    - by Brian Jackett
    This past weekend I attended SharePoint Saturday Michigan (SPSMI) in Ann Arbor, Michigan.  For those unfamiliar, SharePoint Saturday is a community driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint.  This made my third SharePoint Saturday attended and second I’ve spoken at.  I believe today it was announced that about 210 people total attended the event.  I was very happy with the turnout, especially the ratio of male to female attendees.  Typically with computer related conferences the ratio leans towards more males attending, but both Peter Serzo (one of conference organizers) and I both commented to each other that at the end of the day it appeared to be close to 40% women in the crowd.  So here’s my recap of the weekend. Arrival     Friday afternoon I drove up from Columbus, OH to Ann Arbor, MI and arrived around 4pm.  I was attempting to avoid the rush hour traffic and construction backups.  Turned out to be a good idea because other speakers coming up Friday got stuck on a highway which literally closed down in both directions due to a bad accident.  I was talking my friend Sean McDonough through the highway closing and this was the first time I had seen a solid black traffic line on Google Maps.  Most of us are familiar with Green, Yellow, and Red, but this line was black if that tells you how bad it got. Speaker “Dinner”     Fast forward a few hours and it was time for the speaker “dinner.”  I put “dinner” in quotes because with this night alone SPSMI set a new bar for nicest and most extravagant speaker appreciation events for SharePoint Saturday.  By tapping into some very influential contacts, the conference organizers were able to provide a truck limo (yep you heard right) with refreshments, access to an underground suite at the Palace of Auburn Hills, and courtside tickets to see the Detroit Pistons play that night.  Being a Michigan native I have to say that I was absolutely floored by this experience and very thankful to our conference organizers Peter, Sebastian, and Jesse along with Trillium Teamologies. Sessions     The actual conference started Saturday morning at 9am with the keynote by Rob Collie who is the Microsoft program manager for PowerPivot.  The day continued and I attended the following sessions: Mike Watson (@mikewat) – “SharePoint 2010 Fight Night: Devs vs. Admins” Karl Swedeberg (@kswedberg) – “A Walk on the Client Side with jQuery“ [my session] Brian Jackett (@briantjackett) - “Real World Deployment of SharePoint 2007 Solutions” Jeff Willinger (@jwillie) - “Social Computing and Collaboration Inside and Outside the 4 Walls” Paul Schaeflein (@paulschaeflein) – “PowerShell for the SharePoint Developer” My Presentation     I had a great time presenting my session on Deploying SharePoint 2007 Solutions, but it wasn’t without its fair share of technical issues.  As my session was right after lunch I came in to my room 10 mins early to set up my laptop, slides, and demos.  As a quick background note, a few months ago I got an upgraded laptop from my company Sogeti and have been dual booting it between XP (factory installed) and Windows Server 2008 R2 w/ Hyper-V.  As such I had prepared all of my demo virtual machines to run under Hyper-V.  
About 3 minutes before my session was scheduled to start though it became apparent that I did not have the correct display drivers to connect Windows Server 2008 R2 to the projector…     As you can imagine this was a slight cause for concern as I was potentially going to be unable to give my presentation.  Luckily for me I usually prepare for such unforeseen issues and had my presentation and some spare VMs that would run on XP on my external hard drive.  Knowing this I rebooted my machine into XP and began my presentation without slides until about 5 mins into the session when everything was up and running on XP.  Despite this being the first time I gave this presentation I have to say it was one of my favorites I’ve given so far.  The audience was very engaged in the session and I received some great, positive feedback afterwards.  Thanks to all who attended my session, I appreciate it very much. Link to Presentation Files     For those of you who attended my session and would like my slides or demo PowerShell scripts they can be found on my SkyDrive at the link below.  Also, if you have a few minutes and wouldn’t mind rating my session I have this session posted on SpeakerRate.  As speakers we always appreciate any and all feedback attendees offer, so thank you if you are able to provide any. SkyDrive folder with session files Rate my SharePoint 2007 Solutions session   Picture Albums     For everyone else, here are my pictures from the weekend.  The first link is to my FaceBook album which will have tagging (recommend this one.)  The second is to my Live album if you care for higher resolution images. http://www.facebook.com/album.php?aid=2154482&id=21905041&l=a3fb72ee8c View Full Album Conclusion     A big thank you goes out to all of the organizers, speakers, sponsors, and attendees of SPSMI.  As I’ve said so many times, without each and every one of you these events wouldn’t be possible.  I thoroughly enjoyed this trip back to my home state and presenting a new session.  For those interested in my upcoming schedule I will be giving two sessions on PowerShell at SharePoint Saturday Charlotte in April, helping plan Stir Trek: Iron Man Edition in May, and I’m submitting sessions to Day of .Net Ann Arbor in May as well.  Beyond that I haven’t planned out any travels.  Thanks for reading my recap.  Look forward to more technical posts now that I have a short break in conferences.         -Frog Out   links: Michigan image

  • LLBLGen Pro v3.5 has been released!

    - by FransBouma
    Last weekend we released LLBLGen Pro v3.5! Below the list of what's new in this release. Of course, not everything is on this list, like the large amount of work we put in refactoring the runtime framework. The refactoring was necessary because our framework has two paradigms which are added to the framework at a different time, and from a design perspective in the wrong order (the paradigm we added first, SelfServicing, should have been built on top of Adapter, the other paradigm, which was added more than a year after the first released version). The refactoring made sure the framework re-uses more code across the two paradigms (they already shared a lot of code) and is better prepared for the future. We're not done yet, but refactoring a massive framework like ours without breaking interfaces and existing applications is ... a bit of a challenge ;) To celebrate the release of v3.5, we give every customer a 30% discount! Use the coupon code NR1ORM with your order :) The full list of what's new: Designer Rule based .NET Attribute definitions. It's now possible to specify a rule using fine-grained expressions with an attribute definition to define which elements of a given type will receive the attribute definition. Rules can be assigned to attribute definitions on the project level, to make it even easier to define attribute definitions in bulk for many elements in the project. More information... Revamped Project Settings dialog. Multiple project related properties and settings dialogs have been merged into a single dialog called Project Settings, which makes it easier to configure the various settings related to project elements. It also makes it easier to find features previously not used  by many (e.g. type conversions) More information... Home tab with Quick Start Guides. To make new users feel right at home, we added a home tab with quick start guides which guide you through four main use cases of the designer. System Type Converters. Many common conversions have been implemented by default in system type converters so users don't have to develop their own type converters anymore for these type conversions. Bulk Element Setting Manipulator. To change setting values for multiple project elements, it was a little cumbersome to do that without a lot of clicking and opening various editors. This dialog makes changing settings for multiple elements very easy. EDMX Importer. It's now possible to import entity model data information from an existing Entity Framework EDMX file. Other changes and fixes See for the full list of changes and fixes the online documentation. LLBLGen Pro Runtime Framework WCF Data Services (OData) support has been added. It's now possible to use your LLBLGen Pro runtime framework powered domain layer in a WCF Data Services application using the VS.NET tools for WCF Data Services. WCF Data Services is a Microsoft technology for .NET 4 to expose your domain model using OData. More information... New query specification and execution API: QuerySpec. QuerySpec is our new query specification and execution API as an alternative to Linq and our more low-level API. It's build, like our Linq provider, on top of our lower-level API. More information... SQL Server 2012 support. The SQL Server DQE allows paging using the new SQL Server 2012 style. More information... System Type converters. For a common set of types the LLBLGen Pro runtime framework contains built-in type conversions so you don't need to write your own type converters anymore. 
Public/NonPublic property support. It's now possible to mark a field / navigator as non-public which is reflected in the runtime framework as an internal/friend property instead of a public property. This way you can hide properties from the public interface of a generated class and still access it through code added to the generated code base. FULL JOIN support. It's now possible to perform FULL JOIN joins using the native query api and QuerySpec. It's left to the developer to check whether the used target database supports FULL (OUTER) JOINs. Using a FULL JOIN with entity fetches is not recommended, and should only be used when both participants in the join aren't the target of the fetch. Dependency Injection Tracing. It's now possible to enable tracing on dependency injection. Enable tracing at level '4' on the traceswitch 'ORMGeneral'. This will emit trace information about which instance of which type got an instance of type T injected into property P. Entity Instances in projections in Linq. It's now possible to return an entity instance in a custom Linq projection. It's now also possible to pass this instance to a method inside the query projection. Inheritance fully supported in this construct. Entity Framework support The Entity Framework has been updated in the recent year with code-first support and a new simpler context api: DbContext (with DbSet). The amount of code to generate is smaller and the context simpler. LLBLGen Pro v3.5 comes with support for DbContext and DbSet and generates code which utilizes these new classes. NHibernate support NHibernate v3.2+ built-in proxy factory factory support. By default the built-in ProxyFactoryFactory is selected. FluentNHibernate Session Manager uses 1.2 syntax. Fluent NHibernate mappings generate a SessionManager which uses the v1.2 syntax for the ProxyFactoryFactory location Optionally emit schema / catalog name in mappings Two settings have been added which allow the user to control whether the catalog name and/or schema name as known in the project in the designer is emitted into the mappings.

  • Integrate BING API for Search inside ASP.Net web application

    - by sreejukg
    As you might already know, Bing is Microsoft’s search engine and is getting more popular day by day. Bing offers APIs that can be integrated into your website to extend its functionality. At this moment, there are two important APIs available: the Bing Search API and Bing Maps. The Search API enables you to build applications that utilize Bing’s technology. The API allows you to search multiple source types such as web, images, video, etc. and supports various output protocols such as JSON, XML, and SOAP. You will also be able to customize the search results as you wish for your public-facing website. The Bing Maps API allows you to build robust applications that use Bing Maps. In this article I am going to describe how you can integrate Bing search into your website. To start using Bing, you first need to sign in to http://www.bing.com/toolbox/bingdeveloper/ using your Windows Live credentials. Click on the Sign in button and you will be asked to enter your Windows Live credentials. Once signed in you will be redirected to the Developer page. Here you can create applications and get an AppID for each application. Since I am a first-time user, I don’t have any applications added. Click on the Add button to add a new application. You will be asked to enter certain details about your application. The fields are straightforward; the only thing you need to note is the website field, where you need to enter the website address from which you are going to use this application, and this field is optional too. Of course you need to agree to the terms and conditions and then click Save. Once you click on Save, the application will be created and the application ID will be available for your use. Now we have the AppID. Basically Bing supports three protocols: JSON, XML and SOAP. JSON is useful if you want to call the search requests directly from the browser and use JavaScript to parse the results, which makes JSON the favorite choice for AJAX applications. XML is the alternative for applications that do not support SOAP, e.g. Flash, Silverlight, etc. SOAP is ideal for strongly typed languages and gives a request/response object model. In this article I am going to demonstrate how to query the Bing API using the SOAP protocol from an ASP.NET application. For the purpose of this demonstration, I am going to create an ASP.NET project and implement the search functionality in an aspx page. Open Visual Studio, navigate to File -> New Project and select ASP.NET Empty Web Application; I named the project “BingSearchSample”. Add a Search.aspx page to the project; once added, the solution explorer will look similar to the following. Now you need to add a web reference to the SOAP service available from Bing. To do this, from the solution explorer, right-click your project and select Add Service Reference. Now the new service reference dialog will appear. In the bottom left of the dialog you can find the Advanced button; click on it. Now the service reference settings dialog will appear. In the bottom left, you can find the Add Web Reference button; click on it. The add web reference dialog will appear now. Enter the URL as http://api.bing.net/search.wsdl?AppID=<YourAppIDHere>&version=2.2 (replace <yourAppIDHere> with the AppID you have generated previously) and click on the button next to it. This will find the web service methods available. You can change the namespace suggested by Bing, but for the purpose of this demonstration I have accepted all the default settings.
    Click on the Add Reference button once you are done. Now the web reference to the Search service will be added to your project. You can find it under the solution explorer of your project. Now in the Search.aspx page that you previously created, place a textbox, a button and a grid view. For the purpose of this demonstration, I have given them the identifiers (IDs) txtSearch, btnSearch and gvSearch respectively. The idea is to search the text entered in the text box using the Bing service and show the results in the grid view. In the design view, the Search.aspx page looks as follows. In the search.aspx.cs page, add a using statement that points to net.bing.api. I have added the following code for the button click event handler (a rough sketch of this handler appears at the end of this post). The code is very straightforward: it just calls the service with your AppID, a query to search and a source for searching. Let us run this page and see the output when I enter Microsoft in my textbox. If you want to search a specific site, you can include the site name in the query parameter. For example, the following query will search for the word Microsoft on the www.microsoft.com website. searchRequest.Query = “site:www.microsoft.com Microsoft”; The output of this query is as follows. Integrating the Bing Search API into your website is easy, and there is no limit to how much you can customize the interface. There is no Bing branding required, so I believe this is a great option for web developers when they plan for site search.
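    For reference, here is a sketch of what that button click handler can look like with the proxy generated from search.wsdl. The type names (BingService, SearchRequest, SourceType) and the net.bing.api namespace follow the Bing API 2.x WSDL and the defaults accepted above, but may differ if you chose other settings, and YOUR_APP_ID is a placeholder for the AppID generated earlier:

        // Sketch: call the Bing SOAP Search API and bind the web results to the GridView.
        using System;
        using BingSearchSample.net.bing.api; // adjust to the web reference namespace you accepted

        public partial class Search : System.Web.UI.Page
        {
            protected void btnSearch_Click(object sender, EventArgs e)
            {
                var request = new SearchRequest
                {
                    AppId = "YOUR_APP_ID",             // the AppID generated on the Bing developer page
                    Query = txtSearch.Text,            // text entered by the user
                    Sources = new[] { SourceType.Web } // search the web source
                };

                using (var service = new BingService())
                {
                    SearchResponse response = service.Search(request);

                    // Each web result exposes fields such as Title, Description and Url.
                    gvSearch.DataSource = response.Web.Results;
                    gvSearch.DataBind();
                }
            }
        }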

  • New .NET Library for Accessing the Survey Monkey API

    - by Ben Emmett
    I’ve used Survey Monkey’s API for a while, and though it’s pretty powerful, there’s a lot of boilerplate each time it’s used in a new project, and the json it returns needs a bunch of processing to be able to use the raw information. So I’ve finally got around to releasing a .NET library you can use to consume the API more easily. The main advantages are:
    - Only ever deal with strongly-typed .NET objects, making everything much more robust and a lot faster to get going
    - Automatically handles things like rate-limiting and paging through results
    - Uses combinations of endpoints to get all relevant data for you, and processes raw response data to map responses to questions
    To start, either install it using NuGet with PM> Install-Package SurveyMonkeyApi (easier option), or grab the source from https://github.com/bcemmett/SurveyMonkeyApi if you prefer to build it yourself. You’ll also need to have signed up for a developer account with Survey Monkey, and have both your API key and an OAuth token. A simple usage would be something like:

        string apiKey = "KEY";
        string token = "TOKEN";
        var sm = new SurveyMonkeyApi(apiKey, token);
        List<Survey> surveys = sm.GetSurveyList();

    The surveys object is now a list of surveys with all the information available from the /surveys/get_survey_list API endpoint, including the title, id, date it was created and last modified, language, number of questions / responses, and relevant urls. If there are more than 1000 surveys in your account, the library pages through the results for you, making multiple requests to get a complete list of surveys. All the filtering available in the API can be controlled using .NET objects. For example you might only want surveys created in the last year and containing “pineapple” in the title:

        var settings = new GetSurveyListSettings
        {
            Title = "pineapple",
            StartDate = DateTime.Now.AddYears(-1)
        };
        List<Survey> surveys = sm.GetSurveyList(settings);

    By default, whenever optional fields can be requested with a response, they will all be fetched for you. You can change this behaviour if for some reason you explicitly don’t want the information, using

        var settings = new GetSurveyListSettings
        {
            OptionalData = new GetSurveyListSettingsOptionalData
            {
                DateCreated = false,
                AnalysisUrl = false
            }
        };

    Survey Monkey’s 7 read-only endpoints are supported, and the other 4 which make modifications to data might be supported in the future. The endpoints are:

        Endpoint                         Method                 Object returned
        /surveys/get_survey_list         GetSurveyList()        List<Survey>
        /surveys/get_survey_details      GetSurveyDetails()     Survey
        /surveys/get_collector_list      GetCollectorList()     List<Collector>
        /surveys/get_respondent_list     GetRespondentList()    List<Respondent>
        /surveys/get_responses           GetResponses()         List<Response>
        /surveys/get_response_counts     GetResponseCounts()    Collector
        /user/get_user_details           GetUserDetails()       UserDetails
        /batch/create_flow               Not supported          Not supported
        /batch/send_flow                 Not supported          Not supported
        /templates/get_template_list     Not supported          Not supported
        /collectors/create_collector     Not supported          Not supported

    The hierarchy of objects the library can return is Survey, List<Page>, List<Question>, QuestionType, List<Answer>, List<Item>, List<Collector>, List<Response>, Respondent, List<ResponseQuestion>, List<ResponseAnswer>. Each of these classes has properties which map directly to the names of properties returned by the API itself (though using PascalCasing which is more natural for .NET, rather than the snake_casing used by SurveyMonkey).
For most users, Survey Monkey imposes a rate limit of 2 requests per second, so by default the library leaves at least 500ms between requests. You can request higher limits from them, so if you want to change the delay between requests just use a different constructor: var sm = new SurveyMonkeyApi(apiKey, token, 200); //200ms delay = 5 reqs per sec There’s a separate cap of 1000 requests per day for each API key, which the library doesn’t currently enforce, so if you think you’ll be in danger of exceeding that you’ll need to handle it yourself for now.  To help, you can see how many requests the current instance of the SurveyMonkeyApi object has made by reading its RequestsMade property. If the library encounters any errors, including communicating with the API, it will throw a SurveyMonkeyException, so be sure to handle that sensibly any time you use it to make calls. Finally, if you have a survey (or list of surveys) obtained using GetSurveyList(), the library can automatically fill in all available information using sm.FillMissingSurveyInformation(surveys); For each survey in the list, it uses the other endpoints to fill in the missing information about the survey’s question structure, respondents, and responses. This results in at least 5 API calls being made per survey, so be careful before passing it a large list. It also joins up the raw response information to the survey’s question structure, so that for each question in a respondent’s set of replies, you can access a ProcessedAnswer object. For example, a response to a dropdown question (from the /surveys/get_responses endpoint) might be represented in json as { "answers": [ { "row": "9384627365", } ], "question_id": "615487516" } Separately, the question’s structure (from the /surveys/get_survey_details endpoint) might have several possible answers, one of which might look like { "text": "Fourth item in dropdown list", "visible": true, "position": 4, "type": "row", "answer_id": "9384627365" } The library understands how this mapping works, and uses that to give you the following ProcessedAnswer object, which first describes the family and type of question, and secondly gives you the respondent’s answers as they relate to the question. Survey Monkey has many different question types, with 11 distinct data structures, each of which are supported by the library. If you have suggestions or spot any bugs, let me know in the comments, or even better submit a pull request .
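    Putting the pieces above together, here is a short sketch that fetches recent surveys, fills in the full survey information, keeps an eye on the request count, and handles the library’s exception type. The API key, OAuth token and the library namespace in the using line are placeholders/assumptions; everything else uses the types and methods described above:

        // Sketch: typical end-to-end use of the SurveyMonkeyApi library described in this post.
        using System;
        using System.Collections.Generic;
        using SurveyMonkey; // assumption: adjust to the library's actual namespace

        class Program
        {
            static void Main()
            {
                var sm = new SurveyMonkeyApi("YOUR_API_KEY", "YOUR_OAUTH_TOKEN");
                try
                {
                    // Only surveys created in the last three months.
                    var settings = new GetSurveyListSettings { StartDate = DateTime.Now.AddMonths(-3) };
                    List<Survey> surveys = sm.GetSurveyList(settings);

                    // At least 5 extra API calls are made per survey, so only do this for a small list.
                    sm.FillMissingSurveyInformation(surveys);

                    Console.WriteLine("Fetched {0} surveys using {1} API requests.",
                        surveys.Count, sm.RequestsMade);
                }
                catch (SurveyMonkeyException ex)
                {
                    // Thrown for any error, including problems communicating with the API.
                    Console.WriteLine("Survey Monkey API error: " + ex.Message);
                }
            }
        }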

    Read the article

  • Oracle Unveils Industry’s Broadest Cloud Strategy

    - by kellsey.ruppel
    Oracle Unveils Industry’s Broadest Cloud Strategy Adds Social Cloud and Showcases early customers Redwood Shores, Calif. – June 6, 2012 “Almost seven years of relentless engineering and innovation plus key strategic acquisitions. An investment of billions. We are now announcing the most comprehensive Cloud on the planet Earth,” said Oracle CEO, Larry Ellison. “Most cloud vendors only have niche assets. They don’t have platforms to extend. Oracle is the only vendor that offers a complete suite of modern, socially-enabled applications, all based on a standards-based platform.” News Facts In a major strategy update today, Larry Ellison announced the industry’s broadest and most advanced Cloud strategy and introduced Oracle Cloud Social Services, a broad Enterprise Social Platform offering. Oracle Cloud delivers a broad set of industry-standards based, integrated services that provide customers with subscription-based access to Oracle Platform Services, Application Services, and Social Services, all completely managed, hosted and supported by Oracle. Offering a wide range of business applications and platform services, the Oracle Cloud is the only cloud to enable customers to avoid the data and business process fragmentation that occurs when using multiple, siloed public clouds. Oracle Cloud is powered by leading enterprise-grade infrastructure, including Oracle Exadata and Oracle Exalogic, providing customers and partners with a high-performance, reliable, and secure infrastructure for running critical business applications. Oracle Cloud enables easy self-service for both business users and developers. Business users can order, configure, extend, and monitor their applications. Developers and administrators can easily develop, deploy, monitor and manage their applications. As part of the event, Oracle also showcased several early Oracle Cloud customers and partners including system integrators and independent software vendors. Oracle Cloud Platform Services Built on a common, complete, standards-based and enterprise-grade set of infrastructure components, Oracle Cloud Platform Services enable customers to speed time to market and lower costs by quickly building, deploying and managing bespoke applications. Oracle Cloud Platform Services will include: Database Services to manage data and build database applications with the Oracle Database. Java Services to develop, deploy and manage Java applications with Oracle WebLogic. Developer Services to allow application developers to collaboratively build applications. Web Services to build Web applications rapidly using PHP, Ruby, and Python. Mobile Services to allow developers to build cross-platform native and HTML5 mobile applications for leading smartphones and tablets. Documents Services to allow project teams to collaborate and share documents through online workspaces and portals. Sites Services to allow business users to develop and maintain visually engaging .com sites Analytics Services to allow business users to quickly build and share analytic dashboards and reports through the Cloud. Oracle Cloud Application Services Oracle Cloud Application Services provides customers access to the industry’s broadest range of enterprise applications available in the cloud today, with built-in business intelligence, social and mobile capabilities. 
Easy to setup, configure, extend, use and administer, Oracle Cloud Application Services will include: ERP Services: A complete set of Financial Accounting, Project Management, Procurement, Sourcing, and Governance, Risk & Compliance solutions. HCM Services: A complete Human Capital Management solution including Global HR, Workforce Lifecycle Management, Compensation, Benefits, Payroll and other solutions. Talent Management Services: A complete Talent Management solution including Recruiting, Sourcing, Performance Management, and Learning. Sales and Marketing Services: A complete Sales and Marketing solution including Sales Planning, Territory Management, Leads & Opportunity Management, and Forecasting. Customer Experience Services: A complete Customer Service solution including Web Self-Service, Contact Centers, Knowledge Management, Chat, and e-mail Management. Oracle Cloud Social Services Oracle Cloud Social Services provides the most broad and complete enterprise social platform available in the cloud today.  With Oracle Cloud Social Services, enterprises can engage with their customers on a range of social media properties in a comprehensive and meaningful fashion including social marketing, commerce, service and listening. The platform also provides enterprises with a rich social networking solution for their employees to collaborate effectively inside the enterprise. Oracle’s integrated social platform will include: Oracle Social Network to enable secure enterprise collaboration and purposeful social networking for business. Oracle Social Data Services to aggregate data from social networks and enterprise data sources to enrich business applications. Oracle Social Marketing and Engagement Services to enable marketers to centrally create, publish, moderate, manage, measure and report on their social marketing campaigns. Oracle Social Intelligence Services to enable marketers to analyze social media interactions and to enable customer service and sales teams to engage with customers and prospects effectively. Supporting Resources Oracle Cloud – learn more cloud.oracle.com – sign up now Webcast – watch the replay About Oracle Oracle engineers hardware and software to work together in the cloud and in your data center. For more information about Oracle (NASDAQ:ORCL), visit www.oracle.com. TrademarksOracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

    Read the article

  • SanjayP’s venture after Microsoft involves no Microsoft

    - by eddraper
    When I was at Microsoft, I always found Sanjay Parthasarathy to be a bright and passionate leader.  While he was a bit disconnected at times with what was really going on out in the trenches, I always thought he was true believer in what we in Developer Platform and Evangelism (DPE) were doing.  He got it.  He had started DPE and kicked a lot of doors down up in Redmond to make it happen.  Back in the early 2000s, battles over platform choices at large customers was trench warfare… bayonets and hand grenades at the P-Code level.  This model was not at all suited to Microsoft’s org structure at the time.  While there were plenty of people fully able to have competitive conversations around Windows Server, or AD, or Exchange, or the desktop, there weren’t many that could have deep technical conversations around Java vs .NET and the platform “stack” as a cohesive, unified unit of value.  This task fell to DPE. Sanjay ended up leaving Microsoft a number of months before me in 2009 and I remember thinking these exact words: “holy shit, SanjayP left Microsoft.”  When SanjayP left DPE years before that,  Sheila Gulati had stepped into his shoes and I thought we where starting to miss a beat.  Sheila had built an amazing business at Microsoft India, but I don’t recall being inspired by her as a leader.  SanjayP’s talks felt like the opening scene of “Patton” with George C. Scott pacing in front of the American flag.  Sheila was a voice on a con-call.  When she moved on in 2007, Walid Abu-Hadba was given the reigns.  Personally, I don’t ever recall even seeing his face.  I think I might recall hearing his voice on some con-calls, but for all intents and purposes he was invisible to me.  Perhaps this was the beginning of my carelessness around seeking “visibility.” Fast forward to Build 2011.  First off, we have no PDC – we have Build.  Microsoft had made an 11 year investment by this time in building an organization to make its technology relevant to developers.  One would think such an org would be in the driver’s seat of such an event, but we see Windows product group people on the podiums.  Watching, I could see the messaging unfold… but no story.  It was like the old days.  Demos and PowerPoints by team members building the tech, and in many cases VPs.  The ensuing confusion is almost legendary now.  Windows 8 was, and is, a pretty big deal… but who is telling the story – not just features and benefits, but the story around how it all fits together. Having been out of Microsoft for two years now, and looking in, I can only conclude that the “DPE of old” has at best been emasculated, and at worst been completely marginalized by internal politics, or perhaps the eternal march of the corporate entropy generator that resides at all large companies.  I don’t think this is a good thing for anyone. And now, back to Sanjay who is the father of Microsoft DPE… I noticed that he has moved back to India and is doing start-up work.  His current company Indix looks to be doing some interesting things with “big data” and here’s their stack: Nary a trace of anything Microsoft.  What could account for this?  I wonder….  Better availability of labor and expertise in India for this stack?  Donno, but even in India, leet R and Hadoop skills have to be hard to find. Technical superiority?  This, I sincerely doubt. This stack, with SanjayP’s name as CEO leaves me with an unsettling feeling.  If he did believe, he no longer does.  One doesn’t place bets with real money on things they don’t believe in.  
    Perhaps he never did believe, and was simply a corporate creature seeking out a niche for himself, manipulating me and others along the way. Or perhaps it’s anger… be it passive aggression or an outright “in your face F*** you” to his former masters. I guess in the end, only he knows the true reason… but I have my theory...

    Read the article

  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
    Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on that). I ran into some pretty nasty compatibility issues regarding .NET 3.5 which I'll describe in this post. I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with developer folks reading my blog. Specifically it's the install order that screwed things up for me -  installing Visual Studio before explicitly installing .NET 3.5 from Windows Features - in particular. If you install Visual Studio 2010 I highly recommend you install .NET 3.5 from Windows features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through. So when I installed Windows 8, and then looked at the Windows Features to install after the fact in the Windows Feature dialog, I thought - .NET 3.5 - who needs it. I'd be happy to not have to install .NET 3.5, but unfortunately I found out quite a while after initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it. Enabling .NET 3.5 in Windows 8 If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to do this because you can use the Windows Features dialog to enable .NET 3.5: And that *should* do the trick. If you do this before you install other apps that require .NET 3.5 and install a non-SP1 one version of it, you are going to have no problems. Unfortunately for me, even after I've installed the above, when I run the CodeRush installer I still get this lovely dialog: Now I double checked to see if .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET Console app and running it to verify that it actually runs. And it does… So naturally I thought the CodeRush installer is a little whacky. After some back and forth Alex Skorkin on Twitter pointed me in the right direction: He asked me to look in the registry for exact info on which version of .NET 3.5 is installed here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP where I found that .NET 3.5 SP1 was installed. This is the 64 bit key which looks all correct. However, when I looked under the 32 bit node I found: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5 Notice that the service pack number is set to 0, rather than 1 (which it was for the 64 bit install), which is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, Ok… thanks for that! Easy to fix, you say - just install SP1. Nope, not so easy because the standalone installer doesn't work on Windows 8. I can't get either .NET 3.5 installer or the SP 1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it to do it explicitly. 
    How did this happen? I'm not sure exactly whether this is cause and effect, but I suspect the story goes like this: I installed Windows 8 without support for .NET 3.5, then installed Visual Studio 2010, which installs .NET 3.5 (no SP), leaving me with .NET 3.5 installed but without SP1. I then tried to install CodeRush (error: .NET 3.5 SP1 required) and enabled .NET 3.5 in Windows Features, figuring that would do the trick. But still no go. Now I suspect Visual Studio installed the 32 bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left the 32 bit install, at least, with no SP1. How to fix it: My final solution was to completely uninstall .NET 3.5 *and* to reboot: go to Windows Features, uncheck the .NET Framework 3.5, restart Windows, go back to Windows Features, check .NET Framework 3.5 - and voila, I now have a proper installation of .NET 3.5. I tried this before but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5! More problems: The above fixed me right up, but in looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog. The problem there is that the feature wasn't properly loading from the installer disks or wasn't downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows: dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs. You can try this without the /Source flag first - which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer lives. This also gives you a little more information if something does go wrong. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET
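    If you'd rather not poke through regedit by hand, a small sketch along these lines reads the same keys mentioned above and prints the recorded SP level for both the 64 bit and 32 bit installs (it assumes a 64 bit process; from a 32 bit process the first path is redirected to Wow6432Node anyway):

      using System;
      using Microsoft.Win32;

      class CheckNet35ServicePack
      {
          static void Main()
          {
              // 64 bit view and the 32 bit (Wow6432Node) view of the .NET 3.5 setup key
              string[] keyPaths =
              {
                  @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5",
                  @"SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5"
              };
              foreach (string keyPath in keyPaths)
              {
                  using (RegistryKey key = Registry.LocalMachine.OpenSubKey(keyPath))
                  {
                      // The "SP" value holds the service pack level (1 = SP1, 0 = no SP)
                      object sp = (key != null) ? key.GetValue("SP") : null;
                      Console.WriteLine(keyPath + " -> SP = " + (sp ?? "(not installed)"));
                  }
              }
          }
      }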

    Read the article

  • The Winds of Change are a Blowin’

    - by Ajarn Mark Caldwell
    For six years I have been an avid and outspoken fan and paying customer of SourceGear products…from Vault to Dragnet to Fortress and on to Vault Professional, but that is all changing now.  Not the fan part, but the paying customer part.  I’m still a huge fan.  I think that SourceGear does a great job with their product and support has been fantastic when needed (which is not very often).  I think that Eric Sink has done a fine job building a quality company and products, and I appreciate his contributions to the tech community through this blogging and books.  I still think their products are high quality and do a fantastic job of what they do.  But there’s the rub…what they do is no longer enough for me. As I have rebuilt our development team over the last couple of years, and we have begun to investigate Scrum and Kanban, I realize that I need more visibility into the progress of the team.  I need better project management tools, and this is where Vault Professional lags behind several other tools.  Granted, in the latest release (Vault 6.0) they added a nice time tracking feature, but I want more.  (Note, I did contact SourceGear about my quest for more, but apparently, the rest of their customer base has not been clamoring for this and so they have not built it.  Granted, I wasn’t clamoring for it either until just recently, but unfortunately for SourceGear, I want it now and don’t want to wait for them to build it into their system.) Ironically, it was SourceGear themselves who started to turn me on to the possibilities of other tools.  They built a limited integration with Axosoft OnTime which I read about several times on their support site (I used to regularly read and occasionally comment on their Support Forum).  I decided to check out OnTime and was very impressed with the tool for work item tracking and project management (not to mention their great Scrum Master in 10 Minutes video).  I fell in love with the capabilities of OnTime.  Unfortunately, the integration with Vault for source control management was, as I mentioned, limited.  I could have forfeited the integration between work items and source code, but there is too much benefit to linking check-ins to work items for me to give that up.  So then I did what was previously unthinkable for me, I considered switching not just the work tracking tool, but also the source code management tool.  This was really stepping outside my comfort zone because source code is Gold, and not to be trifled with.  When you find a good weapon to protect your gold, stick with it. I looked at Git and Tortoise SVN, but the integration methods for those was pretty rough compared to what I was used to.  The recommended tool from Axosoft’s point of view appeared to be RocketSVN, but I really wasn’t sure I wanted to go the “flavor of Subversion” route.  Then I started thinking about that other tool I liked back when I first chose to go with Vault, but couldn’t afford:  Team Foundation Server.  And what do you know…Microsoft has not only radically improved it over that version from back in 2006, but they also came to their senses about how it should be licensed, and it is much more affordable now.  So I started looking into the latest capabilities in the 2012 version, and I fell in love all over again. I really went deep on checking out the tools.  I watched numerous webcasts from Microsoft partners, went to a beta preview on Microsoft’s campus, and watched a lot of Channel 9 videos on the new ALM features (oooh…shiny).  
Frankly, I was very impressed with the capabilities of the newest version, and figured this was probably our direction.  As an interesting twist of fate, one of my employees crossed paths with an ALM Consultant from Northwest Cadence, a local Microsoft Partner, and one of the companies that produced several of the webcasts that I had been watching.  So I gave Bryon a call and started grilling him to see if he really knew anything or was just another guy who couldn’t find a job so he called himself a consultant.  It turns out Bryon actually knows a lot, especially in an area that was becoming a frustration point for us: Branching strategies and automated builds (that’s probably a whole separate blog entry).  As we talked, Bryon suggested we look into doing a DTDPS (Developer Tools Deployment Planning Services) session with his company.  This is a service that can be paid for by Microsoft Enterprise Agreement planning services credits or SA training benefits, and, again, coincidentally, we had several that were just about to expire, so I put them to good use. The DTDPS sessions were great; and Bryon, Rick, and the rest of the folks at Northwest Cadence have been a pleasure to work with.  We have just purchased a new server for our TFS rollout and are planning the steps and options right now.  This is still a big project ahead of us to not only install and configure TFS, but also to load all of our source code (many different systems, not just one program) and transition to the new way of life with TFS, but I am convinced that it is the right move for my team at this point in time.  We need the new capabilities that are in alignment with Scrum and Kanban methodologies in order to more efficiently manage all the different projects that we have going on at one time. I would still wholeheartedly endorse SourceGear’s products and Axosoft’s OnTime for those whose needs are met by those tools, but for me and my team, I think that TFS is the right fit, and I am looking forward to the change.

    Read the article

  • Good ol fashioned debugging

    - by Tim Dexter
    I have been helping out one of our new customers over the last day or two and I have even managed to get to the bottom of their problem FTW! They use BIEE and BIP and wanted to mount a BIP report in a dashboard page, so far so good, BIP does that! Just follow the instructions in the BIEE user guide. The wrinkle is that they want to enter some fixed instruction strings into the dashboard prompts to help the user. These are added as fixed values to the prompt as the default values so they appear first. Once the user makes a selection, the default strings disappear. Its a fair requirement but the BIP report chokes Now, the BIP report had been setup with the Autorun checkbox, unchecked. I expected the BIP report to wait for the Go button to be hit but it was trying to run immediately and failing. That was the first issue. You can not stop the BIP report from trying to run in a dashboard. Even if the Autorun is turned off, it seems that dashboard still makes the request to BIP to run the report. Rather than BIP refusing because its waiting for input it goes ahead anyway, I guess the mechanism does not check the autorun flag when the request is coming from the dashboard. It appears that between BIEE and BIP, they collectively ignore the autorun flag. A bug? might be, at least an enhancement request. With that in mind, how could we get BIP to not at least not fail? This fact was stumping me on the parameter error, if the autorun flag was being respected then why was BIP complaining about the parameter values it should not even be doing anything until the Go button is clicked. I now knew that the autorun flag was being ignored, it was a simple case of putting BIP into debug mode. I use the OC4J server on my laptop so debug msgs are routed through the dos box used to start the OC4J container. When I changed a value on the dashboard prompt I spotted some debug text rushing by that subsequently disappeared from the log once the operation was complete. Another bug? I needed to catch that text as it went by, using the print screen function with some software to grab multiple screens as the log appeared and then disappeared. The upshot is that when you change the dashboard prompt value, BIP validates the value against its own LOVs, if its not in the list then it throws the error. Because 'Fill this first' and 'Fill this second' ie fixed strings from the dashboard prompts, are not in the LOV lists and because the report is auto running as soon as the dashboard page is brought up, the report complains about invalid parameters. To get around this, I needed to get the strings into the LOVs. Easily done with a UNION clause: select 'Fill this first' from SH.Products Products UNION select Products."Prod Category" as "Prod Category" from SH.Products Products Now when BIP wants to validate the prompt value, the LOV query fires and finds the fixed string -> No Error. No data, but definitely no errors :0) If users do run with the fixed values, you can capture that in the template. If there is no data in the report, either the fixed values were used or the parameters selected resulted in no rows. You can capture this in the template and display something like. 'Either your parameter values resulted in no data or you have not changed the default values' Thats the upside, the downside is that if your users run the report in the BP UI they re going to see the fixed strings. 
    You could alleviate that by having BIP display the fixed strings on top of its parameter drop boxes (just set them as the default values for the parameters), but they will not disappear like they do in the dashboard prompts, see below. If the expected autorun behaviour worked, i.e. waiting for the Go button, then we would not have to work around it, but for now it's a pretty good solution. It was an enjoyable hour or so for me; it took me back to my developer daze, when we used to race each other for the most bug fixes. I used to run a distant 2nd behind 'Bugmeister Chen Hu' but led the chasing pack by a reasonable distance.

    Read the article

  • jQuery Datatable in MVC … extended.

    - by Steve Clements
    There are a million plugins for jQuery and when a web forms developer like myself works in MVC making use of them is par-for-the-course!  MVC is the way now, web forms are but a memory!! Grids / tables are my focus at the moment.  I don’t want to get in to righting reems of css and html, but it’s not acceptable to simply dump a table on the screen, functionality like sorting, paging, fixed header and perhaps filtering are expected behaviour.  What isn’t always required though is the massive functionality like editing etc you get with many grid plugins out there. You potentially spend a long time getting everything hooked together when you just don’t need it. That is where the jQuery DataTable plugin comes in.  It doesn’t have editing “out of the box” (you can add other plugins as you require to achieve such functionality). What it does though is very nicely format a table (and integrate with jQuery UI) without needing to hook up and Async actions etc.  Take a look here… http://www.datatables.net I did in the first instance start looking at the Telerik MVC grid control – I’m a fan of Telerik controls and if you are developing an in-house of open source app you get the MVC stuff for free…nice!  Their grid however is far more than I require.  Note: Using Telerik MVC controls with your own jQuery and jQuery UI does come with some hurdles, mainly to do with the order in which all your jQuery is executing – I won’t cover that here though – mainly because I don’t have a clear answer on the best way to solve it! One nice thing about the dataTable above is how easy it is to extend http://www.datatables.net/examples/plug-ins/plugin_api.html and there are some nifty examples on the site already… I however have a requirement that wasn’t on the site … I need a grid at the bottom of the page that will size automatically to the bottom of the page and be scrollable if required within its own space i.e. everything above the grid didn’t scroll as well.  Now a CSS master may have a great solution to this … I’m not that master and so didn’t! The content above the grid can vary so any kind of fixed positioning is out. So I wrote a little extension for the DataTable, hooked that up to the document.ready event and window.resize event. Initialising my dataTable ( s )… $(document).ready(function () {   var dTable = $(".tdata").dataTable({ "bPaginate": false, "bLengthChange": false, "bFilter": true, "bSort": true, "bInfo": false, "bAutoWidth": true, "sScrollY": "400px" });   My extension to the API to give me the resizing….   // ********************************************************************** // jQuery dataTable API extension to resize grid and adjust column sizes // $.fn.dataTableExt.oApi.fnSetHeightToBottom = function (oSettings) { var id = oSettings.nTable.id; var dt = $("#" + id); var top = dt.position().top; var winHeight = $(document).height(); var remain = (winHeight - top) - 83; dt.parent().attr("style", "overflow-x: auto; overflow-y: auto; height: " + remain + "px;"); this.fnAdjustColumnSizing(); } This is very much is debug mode, so pretty verbose at the moment – I’ll tidy that up later! You can see the last call is a call to an existing method, as the columns are fixed and that normally involves so CSS voodoo, a call to adjust those sizes is required. Just above is the style that the dataTable gives the grid wrapper div, I got that from some firebug action and stick in my new height. The –83 is to give me the space at the bottom i require for fixed footer!   
    Finally I hook that up to the load and window resize. I'm actually using jQuery UI tabs as well, so I've got that in the 'show' event of the tabs. $(document).ready(function () { var oTable; $("#tabs").tabs({ "show": function (event, ui) { oTable = $('div.dataTables_scrollBody>table.tdata', ui.panel).dataTable(); if (oTable.length > 0) { oTable.fnSetHeightToBottom(); } } }); $(window).bind("resize", function () { oTable.fnSetHeightToBottom(); }); }); And that's all there is to it. Testament to the wonders of jQuery and the immense community surrounding it – to which I am extremely grateful. I've also hooked up some custom column filtering on the grid – pretty normal stuff though – you can get what you need for that from their website. I do hide the out-of-the-box filter input as I wanted column-specific filtering; you need filtering turned on when initialising to get it to work, and that input comes with it! Tip: fnFilter is the method you want, with the column index as a param – I used data tags to simplify that one.

    Read the article

  • HR According to Batman

    - by D'Arcy Lussier
    Any idea who that guy is running alongside the Caped Crusader? That’s Nightwing, but you may know him as Robin…well, the first Robin anyway. There were actually like 5 Robin’s according to Wikipedia: Dick Grayson, the original, who’s parents were circus performers killed by a gangster. Jason Todd, who was caught trying to steal tires off of the Batmobile. Tim Drake, who saw Dick’s parents die and figured out who Batman and Robin were. and a few others that get into recent time travel/altered reality storylines. What does this have to do with HR? Well, it somewhat ties in with an article by Alex Papadimoulis from 2008. In the article he talks about the “Cravath System”. The Craveth system was developed by a law firm called Cravath, Swaine & Moore back in the 19th century. In a nutshell, they believed in hiring the best and brightest straight out of school. These aspiring lawyers would then begin a fight for survival in the firm, with the strong surviving. In what’s termed the “Up and Out” rule, employees needed to be promoted within 3 years or leave the company. They should achieve partner within 7 – 8 years and no later than 10 after initially coming on board (read all about the system on Wikipedia here). Back to Alex’s article, he quotes from a book published in 1947 about the lawfirm: Under the “Cravath system” of taking a substantial number of men annually and keeping a current constantly moving up in the office, and its philosophy of tenure, men are constantly leaving… it is often difficult to keep the best men long enough to determine whether they shall be made partners, for Cravath-trained men are always in demand, usually at premium salaries. And so we see a pattern forming here: 1. Hire a whole whack of smart college graduates 2. Put them to work 3. The ones that stick around should move up the ladder. The ones that don’t stick around served the company well and left to expound the quality of the Cravath firm. Those that didn’t fall into either of those categories were just let go. There’s some interesting undercurrents to these ideas. If you stick around, you better keep your feet moving! I was at a Microsoft shindig a few months back, and was talking to a Microsoft employee. He shared that at MS you have 5 years to achieve a “senior” position within the company. Once you hit that mark, you can stay there for the rest of your career (he told about a guy who’s a “senior” developer and has been for the last 20+ years working on audio drivers for Windows), but you *must* hit that mark within the timeframe. What we see with Microsoft is Cravath’s system in action, whether intentional or not: bring in smart young people and see which ones stick. You need to give people something to work towards. Saying “You must reach this level or else!” is one way to look at it. The other way is to see achieving a higher rank in the organization as something for ambitious employees to reach towards. It’s important for an organization to always have the next generation of executives waiting in the wings, and unless you’re encouraging that early on you may find yourself in a position of needing to fill positions that nobody has been working towards. Now, you might suggest that this isn’t that big of a deal because you could just hire someone from outside the organization, but the Cravath system holds to the tenet of promoting internally; develop your own talent, since your business is the best place for the future leadership to learn teh business from. It’s OK for people to quit. 
Alex’s article really drives this point home, but its worth noting here also: its OK for your people to quit. In fact its inevitable…and more inevitable that it’ll be good people that leave. Some will stay and work towards the internal awards of promotion, but a number will get experience, serve the organization well, and then move on to something else. This should be expected and treated as a natural business occurrence. The idea of an alumni of an organization begins to come into play here: “That guy used to work for <insert company here>”. There’s a benefit in that: those best and brightest will be drawn to your organization and your reputation will permeate your market through former staff that are sought after because of how well you nurtured them. The Batman Hook All of this brings us back to Batman and his HR practice: when Dick decided he’d had enough of the Robin schtick, he quit and became his own…but he was always associated with Batman and people understood where his training had come from. To the Dark Knight’s credit, he continued training partners under the Robin brand. Luckily he didn’t have to worry about firing any of them (the ship sort of sails when you reveal a secret identity), although there was that unfortunate “quitting” of the second Robin when the Joker blew him up…but regardless, we see the Cravath system at work: bring in talent, expect great things, and be ok with whatever they decide for their careers. It’s an interesting way to approach HR, and luckily for us our business isn’t as dangerous or over-the-top as the caped crusader’s.

    Read the article

  • Download Microsoft MSDN Magazine PDF Issues For Offline Reading

    - by Kavitha
    MSDN Magazine is a must read for every Microsoft developer. It provides in-depth analysis and excellent guides on all the latest Microsoft development tools and technologies. Every month one can grab this magazine on the stands or read it online for free. What if you want to read the magazine offline on your PC or mobile devices? Just grab a PDF version of the magazine and read it whenever you want. The PDF version of MSDN magazines are very handy for travellers who don’t get access to internet always. In this post we are going to provide you links to download PDF version, source code and online version of every month MSDN Magazine issue starting from 2010. Bookmark this post and keep checking it monthly to get access to MSDN Magazine links. December 2010 Issue    Download PDF(not yet available)    Download Source Code    Read Magazine Online        November 2010 Issue    Download PDF (not yet available)    Read Magazine Online    Download Source Code       October 2010 Issue    Download PDF    Download Source Code    Read Magazine Online        September 2010 Issue    Download PDF    Download Source Code    Read Magazine Online       August 2010 Issue    Download PDF    Download Source Code    Read Magazine Online       July 2010 Issue    Download PDF    Download Source Code    Read Magazine Online       June 2010 Issue    Download PDF    Download Source Code    Read Magazine Online       May 2010 Issue      Download PDF    Download Source Code    Read Magazine Online       April 2010 Issue    Download PDF    Read Magazine Online    Download Source Code       March 2010 Issue    Download PDF    Download Source Code    Read Magazine Online       February 2010 Issue    Download PDF    Download Source Code    Read Magazine Online       January 2010 Issue    Download PDF    Download Source Code    Read Magazine Online This article titled,Download Microsoft MSDN Magazine PDF Issues For Offline Reading, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • 10 Great Free Icon Packs To Theme Your Android Phone

    - by Chris Hoffman
    Android allows you to customize your home screen, adding widgets, arranging shortcuts and folders, choosing a background, and even replacing the included launcher entirely. You can install icon packs to theme your app icons, too. Third-party launchers use standard app icons by default, but they don’t have to. You can install icon packs that third-party launchers will use in place of standard app icons. How to Use Icon Packs To use icon packs, you’ll need to use a third-party launcher that supports them, such as Nova, Apex, ADW, Go Launcher, Holo Launcher, or Action Launcher Pro. Once you’re using a third-party launcher, you can install an icon pack and go into your launcher’s settings. You’ll find an option that allows you to choose between the icon packs you’ve installed. Many of these icon packs also include wallpapers, which you can set in the normal way. MIUI 5 Icons This icon pack offers over 1900 free icons that are similar to the icons used by the MIUi ROM developed by China’s Xiaomi Tech. The large list of icons is a big plus — this pack will give the majority of your app icons a very slick, consistent look. DCikonZ Theme DCikonZ is a free icon theme that includes a whopping 4000+ icons with a consistent look. This icon theme stands out not just because it’s huge, but also for offering for going in its own direction and avoiding the super-simple, flat look many icon packs use. Holo Icons Holo Icons replaces many app icons with simple, consistent-looking that match Google’s Holo style. If you’re a fan of Android’s Holo look, give it a try. It even tweaks many of the icons from Google’s own apps to make them look more consistent. Square Icon Pack Square Icon Pack turns your icons into simple squares. Even Google Chrome becomes an orb instead of a square. This makes every icon a consistent size and offers a unique look. The icons here almost look a bit like the small-size tiles available on Windows Phone and Windows 8.1. The free version doesn’t offer as many icons as the paid version, but it does offer icons for many popular apps. Rounded Want rounded icons instead? Try the Rounded icon theme, which offers simple rounded icons. The developer says they’re inspired by the consistently round icons used on Mozilla’s Firefox OS. Crumbled Icon Pack Crumbled Icon Pack applies an effect that makes icons look as if they’r crumbling. Rather than theming individual icons, Crumbled Icon Pack adds an effect to every app icon on your device. This means that all your app icons will be themed and consistent. Dainty Icon Pack Is your Android home screen too colorful? Dainty Icon Pack offers simple, gray-on-white icons for over 1200 apps. It’d be ideal over a simple background. The contrast may be a bit low here with the gray-on-white, but it’s otherwise very slick. Simplex Icons Simplex Icons offers more contrast, with black-on-gray icons. This icon pack could simplify busy home screens, allowing photographic wallpapers to come through. Min Icon Set Min attempts to go as minimal as possible, offering simple white icons for over 570 apps. It would be ideal over a simple wallpaper with app names hidden in your launcher, offering a calming, minimal home screen. For apps it doesn’t recognize, it will enclose part of the app’s icon in a white circle. Elegance Elegance goes in another direction entirely, offering icons that incorporate more details and gradients rather than going for minimalism. Its over 1200 icons offer another good option for people who aren’t into the minimal, flat look. 
Icon pack designers generally have to create and include their own icons to replace icons associated with specific apps, so you’ll probably find a few of your app icons aren’t replaced with most of these themes. Of course, a standard Android phone without an icon pack doesn’t have consistent icons, either. Even if all the icons in your app drawer aren’t themed, the few app icons you have on your home screen will be if you use widely used apps.     

    Read the article

  • NoSQL Memcached API for MySQL: Latest Updates

    - by Mat Keep
    With data volumes exploding, it is vital to be able to ingest and query data at high speed. For this reason, MySQL has implemented NoSQL interfaces directly to the InnoDB and MySQL Cluster (NDB) storage engines, which bypass the SQL layer completely. Without SQL parsing and optimization, Key-Value data can be written directly to MySQL tables up to 9x faster, while maintaining ACID guarantees. In addition, users can continue to run complex queries with SQL across the same data set, providing real-time analytics to the business or anonymizing sensitive data before loading to big data platforms such as Hadoop, while still maintaining all of the advantages of their existing relational database infrastructure. This and more is discussed in the latest Guide to MySQL and NoSQL where you can learn more about using the APIs to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database The native Memcached API is part of the MySQL 5.6 Release Candidate, and is already available in the GA release of MySQL Cluster. By using the ubiquitous Memcached API for writing and reading data, developers can preserve their investments in Memcached infrastructure by re-using existing Memcached clients, while also eliminating the need for application changes. Speed, when combined with flexibility, is essential in the world of growing data volumes and variability. Complementing NoSQL access, support for on-line DDL (Data Definition Language) operations in MySQL 5.6 and MySQL Cluster enables DevOps teams to dynamically update their database schema to accommodate rapidly changing requirements, such as the need to capture additional data generated by their applications. These changes can be made without database downtime. Using the Memcached interface, developers do not need to define a schema at all when using MySQL Cluster. Lets look a little more closely at the Memcached implementations for both InnoDB and MySQL Cluster. Memcached Implementation for InnoDB The Memcached API for InnoDB is previewed as part of the MySQL 5.6 Release Candidate. As illustrated in the following figure, Memcached for InnoDB is implemented via a Memcached daemon plug-in to the mysqld process, with the Memcached protocol mapped to the native InnoDB API. Figure 1: Memcached API Implementation for InnoDB With the Memcached daemon running in the same process space, users get very low latency access to their data while also leveraging the scalability enhancements delivered with InnoDB and a simple deployment and management model. Multiple web / application servers can remotely access the Memcached / InnoDB server to get direct access to a shared data set. With simultaneous SQL access, users can maintain all the advanced functionality offered by InnoDB including support for Foreign Keys, XA transactions and complex JOIN operations. Benchmarks demonstrate that the NoSQL Memcached API for InnoDB delivers up to 9x higher performance than the SQL interface when inserting new key/value pairs, with a single low-end commodity server supporting nearly 70,000 Transactions per Second. Figure 2: Over 9x Faster INSERT Operations The delivered performance demonstrates MySQL with the native Memcached NoSQL interface is well suited for high-speed inserts with the added assurance of transactional guarantees. 
You can check out the latest Memcached / InnoDB developments and benchmarks here You can learn how to configure the Memcached API for InnoDB here Memcached Implementation for MySQL Cluster Memcached API support for MySQL Cluster was introduced with General Availability (GA) of the 7.2 release, and joins an extensive range of NoSQL interfaces that are already available for MySQL Cluster Like Memcached, MySQL Cluster provides a distributed hash table with in-memory performance. MySQL Cluster extends Memcached functionality by adding support for write-intensive workloads, a full relational model with ACID compliance (including persistence), rich query support, auto-sharding and 99.999% availability, with extensive management and monitoring capabilities. All writes are committed directly to MySQL Cluster, eliminating cache invalidation and the overhead of data consistency checking to ensure complete synchronization between the database and cache. Figure 3: Memcached API Implementation with MySQL Cluster Implementation is simple: 1. The application sends reads and writes to the Memcached process (using the standard Memcached API). 2. This invokes the Memcached Driver for NDB (which is part of the same process) 3. The NDB API is called, providing for very quick access to the data held in MySQL Cluster’s data nodes. The solution has been designed to be very flexible, allowing the application architect to find a configuration that best fits their needs. It is possible to co-locate the Memcached API in either the data nodes or application nodes, or alternatively within a dedicated Memcached layer. The benefit of this flexible approach to deployment is that users can configure behavior on a per-key-prefix basis (through tables in MySQL Cluster) and the application doesn’t have to care – it just uses the Memcached API and relies on the software to store data in the right place(s) and to keep everything synchronized. Using Memcached for Schema-less Data By default, every Key / Value is written to the same table with each Key / Value pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table. Of course if the application needs to access the same data through SQL then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster. Conclusion Download the Guide to MySQL and NoSQL to learn more about NoSQL APIs and how you can use them to scale new generations of web, cloud, mobile and social applications on the world's most widely deployed open source database See how to build a social app with MySQL Cluster and the Memcached API from our on-demand webinar or take a look at the docs Don't hesitate to use the comments section below for any questions you may have 
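    To make the "re-use existing Memcached clients" point concrete, here is a rough C# sketch. The Enyim Memcached client, the server address and the key naming are my assumptions, not part of the MySQL documentation above; the same pattern applies to whichever Memcached client you already use, pointed at the Memcached port exposed by the InnoDB plug-in or by MySQL Cluster:

      using System;
      using Enyim.Caching;                  // assumed client library; any Memcached client will do
      using Enyim.Caching.Configuration;
      using Enyim.Caching.Memcached;

      class MemcachedToMySql
      {
          static void Main()
          {
              var config = new MemcachedClientConfiguration();
              // Point the client at the Memcached daemon fronting InnoDB or MySQL Cluster.
              config.AddServer("127.0.0.1", 11211);

              using (var client = new MemcachedClient(config))
              {
                  // The write goes straight through to the underlying table, keeping ACID guarantees.
                  client.Store(StoreMode.Set, "user:42", "jsmith@example.com");

                  // The same row stays queryable with SQL while being readable as a key/value pair.
                  Console.WriteLine(client.Get<string>("user:42"));
              }
          }
      }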

    Read the article

  • ASP.NET Web API - Screencast series Part 3: Delete and Update

    - by Jon Galloway
    We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools. In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods. In Part 3, we'll start to modify data on the server using DELETE and POST methods. So far we've been looking at GET requests, and the difference between standard browsing in a web browser and navigating an HTTP API isn't quite as clear. Delete is where the difference becomes more obvious. With a "traditional" web page, to delete something'd probably have a form that POSTs a request back to a controller that needs to know that it's really supposed to be deleting something even though POST was really designed to create things, so it does the work and then returns some HTML back to the client that says whether or not the delete succeeded. There's a good amount of plumbing involved in communicating between client and server. That gets a lot easier when we just work with the standard HTTP DELETE verb. Here's how the server side code works: public Comment DeleteComment(int id) { Comment comment; if (!repository.TryGet(id, out comment)) throw new HttpResponseException(HttpStatusCode.NotFound); repository.Delete(id); return comment; } If you look back at the GET /api/comments code in Part 2, you'll see that they start the exact same because the use cases are kind of similar - we're looking up an item by id and either displaying it or deleting it. So the only difference is that this method deletes the comment once it finds it. We don't need to do anything special to handle cases where the id isn't found, as the same HTTP 404 handling works fine here, too. Pretty much all "traditional" browsing uses just two HTTP verbs: GET and POST, so you might not be all that used to DELETE requests and think they're hard. Not so! Here's the jQuery method that calls the /api/comments with the DELETE verb: $(function() { $("a.delete").live('click', function () { var id = $(this).data('comment-id'); $.ajax({ url: "/api/comments/" + id, type: 'DELETE', cache: false, statusCode: { 200: function(data) { viewModel.comments.remove( function(comment) { return comment.ID == data.ID; } ); } } }); return false; }); }); So in order to use the DELETE verb instead of GET, we're just using $.ajax() and setting the type to DELETE. Not hard. But what's that statusCode business? Well, an HTTP status code of 200 is an OK response. Unless our Web API method sets another status (such as by throwing the Not Found exception we saw earlier), the default response status code is HTTP 200 - OK. That makes the jQuery code pretty simple - it calls the Delete action, and if it gets back an HTTP 200, the server-side delete was successful so the comment can be deleted. Adding a new comment uses the POST verb. 
    It starts out looking like an MVC controller action, using model binding to get the new comment from JSON data into a C# model object to add to the repository, but there are some interesting differences. public HttpResponseMessage<Comment> PostComment(Comment comment) { comment = repository.Add(comment); var response = new HttpResponseMessage<Comment>(comment, HttpStatusCode.Created); response.Headers.Location = new Uri(Request.RequestUri, "/api/comments/" + comment.ID.ToString()); return response; } First off, the POST method is returning an HttpResponseMessage<Comment>. In the GET methods earlier, we were just returning a JSON payload with an HTTP 200 OK, so we could just return the model object and Web API would wrap it up in an HttpResponseMessage with that HTTP 200 for us (much as ASP.NET MVC controller actions can return strings, and they'll be automatically wrapped in a ContentResult). When we're creating a new comment, though, we want to follow standard REST practices and return the URL that points to the newly created comment in the Location header, and we can do that by explicitly creating that HttpResponseMessage and then setting the header information. And here's a key point - by using HTTP standard status codes and headers, our response payload doesn't need to explain any context - the client can see from the status code that the POST succeeded, the Location header tells it where to get the new comment, and all it needs in the JSON payload is the actual content. Note: This is a simplified sample. Among other things, you'll need to consider security and authorization in your Web APIs, especially in methods that allow creating or deleting data. We'll look at authorization in Part 6. As for security, you'll want to consider things like mass assignment if binding directly to model objects, etc. In Part 4, we'll extend our simple querying methods from Part 2, adding in support for paging and querying.
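    For the update side, a PUT action follows the same shape as the methods above. Here is a minimal sketch in the same style; note that the repository's Update method is an assumption on my part (the snippets above only show Add, TryGet and Delete), so substitute whatever your repository exposes:

      public Comment PutComment(int id, Comment comment)
      {
          Comment existing;
          if (!repository.TryGet(id, out existing))
              throw new HttpResponseException(HttpStatusCode.NotFound);

          comment.ID = id;            // the id from the route wins over anything in the body
          repository.Update(comment); // assumed repository method - not shown in the snippets above
          return comment;             // serialized back as JSON with an HTTP 200 OK
      }

    On the client, the jQuery call is the same shape as the DELETE example, just with type: 'PUT' and the updated comment passed as the request data.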

    Read the article

  • My History with Agile

    - by Robert May
    I'm going to write my history with Agile here.  That way, in future posts, I can refer back to it, instead of typing it out in the post that contains information you may actually want to read.  Note that I'm actually a pretty senior developer, and do lots of technical interviews.  I'm an Agile fan because of the difference it makes in people's lives and the improvement in quality it brings, and I'll sacrifice my own technological advancement to help teams.

Management History

I started management pretty early in my career, starting with the first job that I ever had.  I actually do NOT have a CS or similar degree.  I have a Bachelor of Business Administration with an emphasis in Computer Information Systems. My first management gigs were around call center work and were very schedule oriented.  I didn't understand the true value of teams, and I'm ashamed to admit that I actually installed a fingerprint scanner as a time clock in this job.  I shudder to think of the impact that I had on team spirit.  I didn't even trust them enough to fill out their time cards correctly.  How sad. I was managing nearly 100 people in this position, with the help of a great set of subordinates. I did try to come up with reward programs for the team, but again, I didn't understand the concept of a team, so instead of letting the team determine how the rewards should work, I mandated from on high, which isn't a good thing. I was told by people whom I respected a lot that I wasn't the type who would be a good manager.  They said it because I was a computer geek (they didn't understand good management either), but in retrospect, they were right about me then.  I was too green. After my first job, I went on to other jobs, and with the exception of one, I've managed people at them all.  The rest of the management story is important for understanding Agile, so I'll save it for my next post.

Technical History

I've been in software development for many, many years.  I technically started programming on a Commodore 64 in BASIC.  I didn't know that I was programming, but I was sure having fun.  That was followed by batch files, Gorilla hacking (I always had to win), WordPerfect macro programming and other things that taught me the basics. My first "real" job was with a telephone company, and that's where I made my first database application in DataEase, wrote my first VBA app and started using real programming tools, like Turbo Pascal and VB3 through VB5, and semi-real tools like RPG and VisualRPG.  I wrote my first web page in 1994, and built my first data-driven web page in 1995 using perlDB.  You really can do anything with Perl.  At this time, I also started a Linux-based internet service provider that is still in operation today.  One of the people I worked with is now a Microsoft employee building and designing frameworks you probably know well.  Smart guy.  I also built my first ASP applications connecting to SQL Server 6.5, set up Exchange 5.5 for the company, and handled plenty of other system administration work.  I'm a programmer by choice, mostly because I don't really like PC support. From there, I went on to a large state agency.  I got to see and maintain true waterfall projects.  Five years of maintaining the 200 VB COM+ (MTS, actually) DLLs that were used to calculate a single number is a long time.  That was all Microsoft DNA technologies.  SQL Server and VB6 were the tools of choice, although .NET started to be a factor near the end of my employment there.
I did some heavy XML work at this job and even wrote an XSD parser and validator in VB6 as a shim until MSXML 3.0 came out.  Prior to 3.0, XSDs weren't supported, and I didn't want to write DTDs. Ironically, jobs after this were more generic.  I pretty much settled in on the .NET Framework and its revisions.  Lots of WPF, some Silverlight, lots of ASP.NET, some SQL Azure, lots of SQL Server, some Oracle, but I don't think that I was as passionate about development and technologies.  I was more into the management of development.  I like people. Technorati Tags: Agile, history

    Read the article

  • Silverlight 5 – What's New? (Including Screenshots & Code Snippets)

    - by mbcrump
    Silverlight 5 is coming next year (2011) and this blog post will tell you what you need to know before the beta ships. First, let me address the people saying that it is dead after PDC 2010. I believe it's best to see what the market is doing, not the vendor. Below is a list of companies that are developing Silverlight 4 applications, shown during the Silverlight Firestarter. Some of the companies have shipped and some haven't. It's just great to see the actual company names that are working on Silverlight instead of "people are developing for Silverlight". The next thing that I wanted to point out is that HTML5, WPF and Silverlight can co-exist. In case you missed Scott Guthrie's keynote, they actually had a slide with all three stacked together. This shows Microsoft will be investing heavily in each technology. Even I, a Silverlight developer, am reading Pro HTML5. Microsoft said that according to the Silverlight Feature Voting site, 21k votes were entered. Microsoft has implemented about 70% of these votes in Silverlight 5. That is an amazing number, and I am crossing my fingers that Microsoft bundles Silverlight with Windows 8. Let's get started… what's new in Silverlight 5? I am going to show you some great applications and actual code shown during the Firestarter event.

Media

Hardware Video Decode – Instead of using the CPU to decode, we will offload it to the GPU. This will allow netbooks, etc., to play videos.
Trickplay – Variable speed playback with pitch correction (if you speed up someone talking, they won't sound like a chipmunk).
Power Management – Less battery drain when playing video. Screensavers will no longer kick in while a video is playing; if you pause a video, the screensaver will kick in.
Remote Control Support – This will allow users to control playback functions like Pause, Rewind and Fast Forward.
IIS Media Services 4 has shipped and now supports Azure.

Data Binding

Layout Transitions – With just a few lines of XAML you can create a really rich experience without using Storyboards or animations.
RelativeSource FindAncestor – Ancestor RelativeSource bindings make it much easier for a DataTemplate to bind to a property on a container control.
Custom Markup Extensions – Markup extensions allow code to be run at XAML parse time for both properties and event handlers. This is great for MVVM support (a small illustrative sketch follows this summary).
Changing Styles at Runtime by Binding in Style Setters – Changing styles at runtime used to be a real pain in Silverlight 4; now it's much easier. Binding in style setters allows bindings to reference other properties.
XAML Debugging – Below you can see that we set a breakpoint in XAML. This shows us exactly what is going on with our binding.

WCF & RIA Services

WS-Trust Support – Taken from Wikipedia: WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange.
You can reduce network latency by using a background thread for networking.
Supports Azure now.

Text and Printing

Improved text clarity that enables better text rendering.
Multi-column text flow, character tracking and leading support, and full OpenType font support.
Includes a new PostScript vector printing API that provides control over what you print.
Pivot functionality baked into the Silverlight 5 SDK.

Graphics

Immediate mode graphics support that will enable you to use the GPU and 3D graphics support. Take a look at what was shown in the demos below. 1) A 3D view of the Earth – not really a real-world application, though. 2) A doctor's portal – this demo really stood out for me, as it shows what we can do with the 3D / GPU support.

Out of Browser

OOB applications can now create and manage child windows, as shown in the screenshot below.
Trusted OOB applications can use P/Invoke to call Win32 APIs and unmanaged libraries.
Enterprise Group Policy Support allows enterprises to lock down (or open up) the sandbox capabilities of Silverlight 5 applications.
In this demo, he tore the "notes" off of the application and it appeared in a new window. See the black arrow below.
In this demo, he connected a USB device, which fired off a local Win32 application that provided the data from the USB stick to Silverlight.
Another demo showed a Silverlight 5 application exporting data right into Excel while running inside the browser.

Testing

They demoed Coded UI, which is available now in the Visual Studio Feature Pack 2. This will allow you to create automated UI tests without writing any code manually.

Performance

Microsoft has worked to improve Silverlight startup time.
Silverlight 5 provides 64-bit browser support.
Silverlight 5 also provides IE9 hardware acceleration.

I am looking forward to Silverlight 5 and I hope you are too. Thanks for reading and I hope you visit again soon.
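As a rough illustration of the Custom Markup Extensions item above, here is a small sketch of what one might look like. This is my own example, not code from the Firestarter demos, and it assumes the Silverlight 5 shape of the feature (an IMarkupExtension<object> interface in the System.Xaml namespace whose ProvideValue method is called at XAML parse time); the Upper extension and its behavior are hypothetical.

using System;
using System.Xaml;

// A hypothetical markup extension that upper-cases a literal at XAML parse time.
// In XAML (with an xmlns mapping to this namespace), usage might look like:
//   <TextBlock Text="{local:Upper Text=hello}" />
public class Upper : IMarkupExtension<object>
{
    // The literal to transform; set from the XAML attribute.
    public string Text { get; set; }

    // Called by the XAML parser when the property value is needed.
    public object ProvideValue(IServiceProvider serviceProvider)
    {
        return (Text ?? string.Empty).ToUpperInvariant();
    }
}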

    Read the article

  • .NET vs Windows 8

    - by Simon Cooper
    So, day 1 of DevWeek. Lots and lots of Windows 8 and WinRT, as you would expect. The keynote had some actual content in it, fleshed out some of the details of how your apps link into the Metro infrastructure, and confirmed that there would indeed be an enterprise version of the app store available for Metro apps. However, that's not what I want to focus this post on. What I do want to focus on is this: Windows 8 does not make .NET developers obsolete. Phew!

.NET in the New Ecosystem

In all the hype around Windows 8 over the past few months, a lot of developers have got the impression that .NET has been sidelined in Windows 8; C++ and COM are back in vogue, and HTML5 + JavaScript is the New Way of writing applications. You know .NET? It's yesterday's tech. Enter the 21st Century and write <div>! However, after speaking to people at the conference, and after a couple of talks by Dave Wheeler on the innards of WinRT and how .NET interacts with it, my views on the coming operating system have changed somewhat. To summarize what I've picked up, in no particular order (none of this is official, just my sense of what's been said by various people):

- Metro apps do not replace desktop apps. That is, Windows 8 fully supports .NET desktop applications written for every previous version of Windows, and will continue to do so for the foreseeable future. There are some apps that simply do not fit into Metro. They do not fit into the touch-based paradigm, and never will. Traditional desktop support is not going away anytime soon.
- The reason Silverlight has been hidden in all the Metro hype is that Metro is essentially based on Silverlight design principles. Silverlight developers will have a much easier time writing Metro apps than desktop developers, as they are already used to the principles of sandboxing and separation introduced with Silverlight. It's desktop developers who are going to have to adapt how they work.
- .NET + XAML is equal to HTML5 + JS in importance. Although the underlying WinRT system is built on C++ & COM, most application development will be done either using .NET or HTML5. Both systems have their own wrapper around the underlying WinRT infrastructure, hiding the implementation details.
- The CLR is unchanged; it's still the .NET 4 CLR, running IL in .NET assemblies. The thing that changes between desktop and Metro is the class libraries, which have more in common with the Silverlight libraries than the desktop libraries. In Metro, although all the types look and behave the same to callers, some of the core BCL types are now wrappers around their WinRT equivalents. These wrappers are then enhanced using standard .NET types and code to produce the Metro .NET class libraries.
- You can't simply port a desktop app into Metro. The underlying file IO, network, timing and database access are either completely different or simply missing. Similarly, although the UI is programmed using XAML, the behaviour of the Metro XAML is different from WPF or Silverlight XAML. Furthermore, the new design principles and touch-based interface for Metro applications demand a completely new UI. You will be able to re-use sections of your app encapsulating pure program logic, but everything else will need to be written from scratch.
- Microsoft has taken the opportunity to remove a whole raft of types and methods from the Metro framework that are obsolete (non-generic collections) or break the sandbox (synchronous APIs); if you use these, you will have to rewrite to use the alternatives, if they exist at all, to move your apps to Metro.
- If you want to write public WinRT components in .NET, there are some quite strict rules you have to adhere to. But the compilers know about these rules; you can write them in C# or VB, and the compilers will tell you when you do something that isn't allowed and deal with the translation to WinRT metadata rather than .NET assemblies.
- It is possible to write a class library that can be used in Metro and desktop applications. However, you need to be very careful not to use types that are available in one but not the other. One can imagine developers writing their own abstraction around file IO and UIs (MVVM, anyone?) that can be implemented differently in Metro and desktop, but look the same within your shared library (a small sketch of that idea follows this summary).

So, if you're a .NET developer, you have a lot less to worry about. .NET is a viable platform on Metro, and traditional desktop apps are not going away. You don't have to learn HTML5 and JavaScript if you don't want to. Hurray!
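To make that last point concrete, here is a minimal, hypothetical sketch of the kind of shared abstraction the summary describes. The names (ITextStorage, DesktopTextStorage) and the exact shape of the interface are my own illustration, not anything from the talks; the idea is simply that shared code depends only on the interface while each platform ships its own implementation.

using System.IO;
using System.Threading.Tasks;

// Shared library: application and view-model code only ever sees this interface.
public interface ITextStorage
{
    Task<string> ReadAllTextAsync(string name);
    Task WriteAllTextAsync(string name, string content);
}

// Desktop implementation, free to use the full .NET file APIs.
public class DesktopTextStorage : ITextStorage
{
    private readonly string root;

    public DesktopTextStorage(string root)
    {
        this.root = root;
    }

    public Task<string> ReadAllTextAsync(string name)
    {
        return Task.Run(() => File.ReadAllText(Path.Combine(root, name)));
    }

    public Task WriteAllTextAsync(string name, string content)
    {
        return Task.Run(() => File.WriteAllText(Path.Combine(root, name), content));
    }
}

// A Metro implementation would expose the same interface on top of
// Windows.Storage.StorageFile, so the shared library never notices the difference.

Shared view models would take an ITextStorage through their constructor, and each application's startup code would supply the platform-specific implementation.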

    Read the article

< Previous Page | 352 353 354 355 356 357 358 359 360 361 362 363  | Next Page >