Search Results

Search found 7324 results on 293 pages for 'operations research'.


  • VB.NET IF() Coalesce and “Expression Expected” Error

    - by Jeff Widmer
    I am trying to use the equivalent of the C# “??” operator in some VB.NET code that I am working in. This StackOverflow article for “Is there a VB.NET equivalent for C#'s ?? operator?” explains the VB.NET IF() statement syntax which is exactly what I am looking for... and I thought I was going to be done pretty quickly and could move on. But after implementing the IF() statement in my code I started to receive this error: Compiler Error Message: BC30201: Expression expected. And no matter how I tried using the “IF()” statement, whenever I tried to visit the aspx page that I was working on I received the same error. This other StackOverflow article Using VB.NET If vs. IIf in binding/rendering expression indicated that the VB.NET IF() operator was not available until VS2008 or .NET Framework 3.5.  So I checked the Web Application project properties but it was targeting the .NET Framework 3.5: So I was still not understanding what was going on, but then I noticed the version information in the detailed compiler output of the error page: This happened to be a C# project, but with an ASPX page with inline VB.NET code (yes, it is strange to have that but that is the project I am working on).  So even though the project file was targeting the .NET Framework 3.5, the ASPX page was being compiled using the .NET Framework 2.0.  But why?  Where does this get set?  How does ASP.NET know which version of the compiler to use for the inline code? For this I turned to the web.config.  Here is the system.codedom/compilers section that was in the web.config for this project: <system.codedom>     <compilers>         <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">             <providerOption name="CompilerVersion" value="v3.5" />             <providerOption name="WarnAsError" value="false" />         </compiler>     </compilers> </system.codedom> Keep in mind that this is a C# web application project file but my aspx file has inline VB.NET code.  The web.config does not have any information for how to compile for VB.NET so it defaults to .NET 2.0 (instead of 3.5 which is what I need). So the web.config needed to include the VB.NET compiler option.  Here it is with both the C# and VB.NET options (I copied the VB.NET config from a new VB.NET Web Application project file).     <system.codedom>         <compilers>             <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">                 <providerOption name="CompilerVersion" value="v3.5" />                 <providerOption name="WarnAsError" value="false" />             </compiler>       <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" warningLevel="4" type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">         <providerOption name="CompilerVersion" value="v3.5"/>         <providerOption name="OptionInfer" value="true"/>         <providerOption name="WarnAsError" value="false"/>       </compiler>     </compilers>     </system.codedom>   So the inline VB.NET code on my aspx page was being compiled using the .NET Framework 2.0 when it really needed to be compiled with the .NET Framework 3.5 compiler in order to take advantage of the VB.NET IF() coalesce statement.  
Without the VB.NET web.config compiler option, the default is to compile using the .NET Framework 2.0, and the VB.NET IF() coalesce statement does not exist in that compiler (at least not in the form that I want).  FYI, there is an older IF statement in the VB.NET 2.0 compiler, which is why it gives the unusual “Expression Expected” error message – see this article for when VB.NET got the new, updated version. EDIT (2011-06-20): I had made a wrong assumption in the first version of this blog post.  After a little more research and investigation I was able to figure out that the issue was in the web.config and not with the IIS App Pool.  Thanks to the comment from James, which forced me to look into this again.
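    For a quick reference, here is a minimal sketch of the coalescing pattern being ported, shown in C# with the VB.NET counterpart in comments (the variable names are illustrative, not from the original page):

        // Minimal sketch of the coalescing pattern discussed above; names are illustrative.
        using System;

        class CoalesceSketch
        {
            static void Main()
            {
                string userName = null;
                string displayName = userName ?? "Anonymous";   // C# null-coalescing operator
                // VB.NET 3.5 equivalent (requires the v3.5 compiler configured above):
                //   Dim displayName As String = If(userName, "Anonymous")
                // Under the v2.0 compiler, If(...) is parsed as the start of an If statement,
                // which is what produces the "BC30201: Expression expected" error.
                Console.WriteLine(displayName);
            }
        }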


  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason.  For some integration tests I want to write, I want to control the types of instances which are used inside the service layer, but I want that control from the test class instance.  One of the problems with just referencing the service is that a lot of the time this will, by default, be done inside a different process.  I am using StructureMap as my DI of choice, and one of the tools which I am using alongside RhinoMocks is StructureMap.AutoMocking.  With StructureMap the main entry point is the ObjectFactory.  This is process specific, so if I decide that I want a certain instance of a type to be used inside the ServiceLayer, I cannot configure the ObjectFactory from my test class as that will only apply to the process which it belongs to. This is where I started thinking about two things: running a WCF service in process, and being able to share mocked instances across processes. A colleague at work pointed me to a project which addresses the latter, but I thought that it would be a better solution if I could run the WCF Service in process.  One of the projects which I use when I think about WCF Services is AGATHA, and it is the one which I have used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy, and if you have not heard of it or read it I would definitely recommend it.  One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view. <system.serviceModel> <services> <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor"> <endpoint address ="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </service> </services> <client> <endpoint name="MyEndpoint" address="net.pipe://localhost/MyPipe" binding="netNamedPipeBinding" contract="Agatha.Common.WCF.IWcfRequestProcessor"/> </client> </system.serviceModel>   You can see here that I am referencing the Agatha object and contract, but also that my binding and address use something called named pipes.  This is sort of the “magic” which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy, which I also need.  My initial attempt at the proxy did not use any Agatha-specific coding, and one of the pains I found was that you obviously need to give your proxy the known types which the serializer can be aware of.  So we need to add to the known types of the proxy programmatically.  I came across the following blog post which showed me how easy it is: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx. First Pass So with this in mind, and inside a console app, this was my first pass at consuming a service in process.  First, here is the proxy which I made making use of the Agatha IWcfRequestProcessor contract.
public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor { public InProcProxy() { } public InProcProxy(string configurationName) : base(configurationName) { } public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests) { return Channel.Process(requests); } public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests) { Channel.ProcessOneWayRequests(requests); } } So with the proxy in place I could then use this after opening the service so here is the code which I use inside the console app make the request. static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new InProcProxy()) { foreach (var operation in proxy.Endpoint.Contract.Operations) { foreach (var t in KnownTypeProvider.GetKnownTypes(null)) { operation.KnownTypes.Add(t); } } var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); } So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy.  My Request handler for this was just a test one which always returned 2 products. public class GetProductsHandler : RequestHandler<GetProductsRequest,GetProductsResponse> { public override Agatha.Common.Response Handle(GetProductsRequest request) { return new GetProductsResponse { Products = new List<ProductDto> { new ProductDto{}, new ProductDto{} } }; } } Second Pass Now after I did this I started reading up some more on some resources including more by Davy Brion and others on Agatha.  Now it turns out that the work I did above to create a derived class of the ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary due to a nice class which is present inside the Agatha code base, RequestProcessorProxy which takes care of this for you! :-) So disregarding that class I made for the proxy and changing my code to use it I am now left with the following: static void Main(string[] args) { ComponentRegistration.Register(); ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor)); serviceHost.Open(); Console.WriteLine("Service is running...."); using (var proxy = new RequestProcessorProxy()) { var request = new GetProductsRequest(); var responses = proxy.Process(new[] { request }); var response = (GetProductsResponse)responses[0]; Console.WriteLine("{0} Products have been retrieved", response.Products.Count); } serviceHost.Close(); Console.WriteLine("Finished"); Console.ReadLine(); }   Cheers for now, Andy References Agatha WCF InProcess Without WCF StructureMap.AutoMocking Cross Process Mocking Agatha Programming WCF Services by Juval Lowy
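    As a footnote to the post above: stripped of the Agatha-specific pieces, the in-process trick boils down to hosting and consuming a WCF service over a named pipe inside a single process. The contract and service below are hypothetical stand-ins (they are not part of Agatha); the ServiceHost / NetNamedPipeBinding / ChannelFactory combination mirrors the configuration shown earlier:

        // Sketch: hosting and calling a WCF service over named pipes in one process.
        // IEchoService/EchoService are illustrative stand-ins, not Agatha types.
        using System;
        using System.ServiceModel;

        [ServiceContract]
        public interface IEchoService
        {
            [OperationContract]
            string Echo(string text);
        }

        public class EchoService : IEchoService
        {
            public string Echo(string text) { return "Echo: " + text; }
        }

        class Program
        {
            static void Main()
            {
                // Host the service over a named pipe in the current process.
                var host = new ServiceHost(typeof(EchoService), new Uri("net.pipe://localhost"));
                host.AddServiceEndpoint(typeof(IEchoService), new NetNamedPipeBinding(), "EchoPipe");
                host.Open();

                // Call it from the same process through a channel.
                var factory = new ChannelFactory<IEchoService>(
                    new NetNamedPipeBinding(),
                    new EndpointAddress("net.pipe://localhost/EchoPipe"));
                IEchoService proxy = factory.CreateChannel();
                Console.WriteLine(proxy.Echo("hello"));

                factory.Close();
                host.Close();
            }
        }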


  • Silverlight 4 Tools for VS 2010 and WCF RIA Services Released

    - by ScottGu
    The final release of the Silverlight 4 Tools for Visual Studio 2010 and WCF RIA Services is now available for download.  Download and Install If you already have Visual Studio 2010 installed (or the free Visual Web Developer 2010 Express), then you can install both the Silverlight 4 Tooling Support as well as WCF RIA Services support by downloading and running this setup package (note: please make sure to uninstall the preview release of the Silverlight 4 Tools for VS 2010 if you have previously installed that).  The Silverlight 4 Tools for VS 2010 package extends the Silverlight support built into Visual Studio 2010 and enables support for Silverlight 4 applications as well.  It also installs WCF RIA Services application templates and libraries: Today’s release includes the English edition of the Silverlight 4 Tooling – localized versions will be available next month for other Visual Studio languages as well. Silverlight Tooling Support Visual Studio 2010 includes rich tooling support for building Silverlight and WPF applications. It includes a WYSIWYG designer surface that enables you to easily use controls to construct UI – including the ability to take advantage of layout containers, and apply styles and resources: The VS 2010 designer enables you to leverage the rich data binding support within Silverlight and WPF, and easily wire-up bindings on controls.  The Data Sources window within Silverlight projects can be used to reference POCO objects (plain old CLR objects), WCF Services, WCF RIA Services client proxies or SharePoint Lists.  For example, let’s assume we add a “Person” class like below to our project: We could then add it to the Data Source window which will cause it to show up like below in the IDE: We can optionally customize the default UI control types that are associated for each property on the object.  For example, below we’ll default the BirthDate property to be represented by a “DatePicker” control: And then when we drag/drop the Person type from the Data Sources onto the design-surface it will automatically create UI controls that are bound to the properties of our Person class: VS 2010 allows you to optionally customize each UI binding further by selecting a control, and then right-click on any of its properties within the property-grid and pull up the “Apply Bindings” dialog: This will bring up a floating data-binding dialog that enables you to easily configure things like the binding path on the data source object, specify a format convertor, specify string-format settings, specify how validation errors should be handled, etc: In addition to providing WYSIWYG designer support for WPF and Silverlight applications, VS 2010 also provides rich XAML intellisense and code editing support – enabling a rich source editing environment. Silverlight 4 Tool Enhancements Today’s Silverlight 4 Tooling Release for VS 2010 includes a bunch of nice new features.  These include: Support for Silverlight Out of Browser Applications and Elevated Trust Applications You can open up a Silverlight application’s project properties window and click the “Enable Running Application Out of Browser” checkbox to enable you to install an offline, out of browser, version of your Silverlight 4 application.  
You can then customize a number of “out of browser” settings of your application within Visual Studio: Notice above how you can now indicate that you want to run with elevated trust, with hardware graphics acceleration, as well as customize things like the Window style of the application (allowing you to build a nice polished window style for consumer applications). Support for Implicit Styles and “Go to Value Definition” Support: Silverlight 4 now allows you to define “implicit styles” for your applications.  This allows you to style controls by type (for example: have a default look for all buttons) and avoid you having to explicitly reference styles from each control.  In addition to honoring implicit styles on the designer-surface, VS 2010 also now allows you to right click on any control (or on one of it properties) and choose the “Go to Value Definition…” context menu to jump to the XAML where the style is defined, and from there you can easily navigate onward to any referenced resources.  This makes it much easier to figure out questions like “why is my button red?”: Style Intellisense VS 2010 enables you to easily modify styles you already have in XAML, and now you get intellisense for properties and their values within a style based on the TargetType of the specified control.  For example, below we have a style being set for controls of type “Button” (this is indicated by the “TargetType” property).  Notice how intellisense now automatically shows us properties for the Button control (even within the <Setter> element): Great Video - Watch the Silverlight Designer Features in Action You can see all of the above Silverlight 4 Tools for Visual Studio 2010 features (and some more cool ones I haven’t mentioned) demonstrated in action within this 20 minute Silverlight.TV video on Channel 9: WCF RIA Services Today we also shipped the V1 release of WCF RIA Services.  It is included and automatically installed as part of the Silverlight 4 Tools for Visual Studio 2010 setup. WCF RIA Services makes it much easier to build business applications with Silverlight.  It simplifies the traditional n-tier application pattern by bringing together the ASP.NET and Silverlight platforms using the power of WCF for communication.  WCF RIA Services provides a pattern to write application logic that runs on the mid-tier and controls access to data for queries, changes and custom operations. It also provides end-to-end support for common tasks such as data validation, authentication and authorization based on roles by integrating with Silverlight components on the client and ASP.NET on the mid-tier. Put simply – it makes it much easier to query data stored on a server from a client machine, optionally manipulate/modify the data on the client, and then save it back to the server.  It supports a validation architecture that helps ensure that your data is kept secure and business rules are applied consistently on both the client and middle-tiers. WCF RIA Services uses WCF for communication between the client and the server  It supports both an optimized .NET to .NET binary serialization format, as well as a set of open extensions to the ATOM format known as ODATA and an optional JavaScript Object Notation (JSON) format that can be used by any client. You can hear Nikhil and Dinesh talk a little about WCF RIA Services in this 13 minutes Channel 9 video. 
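    One small gap worth filling in text form: the Person class used in the data-binding walkthrough above appears only as a screenshot in the original post. A minimal sketch of such a POCO might look like the following (the property names other than BirthDate are assumptions):

        // Hypothetical sketch of the Person POCO from the Data Sources walkthrough above.
        // Only BirthDate is implied by the post (it is mapped to a DatePicker); the other
        // property names are illustrative assumptions.
        using System;

        public class Person
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public DateTime BirthDate { get; set; }   // shown with a DatePicker in the designer
        }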
Putting it all Together – the Silverlight 4 Training Kit Check out the Silverlight 4 Training Kit to learn more about how to build business applications with Silverlight 4, Visual Studio 2010 and WCF RIA Services. The training kit includes 8 modules, 25 videos, and several hands-on labs that explain Silverlight 4 and WCF RIA Services concepts and walks you through building an end-to-end application with them.    The training kit is available for free and is a great way to get started. Summary I’m really excited about today’s release – as they really complete the Silverlight development story and deliver a great end to end runtime + tooling story for building applications.  All of the above features are available for use both in VS 2010 as well as the free Visual Web Developer 2010 Express Edition – making it really easy to get started building great solutions. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***   In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other application need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that uses the common library must reference all the necessary assemblies that they need, and if we for example do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise it won’t compile. To improve on this, we use a tool from Microsoft Research called ILMerge (Download from here). It can be used to merge several assemblies into one assembly that contains all types. If you haven’t used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this: "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrar2.dll ClassLibrary3.dll This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc..) from ClassLibrary1.dll, using the /attr switch. For more info on ILMerge command line tool, see the above link. This approach works, but requires a little bit too much knowledge for the developers creating builds, therefor I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to setup a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.   Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly. /// <summary> /// Activity for merging a list of assembies into one, using ILMerge /// </summary> public sealed class ILMergeActivity : BaseCodeActivity { /// <summary> /// A list of file paths to the assemblies that should be merged /// </summary> [RequiredArgument] public InArgument<IEnumerable<string>> InputAssemblies { get; set; } /// <summary> /// Full path to the generated assembly /// </summary> [RequiredArgument] public InArgument<string> OutputFile { get; set; } /// <summary> /// Which input assembly that the attibutes for the generated assembly should be copied from. /// Optional. If not specified, the first input assembly will be used /// </summary> public InArgument<string> AttributeFile { get; set; } /// <summary> /// Kind of assembly to generate, dll or exe /// </summary> public InArgument<TargetKindEnum> TargetKind { get; set; } // If your activity returns a value, derive from CodeActivity<TResult> // and return the value from the Execute method. 
protected override void Execute(CodeActivityContext context) { string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " ")); TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context)); ILMerge m = new ILMerge(); m.SetInputAssemblies(InputAssemblies.Get(context).ToArray()); m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe; m.OutputFile = OutputFile.Get(context); m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context)) ? AttributeFile.Get(context) : InputAssemblies.Get(context).First(); m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0,2), RuntimeEnvironment.GetRuntimeDirectory()); m.Merge(); TrackMessage(context, "Generated " + m.OutputFile); } } [Browsable(true)] public enum TargetKindEnum { Dll, Exe } NB: The activity inherits from a BaseCodeActivity class which is an internal helper class which contains some methods and properties useful for moste custom activities. In this case, it uses the TrackeMessage method for writing to the build log. You either need to remove the TrackMessage method calls, or implement this yourself (which is not very hard… ) The custom activity has the following input arguments: InputAssemblies A list with the (full) paths to the assemblies to merge OutputFile The name of the resulting merged assembly AttributeFile Which assembly to use as the template for the attribute of the merged assembly. This argument is optional and if left blank, the first assembly in the input list is used TargetKind Decides what type of assembly to create, can be either a dll or an exe Of course, there are more switches to the ILMerge.exe, and these can be exposed as input arguments as well if you need it. To show how the custom activity can be used, I have attached a build process template (see link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has the following custom process parameters:   The Assemblies To Merge argument is passed into a FindMatchingFiles activity to located all assemblies that are located in the BinariesDirectory folder after the compilation has been performed by Team Build. Here is the complete sequence of activities that performs the merge operation. It is located at the end of the Try, Compile, Test and Associate… sequence: It splits the AssembliesToMerge parameter and appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges two assemblies into a new one:     And the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies:
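    The BaseCodeActivity helper referred to above is not shown in the post. As an illustration only (not the author's actual class), a minimal version might look like the sketch below, assuming the TrackBuildMessage extension method from the Microsoft.TeamFoundation.Build.Workflow.Activities namespace is available for writing to the build log:

        // Hypothetical sketch of a BaseCodeActivity helper; not the author's actual implementation.
        // Assumes references to Microsoft.TeamFoundation.Build.Workflow (for TrackBuildMessage)
        // and Microsoft.TeamFoundation.Build.Client (for BuildMessageImportance).
        using System.Activities;
        using Microsoft.TeamFoundation.Build.Client;
        using Microsoft.TeamFoundation.Build.Workflow.Activities;

        public abstract class BaseCodeActivity : CodeActivity
        {
            // Writes an informational message to the TFS 2010 build log.
            protected void TrackMessage(CodeActivityContext context, string message)
            {
                context.TrackBuildMessage(message, BuildMessageImportance.Normal);
            }
        }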


  • Week in Geek: US Govt E-card Scam Siphons Confidential Data Edition

    - by Asian Angel
    This week we learned how to “back up photos to Flickr, automate repetitive tasks, & normalize MP3 volume”, enable “stereo mix” in Windows 7 to record audio, create custom papercraft toys, read up on three alternatives to Apple’s flaky iOS alarm clock, decorated our desktops & app docks with Google icon packs, and more. Photo by alexschlegel. Random Geek Links It has been a busy week on the security & malware fronts and we have a roundup of the latest news to help keep you updated. Photo by TopTechWriter.US. US govt e-card scam hits confidential data A fake U.S. government Christmas e-card has managed to siphon off gigabytes of sensitive data from a number of law enforcement and military staff who work on cybersecurity matters, many of whom are involved in computer crime investigations. Security tool uncovers multiple bugs in every browser Michal Zalewski reports that he discovered the vulnerability in Internet Explorer a while ago using his cross_fuzz fuzzing tool and reported it to Microsoft in July 2010. Zalewski also used cross_fuzz to discover bugs in other browsers, which he also reported to the relevant organisations. Microsoft to fix Windows holes, but not ones in IE Microsoft said that it will release two security bulletins next week fixing three holes in Windows, but it is still investigating or working on fixing holes in Internet Explorer that have been reportedly exploited in attacks. Microsoft warns of Windows flaw affecting image rendering Microsoft has warned of a Windows vulnerability that could allow an attacker to take control of a computer if the user is logged on with administrative rights. Windows 7 Not Affected by Critical 0-Day in the Windows Graphics Rendering Engine While confirming that details on a Critical zero-day vulnerability have made their way into the wild, Microsoft noted that customers running the latest iteration of Windows client and server platforms are not exposed to any risks. Microsoft warns of Office-related malware Microsoft’s Malware Protection Center issued a warning this week that it has spotted malicious code on the Internet that can take advantage of a flaw in Word and infect computers after a user does nothing more than read an e-mail. *Refers to a flaw that was addressed in the November security patch releases. Make sure you have all of the latest security updates installed. Unpatched hole in ImgBurn disk burning application According to security specialist Secunia, a highly critical vulnerability in ImgBurn, a lightweight disk burning application, can be used to remotely compromise a user’s system. Hole in VLC Media Player Virtual Security Research (VSR) has identified a vulnerability in VLC Media Player. In versions up to and including 1.1.5 of the VLC Media Player. Flash Player sandbox can be bypassed Flash applications run locally can read local files and send them to an online server – something which the sandbox is supposed to prevent. Chinese auction site touts hacked iTunes accounts Tens of thousands of reportedly hacked iTunes accounts have been found on Chinese auction site Taobao, but the company claims it is unable to take action unless there are direct complaints. What happened in the recent Hotmail outage Mike Schackwitz explains the cause of the recent Hotmail outage. DOJ sends order to Twitter for Wikileaks-related account info The U.S. 
Justice Department has obtained a court order directing Twitter to turn over information about the accounts of activists with ties to Wikileaks, including an Icelandic politician, a legendary Dutch hacker, and a U.S. computer programmer. Google gets court to block Microsoft Interior Department e-mail win The U.S. Federal Claims Court has temporarily blocked Microsoft from proceeding with the $49.3 million, five-year DOI contract that it won this past November. Google Apps customers get email lockdown Companies and organisations using Google Apps are now able to restrict the email access of selected users. LibreOffice Is the Default Office Suite for Ubuntu 11.04 Matthias Klose has announced some details regarding the replacement of the old OpenOffice.org 3.2.1 packages with the new LibreOffice 3.3 ones, starting with the upcoming Ubuntu 11.04 (Natty Narwhal) Alpha 2 release. Sysadmin Geek Tips Photo by Filomena Scalise. How to Setup Software RAID for a Simple File Server on Ubuntu Do you need a file server that is cheap and easy to setup, “rock solid” reliable, and has Email Alerting? This tutorial shows you how to use Ubuntu, software RAID, and SaMBa to accomplish just that. How to Control the Order of Startup Programs in Windows While you can specify the applications you want to launch when Windows starts, the ability to control the order in which they start is not available. However, there are a couple of ways you can easily overcome this limitation and control the startup order of applications. Random TinyHacker Links Using Opera Unite to Send Large Files A tutorial on using Opera Unite to easily send huge files from your computer. WorkFlowy is a Useful To-do List Tool A cool to-do list tool that lets you integrate multiple tasks in one single list easily. Playing Flash Videos on iOS Devices Yes, you can play flash videos on jailbroken iPhones. Here’s a tutorial. Clear Safari History and Cookies On iPhone A tutorial on clearing your browser history on iPhone and other iOS devices. Monitor Your Internet Usage Here’s a cool, cross-platform tool to monitor your internet bandwidth. Super User Questions See what the community had to say on these popular questions from Super User this week. Why is my upload speed much less than my download speed? Where should I find drivers for my laptop if it didn’t come with a driver disk? OEM Office 2010 without media – how to reinstall? Is there a point to using theft tracking software like Prey on my laptop, if you have login security? Moving an “all-in-one” PC when turned on/off How-To Geek Weekly Article Recap Get caught up on your HTG reading with our hottest articles from this past week. How to Combine Rescue Disks to Create the Ultimate Windows Repair Disk How To Boot 10 Different Live CDs From 1 USB Flash Drive What is Camera Raw, and Why Would a Professional Prefer it to JPG? Did You Know Facebook Has Built-In Shortcut Keys? The How-To Geek Guide to Audio Editing: The Basics One Year Ago on How-To Geek Enjoy looking through our latest gathering of retro article goodness. Learning Windows 7: Create a Homegroup & Join a New Computer To It How To Disconnect a Machine from a Homegroup Use Remote Desktop To Access Other Computers On a Small Office or Home Network How To Share Files and Printers Between Windows 7 and Vista Allow Users To Run Only Specified Programs in Windows 7 The Geek Note That is all we have for you this week and we hope your first week back at work or school has gone very well now that the holidays are over. Know a great tip? 
Send it in to us at [email protected]. Photo by Pamela Machado. Latest Features How-To Geek ETC HTG Projects: How to Create Your Own Custom Papercraft Toy How to Combine Rescue Disks to Create the Ultimate Windows Repair Disk What is Camera Raw, and Why Would a Professional Prefer it to JPG? The How-To Geek Guide to Audio Editing: The Basics How To Boot 10 Different Live CDs From 1 USB Flash Drive The 20 Best How-To Geek Linux Articles of 2010 Arctic Theme for Windows 7 Gives Your Desktop an Icy Touch Install LibreOffice via PPA and Receive Auto-Updates in Ubuntu Creative Portraits Peek Inside the Guts of Modern Electronics Scenic Winter Lane Wallpaper to Create a Relaxing Mood Access Your Web Apps Directly Using the Context Menu in Chrome The Deep – Awesome Use of Metal Objects as Deep Sea Creatures [Video]


  • Using jQuery and OData to Insert a Database Record

    - by Stephen Walther
    In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData. Introduction to OData The Open Data Protocol (OData) was developed by Microsoft to be an open standard for communicating data across the Internet. Because the protocol is compatible with standards such as REST and JSON, the protocol is particularly well suited for Ajax. OData has undergone several name changes. It was previously referred to as Astoria and ADO.NET Data Services. OData is used by Sharepoint Server 2010, Azure Storage Services, Excel 2010, SQL Server 2008, and project code name “Dallas.” Because OData is being adopted as the public interface of so many important Microsoft technologies, it is a good protocol to learn. You can learn more about OData by visiting the following websites: http://www.odata.org http://msdn.microsoft.com/en-us/data/bb931106.aspx When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table. Create the Database and Data Model The MoviesDB database is a simple database that contains the following Movies table: You need to create a data model to represent the MoviesDB database. In this blog entry, I use the ADO.NET Entity Framework to create my data model. However, WCF Data Services and OData are not tied to any particular OR/M framework such as the ADO.NET Entity Framework. For details on creating the Entity Framework data model for the MoviesDB database, see the previous blog entry. Create a WCF Data Service You create a new WCF Service by selecting the menu option Project, Add New Item and selecting the WCF Data Service item template (see Figure 1). Name the new WCF Data Service MovieService.svc. Figure 1 – Adding a WCF Data Service Listing 1 contains the default code that you get when you create a new WCF Data Service. There are two things that you need to modify. Listing 1 – New WCF Data Service File using System; using System.Collections.Generic; using System.Data.Services; using System.Data.Services.Common; using System.Linq; using System.ServiceModel.Web; using System.Web; namespace WebApplication1 { public class MovieService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } First, you need to replace the comment /* TODO: put your data source class name here */ with a class that represents the data that you want to expose from the service. In our case, we need to replace the comment with a reference to the MoviesDBEntities class generated by the Entity Framework. Next, you need to configure the security for the WCF Data Service. By default, you cannot query or modify the movie data. We need to update the Entity Set Access Rule to enable us to insert a new database record. 
The updated MovieService.svc is contained in Listing 2: Listing 2 – MovieService.svc using System.Data.Services; using System.Data.Services.Common; namespace WebApplication1 { public class MovieService : DataService<MoviesDBEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Movies", EntitySetRights.AllWrite); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } That’s all we have to do. We can now insert a new Movie into the Movies database table by posting a new Movie to the following URL: /MovieService.svc/Movies The request must be a POST request. The Movie must be represented as JSON. Using jQuery with OData The HTML page in Listing 3 illustrates how you can use jQuery to insert a new Movie into the Movies database table using the OData protocol. Listing 3 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>jQuery OData Insert</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { Title: $("#title").val(), Director: $("#director").val() }; // JSONify the data var data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Movies", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result var newMovie = result["d"]; // Show primary key alert("Movie added with primary key " + newMovie.Id); } </script> </body> </html> jQuery does not include a JSON serializer. Therefore, we need to include the JSON2 library to serialize the new Movie that we wish to create. The Movie is serialized by calling the JSON.stringify() method: var data = JSON.stringify(data); You can download the JSON2 library from the following website: http://www.json.org/js.html The jQuery ajax() method is called to insert the new Movie. Notice that both the contentType and dataType are set to use JSON. The jQuery ajax() method is used to perform a POST operation against the URL MovieService.svc/Movies. Because the POST payload contains a JSON representation of a new Movie, a new Movie is added to the database table of Movies. When the POST completes successfully, the insertCallback() method is called. The new Movie is passed to this method. The method simply displays the primary key of the new Movie: Summary The OData protocol (and its enabling technology named WCF Data Services) works very nicely with Ajax. By creating a WCF Data Service, you can quickly expose your database data to an Ajax application by taking advantage of open standards such as REST, JSON, and OData. In the next blog entry, I want to take a closer look at how the OData protocol supports different methods of querying data.
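    A small server-side variation worth noting before that follow-up on querying: Listing 2 opens the Movies entity set for writes only (EntitySetRights.AllWrite). If the same jQuery client should also be able to query Movies over OData, the access rule can be widened; a sketch (EntitySetRights.All combines AllRead and AllWrite):

        // Sketch: widen the access rule so the client can query Movies as well as insert them.
        using System.Data.Services;
        using System.Data.Services.Common;

        namespace WebApplication1
        {
            public class MovieService : DataService<MoviesDBEntities>
            {
                public static void InitializeService(DataServiceConfiguration config)
                {
                    // All = AllRead | AllWrite; use the narrowest rights your scenario needs.
                    config.SetEntitySetAccessRule("Movies", EntitySetRights.All);
                    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
                }
            }
        }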


  • Parallelism in .NET – Part 5, Partitioning of Work

    - by Reed
    When parallelizing any routine, we start by decomposing the problem.  Once the problem is understood, we need to break our work into separate tasks, so each task can be run on a different processing element.  This process is called partitioning. Partitioning our tasks is a challenging feat.  There are opposing forces at work here: too many partitions adds overhead, too few partitions leaves processors idle.  Trying to work the perfect balance between the two extremes is the goal for which we should aim.  Luckily, the Task Parallel Library automatically handles much of this process.  However, there are situations where the default partitioning may not be appropriate, and knowledge of our routines may allow us to guide the framework to making better decisions. First off, I’d like to say that this is a more advanced topic.  It is perfectly acceptable to use the parallel constructs in the framework without considering the partitioning taking place.  The default behavior in the Task Parallel Library is very well-behaved, even for unusual work loads, and should rarely be adjusted.  I have found few situations where the default partitioning behavior in the TPL is not as good or better than my own hand-written partitioning routines, and recommend using the defaults unless there is a strong, measured, and profiled reason to avoid using them.  However, understanding partitioning, and how the TPL partitions your data, helps in understanding the proper usage of the TPL. I indirectly mentioned partitioning while discussing aggregation.  Typically, our systems will have a limited number of Processing Elements (PE), which is the terminology used for hardware capable of processing a stream of instructions.  For example, in a standard Intel i7 system, there are four processor cores, each of which has two potential hardware threads due to Hyperthreading.  This gives us a total of 8 PEs – theoretically, we can have up to eight operations occurring concurrently within our system. In order to fully exploit this power, we need to partition our work into Tasks.  A task is a simple set of instructions that can be run on a PE.  Ideally, we want to have at least one task per PE in the system, since fewer tasks means that some of our processing power will be sitting idle.  A naive implementation would be to just take our data, and partition it with one element in our collection being treated as one task.  When we loop through our collection in parallel, using this approach, we’d just process one item at a time, then reuse that thread to process the next, etc.  There’s a flaw in this approach, however.  It will tend to be slower than necessary, often slower than processing the data serially. The problem is that there is overhead associated with each task.  When we take a simple foreach loop body and implement it using the TPL, we add overhead.  First, we change the body from a simple statement to a delegate, which must be invoked.  In order to invoke the delegate on a separate thread, the delegate gets added to the ThreadPool’s current work queue, and the ThreadPool must pull this off the queue, assign it to a free thread, then execute it.  If our collection had one million elements, the overhead of trying to spawn one million tasks would destroy our performance. The answer, here, is to partition our collection into groups, and have each group of elements treated as a single task.  
By adding a partitioning step, we can break our total work into small enough tasks to keep our processors busy, but large enough tasks to avoid overburdening the ThreadPool.  There are two clear, opposing goals here: Always try to keep each processor working, but also try to keep the individual partitions as large as possible. When using Parallel.For, the partitioning is always handled automatically.  At first, partitioning here seems simple.  A naive implementation would merely split the total element count up by the number of PEs in the system, and assign a chunk of data to each processor.  Many hand-written partitioning schemes work in this exactly manner.  This perfectly balanced, static partitioning scheme works very well if the amount of work is constant for each element.  However, this is rarely the case.  Often, the length of time required to process an element grows as we progress through the collection, especially if we’re doing numerical computations.  In this case, the first PEs will finish early, and sit idle waiting on the last chunks to finish.  Sometimes, work can decrease as we progress, since previous computations may be used to speed up later computations.  In this situation, the first chunks will be working far longer than the last chunks.  In order to balance the workload, many implementations create many small chunks, and reuse threads.  This adds overhead, but does provide better load balancing, which in turn improves performance. The Task Parallel Library handles this more elaborately.  Chunks are determined at runtime, and start small.  They grow slowly over time, getting larger and larger.  This tends to lead to a near optimum load balancing, even in odd cases such as increasing or decreasing workloads.  Parallel.ForEach is a bit more complicated, however. When working with a generic IEnumerable<T>, the number of items required for processing is not known in advance, and must be discovered at runtime.  In addition, since we don’t have direct access to each element, the scheduler must enumerate the collection to process it.  Since IEnumerable<T> is not thread safe, it must lock on elements as it enumerates, create temporary collections for each chunk to process, and schedule this out.  By default, it uses a partitioning method similar to the one described above.  We can see this directly by looking at the Visual Partitioning sample shipped by the Task Parallel Library team, and available as part of the Samples for Parallel Programming.  When we run the sample, with four cores and the default, Load Balancing partitioning scheme, we see this: The colored bands represent each processing core.  You can see that, when we started (at the top), we begin with very small bands of color.  As the routine progresses through the Parallel.ForEach, the chunks get larger and larger (seen by larger and larger stripes). Most of the time, this is fantastic behavior, and most likely will out perform any custom written partitioning.  However, if your routine is not scaling well, it may be due to a failure in the default partitioning to handle your specific case.  With prior knowledge about your work, it may be possible to partition data more meaningfully than the default Partitioner. There is the option to use an overload of Parallel.ForEach which takes a Partitioner<T> instance.  The Partitioner<T> class is an abstract class which allows for both static and dynamic partitioning.  
By overriding Partitioner<T>.SupportsDynamicPartitions, you can specify whether a dynamic approach is available.  If not, your custom Partitioner<T> subclass would override GetPartitions(int), which returns a list of IEnumerator<T> instances.  These are then used by the Parallel class to split work up amongst processors.  When dynamic partitioning is available, GetDynamicPartitions() is used, which returns an IEnumerable<T> for each partition.  If you do decide to implement your own Partitioner<T>, keep in mind the goals and tradeoffs of different partitioning strategies, and design appropriately. The Samples for Parallel Programming project includes a ChunkPartitioner class in the ParallelExtensionsExtras project.  This provides example code for implementing your own, custom allocation strategies, including a static allocator of a given chunk size.  Although implementing your own Partitioner<T> is possible, as I mentioned above, this is rarely required or useful in practice.  The default behavior of the TPL is very good, often better than any hand written partitioning strategy.
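    For readers who want a middle ground between the default dynamic chunking and a full custom Partitioner<T>, the framework's Partitioner.Create overload that takes a range size gives fixed-size index chunks with very little code. A minimal sketch (the chunk size of 10,000 is an arbitrary illustration, not a recommendation):

        // Sketch: static range partitioning with the built-in Partitioner.Create.
        // Each task processes a contiguous block of indices, amortizing per-element overhead.
        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        class RangePartitionSketch
        {
            static void Main()
            {
                double[] data = new double[1000000];

                Parallel.ForEach(
                    Partitioner.Create(0, data.Length, 10000),   // ranges of 10,000 indices
                    range =>
                    {
                        for (int i = range.Item1; i < range.Item2; i++)
                        {
                            data[i] = Math.Sqrt(i);   // stand-in for real per-element work
                        }
                    });

                Console.WriteLine("Processed {0} elements", data.Length);
            }
        }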


  • TFS 2010 Basic Concepts

    - by jehan
    Here, I’m going to discuss some key architectural changes and concepts that have taken place in TFS 2010 when compared to TFS 2008. With TFS 2010, you first run the installation and then configure it by choosing one of the available installation features. This is a bit similar to a SharePoint installation, where you first do the installation and then configure the SharePoint farms.

    1) Installation Features available in TFS 2010: a) Basic: It is the most compact TFS installation possible. It will install and configure Source Control, Work Item tracking and Build Services only (SharePoint and Reporting integration will not be possible). b) Standard Single Server: This is suitable for a single-server deployment of TFS. It will install and configure Windows SharePoint Services for you and will use the default instance of SQL Server. c) Advanced: This is suitable if you want to use remote servers for the SQL Server databases, SharePoint Products and Technologies, and SQL Server Reporting Services. d) Application Tier Only: Use this if you want to configure high availability for Team Foundation Server in a load-balanced environment (NLB), move Team Foundation Server from one server to another, or restore TFS. e) Upgrade: Use this if you want to upgrade from a prior version of TFS. Note: One more important thing to know about TFS 2010 Basic is that it can be installed on client operating systems (Windows 7 and Windows Vista SP3), whereas previous versions of TFS (2005 and 2008) could not be installed on a client OS.

    2) Team Project Collections: Connect to TFS dialog box in TFS 2008: In TFS 2008, the TFS server contains a set of team projects. Each project may or may not be independent of the other projects, every check-in gets an ever-increasing changeset ID irrespective of the team project in which it is checked in, and the same applies to work items, which also get unique work item IDs. The main problem with this approach was that certain things which the application development process required were impossible to do: a) If something has gone wrong in one team project and you now want to restore it back to an earlier state where it was working properly, you have to restore the Team Foundation Server database from the backup you have taken as per your maintenance plans, and because of this the other team projects may lose any work that was not backed up. b) Your company had a merge with some other company and now you have two TFS servers: one TFS server which you are working on, and another TFS server which the other company was working on. After the merge you want to integrate the team projects from the two TFS servers into one, which is almost impossible to achieve in TFS 2008. Though you can manually create the team projects you want to integrate from the other TFS server in one server (in Source Control), you will lose the history of changesets, work items and the rest, which is very important. There were a few more issues of this sort which were difficult to resolve in TFS 2008.

    To resolve issues in the above kinds of scenarios, which were mainly related to TFS maintenance, integration, migration and security, Microsoft has come up with the Team Project Collections concept in TFS 2010. This concept is similar to SharePoint site collections, and if you are familiar with the SharePoint architecture it will help you to understand the TFS 2010 architecture easily. Connect to TFS dialog box in TFS 2010: In the above dialog box, as you can see, there are two team project collections; each collection can contain any number of team projects, and on the right side it shows the two team projects in the team project collection (Default Collection) which I have chosen. Note: You can connect to only one team project collection at a time using an instance of TFS Team Explorer.

    How does it work? To introduce team project collections, the TFS databases have been reorganized. TFS 2008 was composed of 5-7 databases partitioned by subsystem (one each for Version Control, Work Item Tracking, Build, Integration, Project Management...). The new TFS 2010 database architecture is: TFS_Config: the root database, containing centralized TFS configuration data, including the list of all the team projects that exist on the TFS server. TFS_Warehouse: the data warehouse, containing all the reporting data served by this server (farm). TFS_*: one database per team project collection, containing all the operational data of that collection regardless of subsystem. In addition to this, you will have databases for SharePoint and Report Server.

    3) TFS Farms: As TFS 2010 is flexible enough to be configured with multiple application tiers and multiple database tiers, it is more appropriate to call a multi-server installation of TFS a TFS farm. NLB support for TFS application tiers – with TFS 2010 you can configure multiple TFS application tier machines to serve the same set of team project collections. The primary purpose of NLB support is to enable cleaner and more complete high availability than in TFS 2008. Even if an application tier in the farm fails, the farm will automatically continue to work with hardly any indication to end users that there is a problem. SQL data tiers: With 2010 you can configure many SQL Servers. Each database can be placed on any SQL Server, because each team project collection is an independent database; this feature can also be used to load balance databases across SQL Servers. These new capabilities will significantly change the way enterprises manage their TFS installations in the future. With team project collections and TFS farms, you can create a single, arbitrarily large TFS installation, and grow it incrementally by adding ATs and SQL Servers as needed.
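    To make the note above about connecting to a single team project collection concrete, here is a small sketch using the TFS 2010 client object model; the collection URL is a placeholder, and references to Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.VersionControl.Client are assumed to be available:

        // Sketch: connecting to one team project collection via the TFS 2010 client API.
        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.VersionControl.Client;

        class TfsConnectSketch
        {
            static void Main()
            {
                // Placeholder URL; a real installation would use its own server and collection name.
                var collectionUri = new Uri("http://tfsserver:8080/tfs/DefaultCollection");
                using (var collection = new TfsTeamProjectCollection(collectionUri))
                {
                    collection.EnsureAuthenticated();

                    // Services are scoped to this collection only; a second collection
                    // would need its own TfsTeamProjectCollection instance.
                    var versionControl = collection.GetService<VersionControlServer>();
                    foreach (TeamProject project in versionControl.GetAllTeamProjects(false))
                    {
                        Console.WriteLine(project.Name);
                    }
                }
            }
        }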


  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking.  The results were exactly what I would have hoped, that the ConcurrentQueue was faster with multi-threading for most all situations.  In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary, would it have the same performance benefits as the ConcurrentQueue did?  Well, after running some tests and multiple tweaks and tunes, I have good and bad news. But first, let's look at the tests.  Obviously there's many things we can do with a dictionary.  One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache.  So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor.  This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit").  However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss").  So here's the Accessor that will run the tests: 1: internal class Accessor 2: { 3: public int Hits { get; set; } 4: public int Misses { get; set; } 5: public Func<int, string> GetDelegate { get; set; } 6: public Action<int, string> AddDelegate { get; set; } 7: public int Iterations { get; set; } 8: public int MaxRange { get; set; } 9: public int Seed { get; set; } 10:  11: public void Access() 12: { 13: var randomGenerator = new Random(Seed); 14:  15: for (int i=0; i<Iterations; i++) 16: { 17: // give a wide spread so will have some duplicates and some unique 18: var target = randomGenerator.Next(1, MaxRange); 19:  20: // attempt to grab the item from the cache 21: var result = GetDelegate(target); 22:  23: // if the item doesn't exist, add it 24: if(result == null) 25: { 26: AddDelegate(target, target.ToString()); 27: Misses++; 28: } 29: else 30: { 31: Hits++; 32: } 33: } 34: } 35: } Note that so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test: Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object. Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access. ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely. So the approach to each of these is also fairly straight-forward.  Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock: 1: var addDelegate = (key,val) => 2: { 3: lock (_mutex) 4: { 5: _dictionary[key] = val; 6: } 7: }; 8: var getDelegate = (key) => 9: { 10: lock (_mutex) 11: { 12: string val; 13: return _dictionary.TryGetValue(key, out val) ? val : null; 14: } 15: }; Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary. 
Now, for the Dictionary with ReaderWriterLockSlim it's a little more complex: 1: var addDelegate = (key,val) => 2: { 3: _readerWriterLock.EnterWriteLock(); 4: _dictionary[key] = val; 5: _readerWriterLock.ExitWriteLock(); 6: }; 7: var getDelegate = (key) => 8: { 9: string val; 10: _readerWriterLock.EnterReadLock(); 11: if(!_dictionary.TryGetValue(key, out val)) 12: { 13: val = null; 14: } 15: _readerWriterLock.ExitReadLock(); 16: return val; 17: }; And finally, the ConcurrentDictionary, which, since it does all its own concurrency control, is remarkably elegant and simple: 1: var addDelegate = (key,val) => 2: { 3: _concurrentDictionary[key] = val; 4: }; 5: var getDelegate = (key) => 6: { 7: string s; 8: return _concurrentDictionary.TryGetValue(key, out s) ? s : null; 9: }; Then, I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to Access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete.  Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances.  All times are in milliseconds. 1: Dictionary with Mutex Locking 2: --------------------------------------------------- 3: Accessors Mostly Misses Mostly Hits 4: 1 7916 3285 5: 10 8293 3481 6: 100 8799 3532 7: 1000 8815 3584 8:  9:  10: Dictionary with ReaderWriterLockSlim Locking 11: --------------------------------------------------- 12: Accessors Mostly Misses Mostly Hits 13: 1 8445 3624 14: 10 11002 4119 15: 100 11076 3992 16: 1000 14794 4861 17:  18:  19: Concurrent Dictionary 20: --------------------------------------------------- 21: Accessors Mostly Misses Mostly Hits 22: 1 17443 3726 23: 10 14181 1897 24: 100 15141 1994 25: 1000 17209 2128 The first test I did across the board is the Mostly Misses category.  The mostly misses (more adds because data requested was not in the dictionary) shows an interesting trend.  In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution.  But this got me thinking, and a little research seemed to confirm it: maybe the ConcurrentDictionary is more optimized for concurrent "gets" than "adds".  So since the ratio of misses to hits was 2 to 1, I decided to reverse that and see the results. So I tweaked the data so that the number of keys was much smaller than the number of iterations to give me about a 2 to 1 ratio of hits to misses (twice as likely to already find the item in the cache than to need to add it).  And yes, indeed here we see that the ConcurrentDictionary is faster than the standard Dictionary.  I have a strong feeling that as the ratio of hits-to-misses gets higher and higher these numbers get even better as well.  This makes sense since the ConcurrentDictionary is read-optimized. Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement; I think this is largely because on the 10,000,000 hit test it quickly ramped up to the correct capacity and concurrency and thus the impact was limited to the first few milliseconds of the run. So what does this tell us?  Well, as in all things, ConcurrentDictionary is not a panacea.  It won't solve all your woes and it shouldn't be the only Dictionary you ever use.  So when should we use each? Use System.Collections.Generic.Dictionary when: You need a single-threaded Dictionary (no locking needed).
You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed). You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed). And use System.Collections.Concurrent.ConcurrentDictionary when: You need a multi-threaded Dictionary where the reads are far more prevalent than writes. You need to be able to iterate over the collection without locking it even if it's being modified. Both Dictionaries have their strong suits; I have a feeling this is just one where you need to know from design what you hope to use it for and make your decision based on those criteria.
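As a footnote to the recommendations above, here is a small sketch (not from the original post; the numbers are purely illustrative) of the two ConcurrentDictionary features the tests touched on: the constructor that accepts concurrency and capacity hints, and GetOrAdd, which expresses the whole check-the-cache-and-add-on-a-miss pattern in a single call with no external locking:

using System;
using System.Collections.Concurrent;

class ConcurrentCacheSketch
{
    static void Main()
    {
        // Hints so the dictionary does not have to grow and re-partition itself mid-run.
        int concurrencyLevel = Environment.ProcessorCount * 2; // expected number of concurrent writers
        int initialCapacity = 100000;                          // expected number of keys

        var cache = new ConcurrentDictionary<int, string>(concurrencyLevel, initialCapacity);

        // Hit or miss, one call: the value factory runs only on a miss
        // (though it may run more than once under heavy contention).
        string value = cache.GetOrAdd(42, key => key.ToString());
        Console.WriteLine(value); // prints "42"
    }
}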

    Read the article

  • Conversation as User Assistance

    - by ultan o'broin
    Applications User Experience members (Erika Web, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Anne Gentle (left) with Applications User Experience Senior Director Laurie Pattison In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help The Microsoft Development Network Community Center ·The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes reach out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. It is largely a case now of "inventing the job while you're doing it, instead of being hired for it" Anne said. 
There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization! To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model. The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford to ignore the model's possibilities. The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced.
It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need  to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before: public class ShiftRegister74HC595 { protected SPI Spi; public ShiftRegister74HC595(Cpu.Pin latchPin) : this(latchPin, SPI.SPI_module.SPI1) { } public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) { var spiConfig = new SPI.Configuration( SPI_mod: spiModule, ChipSelect_Port: latchPin, ChipSelect_ActiveState: false, ChipSelect_SetupTime: 0, ChipSelect_HoldTime: 0, Clock_IdleState: false, Clock_Edge: true, Clock_RateKHz: 1000 ); Spi = new SPI(spiConfig); } public void Write(byte buffer) { Spi.Write(new[] {buffer}); } } All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image: foreach (var bitmap in matrix.MatrixBitmap) { matrix.OnRow(row, bitmap, true); matrix.OnRow(row, bitmap, false); row++; } Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on. public LedMatrix Initialize() { DisplayThread = new Thread(() => DoDisplay(this)); DisplayThread.Start(); return this; } I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. 
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API and enabling the matrix to be instantiated and initialized in a single line: using (var matrix = new LedMS88SR74HC595().Initialize()) The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it: public void Dispose() { Clear(); DisplayThread.Abort(); } Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model: matrix.Set(someimage); while (button.Read()) { Thread.Sleep(10); } Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button: var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled); using (var matrix = new LedMS88SR74HC595().Initialize()) { // Oh, prototype is so sad! var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 }; DisplayAndWait(sad, matrix, button); // Let's make it smile! var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 }; DisplayAndWait(smile, matrix, button); } And here is a video of the prototype running: The prototype in action I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my reader guess where we’re going with all that? 
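To make the background refresh a little more concrete, here is a rough sketch (a hypothetical method body, not the downloadable code) of what the DoDisplay loop started by Initialize might look like; it simply keeps scanning the rows of the current bitmap, exactly as in the foreach loop shown earlier, until Dispose aborts the thread:

private static void DoDisplay(LedMatrix matrix)
{
    // Keep repainting until Dispose() aborts this thread.
    while (true)
    {
        int row = 0;
        foreach (var bitmap in matrix.MatrixBitmap)
        {
            matrix.OnRow(row, bitmap, true);   // drive the current row
            matrix.OnRow(row, bitmap, false);  // blank it before moving on to the next one
            row++;
        }
    }
}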
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
    As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of the Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC -- a highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise Systems with Chip Multithreading (CMT) technology. Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide supported platforms' resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture. Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following: Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirements. Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk multipathing and failover as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it's fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self healing. CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities. Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance. Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units. Physical-to-virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment. Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system. CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle. Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: Jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O.
Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC). Affordable, Full-Stack Enterprise Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack. SPARC Server Virtualization Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity for computing resources.  For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is naturally built into the CPU for lower overhead and higher performance. Your organizations can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment. Management with Oracle Enterprise Manager Ops Center The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers. It helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure. Ease of Deployment with Configuration Assistant The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. The Configuration Assistant is available as both a graphical user interface (GUI) and terminal-based tool. Oracle Solaris Cluster HA Support The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies can be controlled and managed independently. These are managed as if they were running in a classical Solaris Cluster hardware node. Supported Systems Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise Systems with CMT technology. 
UltraSPARC T2 Plus Systems: Sun SPARC Enterprise T5140 Server, Sun SPARC Enterprise T5240 Server, Sun SPARC Enterprise T5440 Server, Sun Netra T5440 Server, Sun Blade T6340 Server Module, and Sun Netra T6340 Server Module. UltraSPARC T2 Systems: Sun SPARC Enterprise T5120 Server, Sun SPARC Enterprise T5220 Server, Sun Netra T5220 Server, Sun Blade T6320 Server Module, and Sun Netra CP3260 ATCA Blade Server. Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise Systems with CMT technology come with the right to use (RTU) of Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully-integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.

    Read the article

  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
    SOLUTIONS FOCUS: The Benefits of Oracle Exastack. Paul Thompson, Director, Alliances and Solutions Partner Programs, Oracle EMEA Alliances & Channels. Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces costs associated with purchasing, deploying, and supporting IT environments for our customers and partners. In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments. Ready and Optimized: Oracle Partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems. Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance.
Deliver higher value to customers Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible. Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster. Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners. It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit. Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single vendor solution, clear ownership with no finger-pointing and the peace of mind of the Oracle Support Centre underpinning the entire product stack. Next steps Every OPN member with packaged applications must seriously consider taking steps to become Exastack Ready, or Exastack Optimized at the first opportunity. That first step down the track is to talk to an expert on the OPN Portal, at the Oracle Partner Business Center or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) are posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV specific enablement resources. 
"Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority." --More OpenWorld 2011 highlights for Oracle partners and customers Oracle Application Testing Suite 9.3 application testing solution for Web, SOA and Oracle Applications Oracle Application Express Release 4.1 improving the development of database-centric Web 2.0 applications and reports Oracle Unified Directory 11g helping customers manage the critical identity information that drives their business applications Oracle SOA Suite for healthcare integration Oracle Enterprise Pack for Eclipse 11g demonstrating continued commitment to the developer and open source communities Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid Oracle Process Accelerators helping to simplify and accelerate time-to-value for customers' business process management initiatives Oracle's JD Edwards EnterpriseOne on the iPad meeting the increasingly mobile demands of today's workforces Oracle CRM On Demand Release 19 Innovation Pack introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience Oracle's Primavera Portfolio Management 9 for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition helping customers manage their long-term investment in enterprise-wide business applications New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for Financial Institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics Oracle Utilities Network Management System 1.11 offering new modelling and analysis features to improve distribution-grid management for electric utilities Oracle Communications Network Charging and Control 4.4 helping communications service providers (CSPs) offer their customers more flexible charging options Plus many, many more technology announcements, enhancements, momentum news and community updates -- Oracle OpenWorld 2012 A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 November to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available! -- Back to the welcome page

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources: Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started: US Census Bureau Website, http://www.census.gov/geo/www/tiger/ Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/ Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/ In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to use combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies stores locations to embed a map in a report. Preparing the spatial data First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post. Step 1: Using the Map Wizard to Create a Map of France You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. 
Then I opened the Toolbox window and dragged the Map item to the report body which starts the wizard. Here are the steps to perform to create a map of France: On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data. On the Choose a connection to a SQL Server spatial data source page, select New. In the Data Source Properties dialog box, on the General page, add a connecton string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial Click OK and then click Next. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1 Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below. You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (from Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature. Use the + or - button to rexize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area (which is called the viewport). You can eliminate the color scale and distance scale boxes that appear in the map area later. Select the Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map. Step 2: Add a Map Layer to an Existing Map The map data region allows you to add multiple layers. Each layer is associated with a different data set. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add in another layer for the store locations by following these steps: If the Map Layers windows is not visible, click the report body, and then click twice anywhere on the map data region to display it. Click on the New Layer Wizard button in the Map layers window. And then we start over again with the process by choosing a spatial data source. Select SQL Server spatial query, and click Next. Select Add a new dataset with SQL Server spatial data, and click Next. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next. Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly. Select Embed map data in this report, and click Next. 
On the Choose map visualization page, select Basic Marker Map, and click Next. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day. Click Finish and then click Preview to see the rendered report. Et voilà...c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.

    Read the article

  • Soapi.CS : A fully relational fluent .NET Stack Exchange API client library

    - by Sky Sanders
    Soapi.CS for .Net / Silverlight / Windows Phone 7 / Mono as easy as breathing...: var context = new ApiContext(apiKey).Initialize(false); Question thisPost = context.Official .StackApps .Questions.ById(386) .WithComments(true) .First(); Console.WriteLine(thisPost.Title); thisPost .Owner .Questions .PageSize(5) .Sort(PostSort.Votes) .ToList() .ForEach(q=> { Console.WriteLine("\t" + q.Score + "\t" + q.Title); q.Timeline.ToList().ForEach(t=> Console.WriteLine("\t\t" + t.TimelineType + "\t" + t.Owner.DisplayName)); Console.WriteLine(); }); // if you can think it, you can get it. Output Soapi.CS : A fully relational fluent .NET Stack Exchange API client library 21 Soapi.CS : A fully relational fluent .NET Stack Exchange API client library Revision code poet Revision code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Answer code poet Revision code poet Revision code poet 14 SOAPI-WATCH: A realtime service that notifies subscribers via twitter when the API changes in any way. Votes code poet Revision code poet Votes code poet Comment code poet Comment code poet Comment code poet Votes lfoust Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Revision code poet Comment lfoust Votes code poet Revision code poet Votes code poet Votes lfoust Votes code poet Revision code poet Comment Dave DeLong Revision code poet Revision code poet Votes code poet Comment lfoust Comment Dave DeLong Comment lfoust Comment lfoust Comment Dave DeLong Revision code poet 11 SOAPI-EXPLORE: Self-updating single page JavaSript API test harness Votes code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Comment code poet Revision code poet Votes code poet Comment code poet Question code poet Votes code poet 11 Soapi.JS V1.0: fluent JavaScript wrapper for the StackOverflow API Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Answer George Edison Votes code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Answer code poet Comment code poet Revision code poet Comment code poet Comment code poet Comment code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Comment code poet 9 SOAPI-DIFF: Your app broke? Check SOAPI-DIFF to find out what changed in the API Votes code poet Revision code poet Comment Dennis Williamson Answer Dennis Williamson Votes code poet Votes Dennis Williamson Comment code poet Question code poet Votes code poet About A robust, fully relational, easy to use, strongly typed, end-to-end StackOverflow API Client Library. Out of the box, Soapi provides you with a robust client library that abstracts away most all of the messy details of consuming the API and lets you concentrate on implementing your ideas. 
A few features include: A fully relational model of the API data set exposed via a fully 'dot navigable' IEnumerable (LINQ) implementation. Simply tell Soapi what you want and it will get it for you. e.g. "On my first question, from the author of the first comment, get the first page of comments by that person on any post" my.Questions.First().Comments.First().Owner.Comments.ToList(); (yes this is a real expression that returns the data as expressed!) Full coverage of the API, all routes and all parameters with an intuitive syntax. Strongly typed Domain Data Objects for all API data structures. Eager and Lazy Loading of 'stub' objects. Eager\Lazy loading may be disabled. When finer grained control of requests is desired, the core RouteMap objects may be leveraged to request data from any of the API paths using all available parameters as documented on the help pages. A rich Asynchronous implementation. A configurable request cache to reduce unnecessary network traffic and to simplify your usage logic. There is no need to go out of your way to be frugal. You may set a distinct cache duration for any particular route. A configurable request throttle to ensure compliance with the api terms of usage and to simplify your code in that you do not have to worry about and respond to 50X errors. The RequestCache and Throttled Queue are thread-safe, so can make as many requests as you like from as many threads as you like as fast as you like and not worry about abusing the api or having to write reams of management/compensation code. Configurable retry threshold that will, by default, make up to 3 attempts to retrieve a request before failing. Every request made by Soapi is properly formed and directed so most any http error will be the result of a timeout or other network infrastructure. A retry buffer provides a level of fault tolerance that you can rely on. An almost identical javascript library, Soapi.JS, and it's full figured big brother, Soapi.JS2, that will enable you to leverage your server cycles and bandwidth for only those tasks that require it and offload things like status updates to the client's browser. License Licensed GPL Version 2 license. Why is Soapi.CS GPL? Can I get an LGPL license for Soapi.CS? (hint: probably) Platforms .NET 3.5 .NET 4.0 Silverlight 3 Silverlight 4 Windows Phone 7 Mono Download Source code lives @ http://soapics.codeplex.com. Binary releases are forthcoming. codeplex is acting up again. get the source and binaries @ http://bitbucket.org/bitpusher/soapi.cs/downloads The source is C# 3.5. and includes projects and solutions for the following IDEs Visual Studio 2008 Visual Studio 2010 ModoDevelop 2.4 Documentation Full documentation is available at http://soapi.info/help/cs/index.aspx Sample Code / Usage Examples Sample code and usage examples will be added as answers to this question. Full API Coverage all API routes are covered Full Parameter Parity If the API exposes it, Soapi giftwraps it for you. Building a simple app with Soapi.CS - a simple app that gathers all traces of a user in the whole stackiverse. Fluent Configuration - Setting up a Soapi.ApiContext could not be easier Bulk Data Import - A tiny app that quickly loads a SQLite data file with all users in the stackiverse. Paged Results - Soapi.CS transparently handles multi-page operations. Asynchronous Requests - Soapi.CS provides a rich asynchronous model that is especially useful when writing api apps in Silverlight or Windows Phone 7. 
Caching and Throttling - how and why Apps that use Soapi.CS Soapi.FindUser - .net utility for locating a user anywhere in the stackiverse Soapi.Explore - The entire API at your command Soapi.LastSeen - List users by last access time Add your app/site here - I know you are out there ;-) if you are not comfortable editing this post, simply add a comment and I will add it. The CS/SL/WP7/MONO libraries all compile the same code and with the exception of environmental considerations of Silverlight, the code samples are valid for all libraries. You may also find guidance in the test suites. More information on the SOAPI eco-system. Contact This library is currently the effort of me, Sky Sanders (code poet) and can be reached at gmail - sky.sanders Any who are interested in improving this library are welcome. Support Soapi You can help support this project by voting for Soapi's Open Source Ad post For more information about the origins of Soapi.CS and the rest of the Soapi eco-system see What is Soapi and why should I care?

    Read the article

  • Find More Streaming TV Online with Clicker.tv

    - by DigitalGeekery
    Looking for a way to access more of your favorite TV Shows and other online entertainment? Today we’ll take a look at Clicker.tv which offers an awesome way to find tons of TV programs and movies. Clicker.tv Clicker.tv is an HTML5 web application that indexes both free and premium content from sources like Hulu, Netflix, Amazon, iTunes, and more. Some movies or episodes, such as those from Netflix and Amazon.com’s Video on Demand, will require viewers to have a membership, or pay a fee to access content. There is also a Clicker.tv app for Boxee.   Navigation Navigating in Clicker.tv is rather easy with your keyboard. Directional Keys: navigate up, down, left, and right. Enter: make a selection Backspace: return to previous screen Escape: return to the Clicker.tv home screen. Note: You can also navigate through Clicker.tv with your PC remote. Recommended Browsers Firefox 3.6 + Safari 4.0 + Internet Explorer 8 + Google Chrome Note: You’ll need the latest version of Flash installed to play the majority of content. Earlier versions of the above browsers may work, but for full keyboard functionality, stick with the recommendations. Using Clicker.tv The first time you go to Clicker.tv, (link below) you’ll be met with a welcome screen and some helpful hints. Click Enter when finished.   The Home screen feature Headliners, Trending Shows, and Trending Episodes. You can scroll through the different options and category links along the left side.   The Search link pulls up an onscreen keyboard so you can enter search terms with a remote as well as a keyboard. Type in your search terms and matching items are displayed on the screen.   You can also browse by a wide variety of categories. Select TV to browse only available TV programs. Or, browse only Movies in the movie category. There are also links for Web content and Music.   Creating an Account You can access all Clicker.tv content without an account, but a Clicker account allows users to create playlists and subscribe to shows and have them automatically added to their playlist. You’ll need to go to Clicker.com and create an account. You’ll find the link at the upper right of the page. Enter a username, password and email address. There also an option to link with Facebook, or you can simply Skip this step.   Go to Clicker.tv and sign in. You can manually type in your credentials or use the onscreen keyboard with your remote.   Settings If you’d prefer not to display content from premium sites or Netflix, you can remove them through the Settings. Toggle Amazon, iTunes and Netflix on or off.   Watching Episodes To watch an episode, select the image to begin playing from the default source, or select one of the other options. You can see in the example below that you can choose to watch the episode from Fox, Hulu, or Amazon Video on Demand.   Your episode will then launch and begin playing from your chosen source. If you choose a premium content source such as iTunes or Amazon’s VOD, you’ll be taken to the Amazon’s website or iTunes and prompted to purchase the content.   Playlists Once you’ve created an account and signed in, you can begin adding Shows to your playlist. Choose a series and select Add to Playlist.   You’ll see in the example below that Family Guy has been Added and the number 142 is shown next to the playlist icon to indicate that 142 episodes has been added to your playlist. Underneath the listings for each episode in your playlist you can mark as Watched, or Remove individual episodes.   
You can also view the playlist or make any changes from the Clicker.com website. Click on "Playlist" on the top right of the Clicker.com site to access your playlists. You can select individual episodes from your playlists, remove them, or mark them as watched or unwatched. Clicker.TV and Boxee Boxee offers a Clicker.TV app that features a limited amount of the Clicker.TV content. You'll find Clicker.TV located in the Boxee Apps Library. Select the Clicker App and then choose Start. From the Clicker App interface you can search or browse for available content. Select an episode you'd like to view… Then select play in the pop up window. You can also add it to your Boxee queue, share it, or add a shortcut, just as you can from other Boxee apps. When you click play your episode will launch and begin playing in Boxee. Conclusion Clicker.TV is currently still in Beta and has some limitations. Typical remotes won't work completely in all external websites. So, you'll still need a keyboard to be able to perform some operations such as switching to full screen mode. The Boxee app offers a more fully remote friendly environment, but unfortunately lacks a good portion of the Clicker.tv content. As with many content sites, availability of certain programming may be limited by your geographic location. Want to add Clicker.TV functionality to Windows Media Center? You can do so through the Boxee Integration for Windows 7 Media Center plug-in. Clicker.tv Clicker.com

    Read the article

  • Thursday Community Keynote: "By the Community, For the Community"

    - by Janice J. Heiss
    Sharat Chander, JavaOne Community Chairperson, began Thursday's Community Keynote. As part of the morning’s theme of "By the Community, For the Community," Chander noted that 60% of the material at the 2012 JavaOne conference was presented by Java Community members. "So next year, when the call for papers starts, put in your submissions," he urged. From there, Gary Frost, Principal Member of Technical Staff, AMD, expanded upon Sunday's Strategy Keynote exploration of Project Sumatra, an OpenJDK project targeted at bringing Java to heterogeneous computing platforms (which combine the CPU and the parallel processor of the GPU into a single piece of silicon). Sumatra entails enhancing the JVM to make maximum use of these advanced platforms. Within this development space, AMD created the Aparapi API, which converts Java bytecode into OpenCL for execution on such GPU devices. The Aparapi API was open sourced in September 2011. (A short code sketch at the end of this article illustrates the Aparapi programming model.) Whether it was zooming in on a Mandelbrot set, "the game of life," or a swarm of 10,000 Dukes in a space-bound gravitational dance, Frost's demos, using an Aparapi/OpenCL implementation, produced stunningly faster display results. He indicated that the Java 9 timeframe is where they see Project Sumatra coming to ultimate fruition, employing the Lambdas of Java 8. Returning to the theme of the keynote, Donald Smith, Director, Java Product Management, Oracle, explored a mind map graphic demonstrating the importance of Community in terms of fostering innovation. "It's the sharing and mixing of culture, the diversity, and the rapid prototyping," he said. Within this topic, Smith brought up a panel of representatives from Cloudera, Eclipse, Eucalyptus, Perrone Robotics, and Twitter--ideal manifestations of community and innovation in the world of Java. Marten Mickos, CEO, Eucalyptus Systems, explored his company's open source cloud software platform, written in Java, and used by gaming companies, technology companies, media companies, and more. Chris Aniszczyk, Operations Engineering, Twitter, noted the importance of the JVM in terms of their multiple-language development environment. Mike Olson, CEO, Cloudera, described his company's Apache Hadoop-based software, support, and training. Mike Milinkovich, Executive Director, Eclipse Foundation, noted that they have about 270 tools projects at Eclipse, with 267 of them written in Java. Milinkovich added that Eclipse will even be going into space in 2013, as part of the control software on various experiments aboard the International Space Station. Lastly, Paul Perrone, CEO, Perrone Robotics, detailed his company's robotics and automation software platform built 100% on Java, including Java SE and Java ME--"on rat, to cat, to elephant-sized systems." Milinkovich noted that communities are by nature so good at innovation because of their very openness--"The more open you make your innovation process, the more ideas are challenged, and the more developers are focused on justifying their choices all the way through the process." From there, Georges Saab, VP Development Java SE OpenJDK, continued the topic of innovation and helping the Java Community to "Make the Future Java." Martijn Verburg, representing the London Java Community (winner of a Duke's Choice Award 2012 for their activity in OpenJDK and JCP), soon joined Saab onstage. Verburg detailed the LJC's "Adopt a JSR" program--"to get day-to-day developers more involved in the innovation that's happening around them."
From its London launching pad, the innovative program has spread to Brazil, Morocco, Latvia, India, and more.Other active participants in the program joined Verburg onstage--Ben Evans, London Java Community; James Gough, Stackthread; Bruno Souza, SOUJava; Richard Warburton, jClarity; and Cecelia Borg, Oracle--OpenJDK Onboarding. Together, the group explored the goals and tasks inherent in the Adopt a JSR program--from organizing hack days (testing prototype implementations), to managing mailing lists and forums, to triaging issues, to evangelism—all with the goal of fostering greater community/developer involvement, but equally importantly, building better open standards. “Come join us, and make your ecosystem better!" urged Verburg.Paul Perrone returned to profile the latest in his company's robotics work around Java--including the AARDBOTS family of smaller robotic vehicles, running the Perrone MAX platform on top of the Java JVM. Perrone took his "Rumbles" four-wheeled robot out for a spin onstage--a roaming, ARM-based security-bot vehicle, complete with IR, ultrasonic, and "cliff" sensors (the latter, for the raised stage at JavaOne). As an ultimate window into the future of robotics, Perrone displayed a "head-set" controller--a sensor directed at the forehead to monitor brainwaves, for the someday-implementation of brain-to-robot control.Then, just when it seemed this might be the end of the day's futuristic offerings, a mystery voice from offstage pronounced "I've got some toys"--proving to be guest-visitor James Gosling, there to explore his cutting-edge work with Liquid Robotics. While most think of robots as something with wheels or arms or lasers, Gosling explained, the Liquid Robotics vehicle is an entirely new and innovative ocean-going 'bot. Looking like a floating surfboard, with an attached set of underwater wings, the autonomous devices roam the oceans using only the energy of ocean waves to propel them, and a single actuated rudder to steer. "We have to accomplish all guidance just by wiggling the rudder," Gosling said. The devices offer applications from self-installing weather buoy, to pollution monitoring station, to marine mammal monitoring device, to climate change data gathering, to even ocean life genomic sampling. The early versions of the vehicle used C code on very tiny industrial micro controllers, where they had to "count the bytes one at a time."  But the latest generation vehicles, which just hit the water a week or so ago, employ an ARM processor running Linux and the ARM version of JDK 7. Gosling explained that vehicle communication from remote locations is achieved via the Iridium satellite network. But because of the costs of this communication path, the data must be sent in very small bursts--using SBD short burst data. "It costs $1/kb, so that rules everything in the software design,” said Gosling. “If you were trying to stream a Netflix video over this, it would cost a million dollars a movie. …We don't have a 'big data' problem," he quipped. There are currently about 150 Liquid Robotics vehicles out traversing the oceans. Gosling demonstrated real time satellite tracking of several vehicles currently at sea, noting that Java is actually particularly good at AI applications--due to the language having garbage collection, which facilitates complex data structures. 
To close out his time onstage, Gosling of course participated in the ceremonial Java tee-shirt toss out to the audience… In parting, Chander passed the JavaOne Community Chairperson baton to Stephen Chin, Java Technology Evangelist, Oracle. Onstage in full motorcycle gear, Chin noted that he'll soon be touring Europe by motorcycle, meeting Java Community Members and streaming live via UStream--the ultimate manifestation of community and technology! He also reminded attendees of the upcoming JavaOne Latin America 2012, São Paulo, Brazil (December 4-6, 2012), and stated that the CFP (call for papers) for the conference had been extended for one more week. "Remember, December is summer in Brazil!" Chin said.
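For readers who want to see what the Aparapi programming model mentioned above looks like in practice, here is a minimal, illustrative sketch (assuming the open-sourced com.amd.aparapi library is on the classpath; the class name, array contents, and sizes are made up for the example). The run() body is translated from Java bytecode to OpenCL and dispatched to a GPU when one is available; otherwise Aparapi falls back to a Java thread pool on the CPU:

import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

public class VectorAddDemo {
    public static void main(String[] args) {
        final int size = 1_000_000;
        final float[] a = new float[size];
        final float[] b = new float[size];
        final float[] sum = new float[size];
        for (int i = 0; i < size; i++) { a[i] = i; b[i] = size - i; }

        // Each work item adds one pair of elements; getGlobalId() is the item's index.
        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int gid = getGlobalId();
                sum[gid] = a[gid] + b[gid];
            }
        };
        kernel.execute(Range.create(size));
        kernel.dispose();
    }
}

The same pattern scales to the Mandelbrot and n-body style demos Frost showed; only the per-item work inside run() gets heavier.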

    Read the article

  • Interesting articles and blogs on SPARC T4

    - by mv
    Interesting articles and blogs on SPARC T4 processor: I have consolidated all the interesting information I could get on the SPARC T4 processor and its hardware cryptographic capabilities. Hope it's useful. 1. Advantages of SPARC T4 processor: The most important points in this T4 announcement are: "The SPARC T4 processor was designed from the ground up for high speed security and has a cryptographic stream processing unit (SPU) integrated directly into each processor core. These accelerators support 16 industry standard security ciphers and enable high speed encryption at rates 3 to 5 times that of competing processors. By integrating encryption capabilities directly inside the instruction pipeline, the SPARC T4 processor eliminates the performance and cost barriers typically associated with secure computing and makes it possible to deliver high security levels without impacting the user experience." The data sheet has more details on these: "New on-chip Encryption Instruction Accelerators with direct non-privileged support for 16 industry-standard cryptographic algorithms plus random number generation in each of the eight cores: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, Kasumi, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512" I ran the "isainfo -v" command on a Solaris 11 SPARC T4-1 system. It shows the new instructions as expected: $ isainfo -v 64-bit sparcv9 applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc 32-bit sparc applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32 2. Dan Anderson's blog has some interesting points about how these can be used: "New T4 crypto instructions include: aes_kexpand0, aes_kexpand1, aes_kexpand2, aes_eround01, aes_eround23, aes_eround01_l, aes_eround_23_l, aes_dround01, aes_dround23, aes_dround01_l, aes_dround_23_l. Having SPARC T4 hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris on a SPARC T4. It is used internally in the kernel through kernel crypto modules. It is available in user space through the PKCS#11 library." 3. Dan's blog on Where's the Crypto Libraries? Although this was written in 2009, it is still very useful: "Here's a brief tour of the major crypto libraries shown in the digraph: The libpkcs11 library contains the PKCS#11 API (C_\*() functions, such as C_Initialize()). That in turn calls library pkcs11_softtoken or pkcs11_kernel, for userland or kernel crypto providers. The latter is used mostly for hardware-assisted cryptography (such as n2cp for Niagara2 SPARC processors), as that is performed more efficiently in kernel space with the "kCF" module (Kernel Crypto Framework). Additionally, for Solaris 10, strong crypto algorithms were split off in separate libraries, pkcs11_softtoken_extra. libcryptoutil contains low-level utility functions to help implement cryptography. libsoftcrypto (OpenSolaris and Solaris Nevada only) implements several symmetric-key crypto algorithms in software, such as AES, RC4, and DES3, and the bignum library (used for RSA). libmd implements MD5, SHA, and SHA2 message digest algorithms" 4. Difference between T3 and T4: The diagram in this blog is good and self-explanatory.
Jeff's blog also highlights the differences: "The T4 servers have improved crypto acceleration, described at https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine. It is "just built in" so administrators no longer have to assign crypto accelerator units to domains - it "just happens". Every physical or virtual CPU on a SPARC-T4 has full access to hardware based crypto acceleration at all times. .... For completeness sake, it's worth noting that the T4 adds more crypto algorithms, and accelerates Camelia, CRC32c, and more SHA-x." 5. About performance counters: In this blog, performance counters are explained: "Note that unlike T3 and before, T4 crypto doesn't require kernel modules like ncp or n2cp, there is no visibility of crypto hardware with kstats or cryptoadm. T4 does provide hardware counters for crypto operations. You can see these using cpustat: cpustat -c pic0=Instr_FGU_crypto 5 You can check the general crypto support of the hardware and OS with the command "isainfo -v". Since T4 crypto's implementation now allows direct userland access, there are no "crypto units" visible to cryptoadm." For more details, refer to Martin's blog as well. 6. How to turn off SPARC T4 or Intel AES-NI crypto acceleration: I found this interesting blog from Darren about how to turn off SPARC T4 or Intel AES-NI crypto acceleration. "One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine. The alternate to this is having the application coded to call getisax(2) system call and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and libsoftcrypto.so) The Solaris linker/loader allows control of a lot of its functionality via environment variables, we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present. This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. For SPARC T4: export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul" .. For Intel systems with AES-NI support: export LD_HWCAP="-aes"" Note that LD_HWCAP is explained in http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html "LD_HWCAP, LD_HWCAP_32, and LD_HWCAP_64 - Identifies an alternative hardware capabilities value... A “-” prefix results in the capabilities that follow being removed from the alternative capabilities." 7. Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing: This whitepaper explains more details. It has DTrace scripts which may come in handy: "To ensure the hardware-assisted cryptographic acceleration is configured to use and working with the security scenarios, it is recommended to use the following Solaris DTrace script.
#!/usr/sbin/dtrace -s pid$1:libsoftcrypto:yf*:entry, pid$1:libsoftcrypto:rsa*:entry, pid$1:libmd:yf*:entry { @ops[probefunc] = count(); } tick-1sec { printa(@ops); trunc(@ops); }" Note that I have slightly modified the D script to have RSA ("libsoftcrypto:rsa*:entry") as well, per recommendations from Chi-Chang Lin. 8. References http://www.oracle.com/us/corporate/features/sparc-t4-announcement-494846.html http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-1-ds-487858.pdf https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine https://blogs.oracle.com/DanX/entry/where_s_the_crypto_libraries https://blogs.oracle.com/darren/entry/howto_turn_off_sparc_t4 http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html https://blogs.oracle.com/hardware/entry/unleash_the_power_of_cryptography https://blogs.oracle.com/cmt/entry/t4_crypto_cheat_sheet https://blogs.oracle.com/martinm/entry/t4_performance_counters_explained https://blogs.oracle.com/jsavit/entry/no_mau_required_on_a http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-business-wp-524472.pdf
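A related note for Java developers, not taken from the articles above: on Solaris 11 the JDK is typically configured with the SunPKCS11 provider (backed by the Solaris Cryptographic Framework) ahead of the pure-Java providers, so ordinary JCE code can pick up the T4 acceleration without any code changes. A minimal sketch, assuming a Solaris 11 / SPARC T4 JDK with its default provider order; which provider is actually reported depends on the java.security configuration on your system:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class AesOnT4Demo {
    public static void main(String[] args) throws Exception {
        // Standard JCE calls; no T4-specific code is needed here.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("hello, T4".getBytes("UTF-8"));

        // Which provider ran the operation depends on java.security; on a
        // Solaris 11 / T4 JDK this is typically the SunPKCS11 provider.
        System.out.println("Provider: " + cipher.getProvider().getName());
        System.out.println("Ciphertext length: " + ciphertext.length);
    }
}

To confirm the hardware path is being exercised, the cpustat counter and DTrace script quoted above can be run while this program loops over doFinal().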

    Read the article

  • Cloud MBaaS : The Next Big Thing in Enterprise Mobility

    - by shiju
    In this blog post, I will take a look at Cloud Mobile Backend as a Service (MBaaS) and how we can leverage Cloud based Mobile Backend as a Service for building enterprise mobile apps. Today, mobile apps are incredibly significant in both the consumer and enterprise space, and the demand for mobile apps is increasing dramatically in day-to-day business. An enterprise can’t survive in business without a proper mobility strategy. A better mobility strategy and faster delivery of your mobile apps will give you extra mileage for your business and IT strategy. So organizations and mobile developers are looking at different strategies for meeting this demand and adopting different development approaches for their mobile apps. Some developers are adopting hybrid mobile app development platforms for delivering their products on multiple platforms with fast time-to-market. Others are adopting a Mobile enterprise application platform (MEAP) such as Kony for their enterprise mobile apps, for fast time-to-market and better business integration. The Challenges of Enterprise Mobility The real challenge of enterprise mobile apps is not creating the front-end environment or developing the front-end for multiple platforms. The most important thing in enterprise mobile apps is exposing your enterprise data to mobile devices, and the real pain is that your business data might reside in many different systems, including legacy systems, ERP systems, and so on, and these systems will be deployed with many security restrictions. Exposing your data from on-premises servers is not an easy thing for most business organizations. Many organizations are spending too much time on their front-end development strategy, but they really lack a strategy for exposing business data from the back-end to mobile apps. So building a REST services layer and mobile back-end services on top of legacy systems and existing middleware systems is the key part of most enterprise mobile apps, where multiple mobile platforms can easily consume these REST services and other mobile back-end services for building mobile apps. For some mobile apps, we can’t predict the user base, especially for products where the customer base can grow at any time. And for today’s mobile apps, faster time-to-market is critical, so spending too much time on a mobile app’s scalability will not be worthwhile. The real power of Cloud is agility and on-demand scalability, where we can scale up and scale down our applications very easily. It would be great if we could bring the power of Cloud to mobile apps. Using Cloud for mobile apps is a natural fit: we can use Cloud as the storage for mobile apps and as the hosting mechanism for mobile back-end services, and enjoy the full power of Cloud with a greater level of on-demand scalability and operational agility. So Cloud based Mobile Backend as a Service is a great choice for building enterprise mobile apps, where enterprises can enjoy massive scalability for their mobile apps, provided by public cloud vendors such as Microsoft Windows Azure. Mobile Backend as a Service (MBaaS) We have discussed the key challenges of enterprise mobile apps and how we can leverage Cloud for hosting mobile backend services. MBaaS is a set of cloud-based, server-side mobile services for multiple mobile platforms and the HTML5 platform, which can be used as a backend for your mobile apps with the scalability power of Cloud.
The information below provides the key features of a typical MBaaS platform: Cloud based storage for your application data. Automatic REST API services on the application data, for CRUD operations. Native push notification services with massive scalability power. User management services for authenticating users. User authentication via social accounts such as Facebook, Google, Microsoft, and Twitter. Scheduler services for periodically sending data to mobile devices. Native SDKs for multiple mobile platforms such as Windows Phone and Windows Store, Android, Apple iOS, and HTML5, for easily accessing the mobile services from mobile apps, with better security. Typically, an MBaaS platform will provide native SDKs for multiple mobile platforms so that we can easily consume the server-side mobile services. MBaaS-based REST APIs can be used for integrating with enterprise backend systems. (A small code sketch at the end of this post illustrates consuming such a REST endpoint.) We can use the same mobile services for multiple platforms so that we can reuse the application logic across mobile platforms. Public cloud vendors are building their mobile services on top of their PaaS offerings. Windows Azure Mobile Services is a great platform for an MBaaS offering that leverages the Windows Azure Cloud platform’s PaaS capabilities. Hybrid mobile development platform Titanium provides its own MBaaS services. LoopBack is a new MBaaS service provided by Node.js consulting firm StrongLoop, which can be hosted on multiple cloud platforms and also on on-premises servers. The Challenges of MBaaS Solutions If you are building your mobile apps with new data storage, it will be very easy, since there are no integration challenges you have to face. But in most use cases, you have to work with application data stored on on-premises servers, which might be behind VPNs and firewalls. So exposing this data to your MBaaS solution with proper security would be a big challenge. The capability of your MBaaS vendor is very important, as you have to interact with your legacy systems for many enterprise mobile apps. So you should be very careful about choosing an MBaaS vendor. At the same time, you should have a proper strategy for mobilizing the application data stored in on-premises legacy systems, where your solution architecture and strategy are more important than platforms and tools. Windows Azure Mobile Services Windows Azure Mobile Services is an MBaaS offering from the Windows Azure cloud platform. IMHO, Microsoft Windows Azure is the best PaaS platform in the Cloud space. Windows Azure Mobile Services extends the PaaS capabilities of Windows Azure to mobile devices and can be used as a cloud backend for your mobile apps, providing global availability and reach for your mobile apps. Windows Azure Mobile Services provides storage services, user management with social network integration, push notification services, and scheduler services, and provides native SDKs for all major mobile platforms and HTML5. In Windows Azure Mobile Services, you can write server-side scripts in Node.js, where you can enjoy the full power of Node.js, including the use of NPM modules for your server-side scripts. In the previous section, we discussed some challenges of MBaaS solutions. You can leverage the Windows Azure Cloud platform for solving many challenges regarding enterprise mobility, and the entire Windows Azure platform can play a key role as the backend for your mobile apps.
With Windows Azure, you can easily connect to your on-premises systems, which is a key thing for mobile backend solutions. Another key point is that Windows Azure provides better integration with services like Active Directory, which makes Windows Azure the de facto platform for enterprise mobility for enterprises who have been leveraging the Microsoft ecosystem for their application and IT infrastructure. Windows Azure Mobile Services is moving toward its next evolution, where you can expect some exciting features in the near future. One area where Windows Azure Mobile Services definitely needs improvement is the default storage mechanism, which currently depends on SQL Server. IMHO, developers should be able to choose from multiple default storage options when creating a new mobile service instance. Let’s say there should be different storage providers, such as a SQL Server storage provider and a Table storage provider, so developers could choose their preferred storage provider when creating a new mobile services project. I have used Windows Azure and Windows Azure Mobile Services as the backend for production mobile apps, where they performed very well. MBaaS Over MEAP Recently, many larger enterprises have adopted a Mobile enterprise application platform (MEAP) for their mobile apps. I haven’t worked on any production MEAP solution, but I have heard that developers are really struggling with MEAP in different ways. The learning curve for a proprietary MEAP platform is very high. I am completely against using a large proprietary ecosystem for mobile apps. For enterprise mobile apps, I highly recommend using native iOS/Android/Windows Phone or HTML5 for the front-end, with a cloud hosted MBaaS solution as the middleware. An MBaaS service can be consumed from multiple mobile apps, with REST APIs used for integrating with enterprise backend systems. Enterprise mobility should start with exposing REST APIs on the enterprise backend systems, and these REST APIs can be hosted on the Cloud, where we can enjoy the power of Cloud for our services. If you have REST APIs for your enterprise data, then you can easily build mobile frontends for multiple platforms.   You can follow me on Twitter @shijucv
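To make the "automatic REST API for CRUD operations" idea above a little more concrete, here is a rough, illustrative sketch in plain Java of inserting a record into an MBaaS-hosted table over HTTP. The endpoint URL, table name, header name, and application key are hypothetical placeholders rather than the actual Windows Azure Mobile Services or LoopBack API; in a real app you would normally use the platform's native SDK instead of raw HTTP:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class MbaasInsertDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical MBaaS endpoint: a "tasks" table exposed as a REST resource.
        URL url = new URL("https://myapp.example-mbaas.net/tables/tasks");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        // Placeholder auth header; real services define their own key or token scheme.
        conn.setRequestProperty("X-Application-Key", "YOUR-APP-KEY");
        conn.setDoOutput(true);

        String json = "{\"text\":\"Review Q3 forecast\",\"complete\":false}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes("UTF-8"));
        }

        // A successful insert typically returns 201 Created plus the stored record.
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}

The same endpoint would serve GET, PATCH, and DELETE for the rest of the CRUD life cycle, which is what lets multiple mobile front-ends share one backend.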

    Read the article

  • Who IS Brian Solis?

    - by Michael Snow
    Q: Brian, Welcome to the WebCenter Blog. Can you tell our readers your current role and what career path brought you here? A: I’m proudly serving as a principal analyst at Altimeter Group, a research-based advisory firm in Silicon Valley. My career path, well, let’s just say it’s a long and winding road. As a kid, I was fascinated with technology. I learned programming at an early age and found myself naturally drawn to all things tech. I started my career as a database programmer at a technology marketing agency in Southern California. When I saw the chance to work with tech companies and help them better market their capabilities to businesses and consumers, I switched focus from programming to marketing and advertising. As a technologist, my approach to marketing was different. I didn’t believe in hype, fluff or buzz words. I believed in translating features into benefits and specifications and capabilities into solutions for real world problems and opportunities. In the mid-90s, I experimented with direct to consumer/customer engagement in dedicated technology forums and boards. I quickly realized that the entire approach to do so would need to change. Therefore, I learned and developed new methods for a more social and informed way of engaging people in ways that helped them, marketed the company, and also tied to tangible benefits for the company. This work would lead me to start an agency in 1999 dedicated to interactive marketing. As I continued to experiment with interactive platforms, I developed interesting methods for converting one-to-many forms of media into one-to-one-to-many programs. I ran that company until joining Altimeter Group. Along the way, in the early 2000s, I realized that everything was changing and that there were others like me finding success in what would become a more social form of media. I dedicated a significant amount of my time to sharing everything that I learned in the form of articles, blogs, and eventually books. My mission became to share my experience with anyone who’d listen. It would later become much bigger than marketing; this would lead to a decade of work, which still continues, in business transformation. Then and now, I find myself always assuming the role of a student. Q: As an industry analyst & technology change evangelist, what are you primarily focused on these days? A: As a digital analyst, I study how disruptive technology impacts business. As an aspiring social scientist, I study how technology affects human behavior. I explore both horizons professionally and personally to better understand the future of popular culture and also the opportunities that exist for organizations to improve relationships and experiences with customers and the people that are important to them. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? A: The line between work and life isn’t blurred; it’s been overtly crossed and erased. We live in an always-on society. The digital lifestyle keeps us connected to one another; it keeps us connected all the time. Whether you’re sending or checking email, trying to catch up, or simply trying to get ahead, people are spending the equivalent of an extra day at work in the time they spend out of work…working. That’s absurd. It’s a matter of survival. It’s also a matter of unintended, subconscious self-causation. We brought this on ourselves and continue to do so. Think about your day.
You’re in meetings for the better part of each day. You probably spend evenings and weekends catching up on email and actually doing the work you couldn’t get to during the day. And, your co-workers and executives are doing the same thing. So if you try to slow down, you find yourself at a disadvantage as you’re willfully pulling yourself out of an unfortunate culture of whenever-wherever business dynamics. If you’re unresponsive or unreachable, someone within your organization or on your team is accessible. Over time, this could contribute to unfavorable impressions. I choose to steer my life balance in ways that complement one another. But, I don’t pretend to have this figured out by any means. In fact, I find myself swimming upstream like those around me. It’s essentially a competition for relevance and at some point I’ll learn how to earn attention and relevance while redrawing the line between work and life. Q: How can people keep up with what you’re working on? A: The easy answer is that people can keep up with me at briansolis.com. But, I also try to reach people where their attention is focused. Whether it’s Facebook (facebook.com/briansolis), Twitter (@briansolis), Google+ (+briansolis), YouTube (briansolis.tv) or through books and conferences, people can usually find me in a place of their choosing. Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon? A: I spent some time with the Oracle team reviewing the idea of Digital Darwinism and how technology and society are evolving faster than many organizations can adapt. Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology Thursday, December 13, 2012, 10 a.m. PT / 1 p.m. ET Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peek of what to expect on Thursday’s webcast? A: We’re inviting guests to join us online as we dive into the future of business and how the convergence of technology and connected consumerism would ultimately impact how business is done. It’ll be an exciting and revealing conversation that explores just how much everything is changing. We’ll also review the importance of adapting to emergent trends and how to compete for the future. It’s important to recognize that change is not happening to us, it’s happening because of us. We are part of the revolution and therefore we need to help organizations adapt from the inside out. Watch the Entire Oracle Social Business Thought Leaders Webcast Series On-Demand and Stay Tuned for More to Come in 2013!

    Read the article

  • CodePlex Daily Summary for Sunday, April 04, 2010

    CodePlex Daily Summary for Sunday, April 04, 2010New ProjectsAcervo 2 - Gerenciador de coleções: Acervo 2 is a web application developed in ASP.NET 3.5 with Entity Framework, Coolite UI web controls and MySQL database that helps to catalog and ...AssemblyInfo Editor: AssemblyInfo Editor is a small Visual Studio 2010 extension I developed for my personal use mainly for automatically incrementing AssemblyVersion a...CommLine: It's a Command Line Interpreter. At the moment, it's a beta version, so I wait for developers that wanna help meFlowgraph Viewer: The flowgraph viewer enables users to view, build and share flowgraphs for the Crysis-franchise. It's built on Silverlight4, using MEF and Mvvmlight.Hash Calculator: WPF Windows 7 program to compute SHA1 & MD5 hash functions.MediaRSS library for .NET: This is a small set of libraries that allow you to create, read, and write MediaRSS files. By leveraging the syndication model object in .NET this...MEF Visualizer Tool: Helps to see what is going on inside the CompositionContainerone framework for developing asp.net project more elegent、flexible、and testable: if you are familiar with jsf、cdi、scoped javabean and work under asp.net, you may want to support aop and max flexibility and testability , all of ...Picasa Manager: A Silverlight Out Of Browser Application that Helps you manage your PicasaWeb albums in the easyest way possible.SharePhone: Windows Phone 7 library for connecting to SharePoint 2007/2010. Lets you work with SPWeb, SPList, reading/writing strong typed list items, user ...Silverlight Resource Extension: Silverlight Resource Extension. Extension silverlight project for use ResX resources and localize satellite dll.Silverlight Streamgraph: Streamgraph component for SilverlightTFTP Server: Managed TFTP server implementation, written in C#. Supports: - IPv4 and IPv6 - correct retry behavior. - TFTP options: block size, transfer size, a...Virtual UserGroup Video Helpers: This is a project that holds all the tools used by the C4MVC Virtual Usergroup. Tools written in C# and Powershell to automate, Live Meeting, Expr...xBlog: xBlog is a project to build a simple and extensible Blog Engine based on xml and linqXmlCodeEditor: XmlCodeEditor is a Silverlight 4 control based on RichTextControl that creates coloring and intellisense similar to the one in Visual Studio for ed...Zinc Launcher: Zinc Launcher is a simple Windows Media Center plugin that launches Zinc and attempts to manage the windows as seamlessly as possible. In addition ...New ReleasesAcervo 2 - Gerenciador de coleções: Acervo 2 - v1.0: Arquivos para implantação do sistema Acervo2 Aplicação web Web service Smart ClientAssemblyInfo Editor: Beta 1: Initial release of Assembly Info Editor. At this point, it is feature-complete and is relatively stable. There are undoubtedly some bugs to work o...Box2D.XNA: Box2D.XNA r70 Source Code and Solution: This version is synced to changeset 44697. This represents our official port of the C Box2D up to r70 on the Google Code project. 
With this versi...Boxee Launcher: Boxee Launcher Release 1.0.1.2: Will now stop Media Center playback before launching BoxeeBoxee Launcher: Boxee Launcher Release 1.0.1.3: Added a background window that attempts to display over the desktop and taskbar, and below Boxee and Media Center so that the desktop and taskbar a...CommLine: Beta Version 0.1: First Beta Of the AppCommLine: Source v0.1 Beta: Source Code C of 0.1 beta versionEncrypted Notes: Encrypted Notes 1.6.2: This is the latest version of Encrypted Notes (1.6.2), with general changes and improved randomness for the key generator. It has an installer that...Hash Calculator: HashCalculator: HashCalculator 1.0Hash Calculator: HashCalculator Source code: HashCalculator 1.0Hulu Launcher: Hulu Launcher 1.0.1.3: Added a background window that attempts to display over the desktop and taskbar, and below Hulu and Media Center so that the desktop and taskbar ar...Hulu Launcher: Hulu Launcher Release 1.0.1.2: Hulu Launcher will now stop playback in Media Center before launching Hulu Desktop.Innovative Games: 4.3 - Sprite Effects: Source code download for chapter 4.3 - "Sprite Effects"MediaRSS library for .NET: 0.1: Alpha release. Majority of MediaRSS spec is supported. A small set of unit test / sample code are included. A lightly tested CustomFormatter object...MEF Visualizer Tool: MEF Visualizer Tool 0.1: Help to see what going on in side CompositionContainer Container = new CompositionContainer( new AggregateCatalog( ...Ncqrs Framework - A CQRS framework for .NET: Ncqrs with sample application: This is the first release of the Ncqrs Framework. It contains the Ncqrs source code and a runnable sample application. All the code in this release...Rubik Cube's 3D Silverlight 3.0 Animated Solution: Rubik Cube 3D with Animated Solution: This project is a realization of Silverlight 3.0 Rubik Cube 3D with Animated Solution. The Solution is available for 3x3x3 cube, other features are...Scrabler: scrabler release 0.6.2.5: fixed a bug that werent executed some scriptsSharePhone: SharePhone: Initial release with basic functionality: Open SharePoint webs and subwebs Retrieve lists on SPWeb objects Read metadata/properties on lists ...SharePhone: SharePhone v.1.0.1: Fixed a bug that prevented saving list items to SharePointSharePoint Labs: SPLab4001A-FRA-Level100: SPLab4001A-FRA-Level100 This SharePoint Lab will teach you the first best practice you should apply when writing code with the SharePoint API. Lab ...Silverlight Resource Extension: ResourceExtension (alpha): Alpha version is not stable. Only for review.Silverlight Streamgraph: Port from processing.org: A port from the processing.org streamgraph. Code-heavy with very little XAML involved at this point.Theocratic Ministry School System: TMSS - Ver 1.1.1: What’s New! Added Menu Options 2010 Schedule Access 2007 Runtime There are still many uncompleted items so this is still a conceptual release....Theocratic Ministry School System: TMSS - Ver 1.1.2: Fixed the Schedule Import. Need needs to be tested. Click import button and make sure you can get the 2010 Schedule from the internet.thinktecture Starter STS (Community Edition): StarterSTS v1.0 RTW: Version 1.0 RTWTribe.Cache: Tribe.Cache Alpha - 0.2.0.0: Tribe.Cache Alpha - 0.2.0.0 - Now has sliding and absolute expiration on cache entries. 
Functional Alpha Release - But do not use in productionTwitterVB - A .NET Twitter Library: TwitterVB-2.3.1: This is mostly a minor release that adds br.st URL shortening to the menu (API key from http://br.st required)Virtu: Virtu 0.8.1: Source Requirements.NET Framework 3.5 with Service Pack 1 Visual Studio 2008 with Service Pack 1, or Visual C# 2008 Express Edition with Service Pa...Visual Studio DSite: Advanced C++ Calculator: An advanced visual c 2008 calculator that can do all your basic operations, plus some advanced mathematical functions. Source Code Only.xnaWebcam: xnaWebcam 0.3: xnaWebcam 0.3 Version 0.3: -ResolutionSet: 400x300 (Default), 800x600, 1024x720 -Settings Window got Icon -Settings Window Changes -DevConsole.cs ...Most Popular ProjectsRawrWBFS ManagerMicrosoft SQL Server Product Samples: DatabaseASP.NET Ajax LibrarySilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesDotNetNuke® Community EditionMost Active ProjectsGraffiti CMSnopCommerce. Open Source online shop e-commerce solution.RawrFacebook Developer ToolkitjQuery Library for SharePoint Web ServicesLINQ to TwitterBlogEngine.NETN2 CMSBase Class LibrariesFarseer Physics Engine

    Read the article

  • B2B and B2C Commerce are alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester’s first Commerce Wave focused on B2B, released earlier this month. The report validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech customers who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, the report confirms something very important: B2B and B2C Commerce are alike… but a little different. B2B and B2C Commerce are alike… Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: search & navigation, promotions, cross-channel commerce and mobile: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.” Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double and triple digit growth in their online channels. Many have seen the percentage of the business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing, and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). This momentum is likely the reason Forrester broke out the separate B2B Commerce Wave from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion). But a little different… Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer is coming to the Web channel as a part of their job. So in addition to a rich, consumer-like experience this shopper expects, these B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management.  Forrester also is emphasizing three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information, and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery.  We would like to expand on each of these 3 areas: As Forrester highlights, back-end PIM is definitely needed by B2B Commerce providers.
Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that PIM in a commerce platform is largely redundant with what they already have in-place, and is not fully-featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually handle. To meet the PIM needs for commerce, Oracle offers enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For Commerce, what customers really need is a robust product catalog and content management system for enabling business users to further enrich and ready catalog and content data to be presented and sold online. This has been a significant area of investment in the Oracle Commerce platform, which continues to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform without adding a largely redundant, less functional PIM in the commerce front-end. On the topic of web content management, we were pleased to see Forrester recognize Oracle’s unique functional capabilities in this area and the “unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formerly FatWire).” Strong content management capabilities are critical for distributors and manufacturers who are frequently serving an engineering audience coming to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online. Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles. We hear the same from most of our B2B customers, as they already have an ERP system—if not several of them—and are not interested in yet another one. So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform “has always had strong B2B commerce capabilities and Oracle has an exhaustive list of B2B customers using the solution.” What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we’ll address in an exciting new release in the coming months. Oracle has one of the world’s largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management, and ERP.
To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources: - 2013 B2B Commerce Trends Report - B2B Commerce Whitepaper: Consumerization, Complexity, Change - B2B Commerce Webcast: What Industry Trend Setters Do Right - Internet Retailer, Web Drives Sales for B2B Companies - Internet Retailer, The Web Means Business: B2B Companies Beef Up Their Websites, borrowing from b2c retailers and breaking new ground - Internet Retailer, B2B e-Commerce is poised for growth ----------THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT 

    Read the article

  • So, how is the Oracle HCM Cloud User Experience? In a word, smokin’!

    - by Edith Mireles-Oracle
    By Misha Vaughan, Oracle Applications User Experience Oracle unveiled its game-changing cloud user experience strategy at Oracle OpenWorld 2013 (remember that?) with a new simplified user interface (UI) paradigm.  The Oracle HCM cloud user experience is about light-weight interaction, tailored to the task you are trying to accomplish, on the device you are comfortable working with. A key theme for the Oracle user experience is being able to move from smartphone to tablet to desktop, with all of your data in the cloud. The Oracle HCM Cloud user experience provides designs for better productivity, no matter when and how your employees need to work. Release 8  Oracle recently demonstrated how fast it is moving development forward for our cloud applications, with the availability of release 8.  In release 8, users will see expanded simplicity in the HCM cloud user experience, such as filling out a time card and succession planning. Oracle has also expanded its mobile capabilities with task flows for payslips, managing absences, and advanced analytics. In addition, users will see expanded extensibility with the new structures editor for simplified pages, and with the user interface text editor, which allows you to update language throughout the UI from one place. If you don’t like calling people who work for you “employees,” you can use this tool to create a term that is suited to your business.  Take a look yourself at what’s available now. What are people saying? Debra Lilley (@debralilley), an Oracle ACE Director who has a long history with Oracle Applications, recently gave her perspective on release 8: “Having had the privilege of seeing a preview of release 8, I am again impressed with the enhancements around simplified UI. Even more so, at a user group event in London this week, an existing Cloud HCM customer speaking publically about his implementation said he was very excited about release 8 as the absence functionality was so superior and simple to use.”  In an interview with Lilley for a blog post by Dennis Howlett (@dahowlett), we probably couldn’t have asked for a more even-handed look at the Oracle Applications Cloud and the impact of user experience. Take the time to watch all three videos and get the full picture.  In closing, Howlett said: “There is always the caveat that getting from the past to Fusion [from the editor: Fusion is now called the Oracle Applications Cloud] is not quite as simple as may be painted, but the outcomes are much better than anticipated in large measure because the user experience is so much better than what went before.” Herman Slange, Technical Manager with Oracle Applications partner Profource, agrees with that comment. “We use on-premise Financials & HCM for internal use. Having a simple user interface that works on a desktop as well as a tablet for (very) non-technical users is a big relief. Coming from E-Business Suite, there is less training (none) required to access HCM content.  From a technical point of view, having the abilities to tailor the simplified UI very easy makes it very efficient for us to adjust to specific customer needs.  When we have a conversation about simplified UI, we just hand over a tablet and ask the customer to just use it. 
No training and no explanation required.” Finally, in a story by Computer Weekly  about Oracle customer BG Group, a natural gas exploration and production company based in the UK and with a presence in 20 countries, the author states: “The new HR platform has proved to be easier and more intuitive for HR staff to use than the previous SAP-based technology.” What’s Next for Oracle’s Applications Cloud User Experiences? This is the question that Steve Miranda, Oracle Executive Vice President, Applications Development, asks the Applications User Experience team, and we’ve been hard at work for some time now on “what’s next.”  I can’t say too much about it, but I can tell you that we’ve started talking to customers and partners, under non-disclosure agreements, about user experience concepts that we are working on in order to get their feedback. We recently had a chance to talk about possibilities for the Oracle HCM Cloud user experience at an Oracle HCM Southern California Customer Success Summit. This was a fantastic event, hosted by Shane Bliss and Vance Morossi of the Oracle Client Success Team. We got to use the uber-slick facilities of Allergan, our hosts (of Botox fame), headquartered in Irvine, Calif., with a presence in more than 100 countries. Photo by Misha Vaughan, Oracle Applications User Experience Vance Morossi, left, and Shane Bliss, of the Oracle Client Success Team, at an Oracle HCM Southern California Customer Success Summit.  We were treated to a few really excellent talks around human resources (HR). Alice White, VP Human Resources, discussed Allergan's process for global talent acquisition -- how Allergan has designed and deployed a global process, and global tools, along with Oracle and Cognizant, and are now at the end of a global implementation. She shared a couple of insights about the journey for Allergan: “One of the major areas for improvement was on role clarification within the company.” She said the company is “empowering managers and deputizing them as recruiters. Now it is a global process that is nimble and efficient."  Deepak Rammohan, VP Product Management, HCM Cloud, Oracle, also took the stage to talk about pioneering modern HR. He reflected modern HR problems of getting the right data about the workforce, the importance of getting the right talent as a key strategic initiative, and other workforce insights. "How do we design systems to deal with all of this?” he asked. “Make sure the systems are talent-centric. The next piece is collaborative, engaging, and mobile. A lot of this is influenced by what users see today. The last thing is around insight; insight at the point of decision-making." Rammohan showed off some killer HCM Cloud talent demos focused on simplicity and mobility that his team has been cooking up, and closed with a great line about the nature of modern recruiting: "Recruiting is a team sport." Deepak Rammohan, left, and Jake Kuramoto, both of Oracle, debate the merits of a Google Glass concept demo for recruiters on-the-go. Later, in an expo-style format, the Apps UX team showed several concepts for next-generation HCM Cloud user experiences, including demos shown by Jake Kuramoto (@jkuramoto) of The AppsLab, and Aylin Uysal (@aylinuysal), Director, HCM Cloud user experience. We even hauled out our eye-tracker, a research tool used to show where the eye is looking at a particular screen, thanks to teammate Michael LaDuke. 
Dionne Healy, HCM Client Executive, and Aylin Uysal, Director, HCM Cloud user experiences, Oracle, take a look at new HCM Cloud UX concepts. We closed the day with Jeremy Ashley (@jrwashley), VP, Applications User Experience, who brought it all back together by talking about the big picture for applications cloud user experiences. He covered the trends we are paying attention to now, what users will be expecting of their modern enterprise apps, and what Oracle’s design strategy is around these ideas. We closed with an excellent reception hosted by ADP Payroll services at Bistango. Want to read more? Want to see where our cloud user experience is going next? Read more on the UsableApps web site about our latest design initiative: “Glance, Scan, Commit.” Or catch up on the back story by looking over our Applications Cloud user experience content on the UsableApps web site.  You can also find out where we’ll be next at the Events page on UsableApps.

    Read the article

  • Community Conversation

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Applications User Experience Senior Director Laurie Pattison (left) with Anne Gentle at the User Assistance Europe Conference In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: * Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help * The Microsoft Development Network Community Center * ·The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. * Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches * The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes are reaching out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. 
It is largely a case now of "inventing the job while you're doing it, instead of being hired for it," Anne said. There is less emphasis on formal titles. Anne mentions that her own title is "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish."

Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, and rock star programmers with a penchant for YouTube are all ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization!

To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support.

Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model.

The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot afford to ignore the model's possibilities.
The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • Company Review: Google Products

    Google, Inc. offers an array of products and services to all of its end-users. However, its search capabilities are the foundation of Google's current success and its primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Their product decisions have allowed users' demands to be met while focusing on the free-based model. This allows users to access Google data free of charge and, along with the accuracy of the search results, indirectly gives Google a strong competitive advantage over its competitors. According to Google, Inc., they offer the following types of searching capabilities:

* Alerts: Get email updates on the topics of your choice
* Blog Search: Find blogs on your favorite topics
* Books: Search the full text of books
* Custom Search: Create a customized search experience for your community
* Desktop: Search and personalize your computer
* Dictionary: Search for definitions of words and phrases
* Directory: Search the web, organized by topic or category
* Earth: Explore the world from your computer
* Finance: Business info, news and interactive charts
* GOOG-411: Find and connect for free with businesses from your phone
* Images: Search for images on the web
* Maps: View maps and directions
* News: Search thousands of news stories
* Patent Search: Search the full text of US Patents
* Product Search: Search for stuff to buy
* Scholar: Search scholarly papers
* Toolbar: Add a search box to your browser
* Trends: Explore past and present search trends
* Videos: Search for videos on the web
* Web Search: Search billions of web pages
* Web Search Features: Find movies, music, stocks, books and more

Google's free-based business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed with which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. This method is particularly useful in accounting for needs that are not easily articulated or precisely defined, according to the U.S. Department of Transportation Federal Highway Administration. Because QFD is so customer-driven, Google is in a constant state of change as it attempts to reengineer its search algorithms and other dependent systems so that end-users' requirements are continually met.

Value engineering is a key example of this: Google is constantly trying to improve all aspects of its products, improve system maintainability, and improve system interoperability. Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest lifecycle cost options in design, materials and processes that achieves the desired level of performance, reliability and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large-scale systems like Google's search applications include modular design of a product and/or service and providing accurate value analysis.
The Open System Joint Task Force defines modular design as a design approach that adheres to four fundamental tenets (cohesiveness, encapsulation, self-containment, and high binding) in order to design a system component as an independently operable unit that is subject to change. More specifically, M. S. Schmaltz describes modular software design this way: instead of having a large collection of statements strung together in one partition of in-line code, we segment or divide the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems (see the sketch below).

Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost. Value analysis involves assessing the quality as well as the cost of a product or service, as defined by the Healthcare Financial Management Association.

"Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want." (MIT, 2010)

Google, Inc. encourages an open environment between all employees, also known as Googlers. This is reinforced by cross-functional teams, comprising people from multiple departments, assigned to every project so that every department, such as marketing, finance, and quality assurance, has input. In addition, Google is known for its openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows 20% of an employee's time to be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, cross-functional team members come from different organizational units. Cross-functional teams may be permanent or ad hoc.

Google's search application product strategy primarily focuses on mass customization. This allows Google to create a base search application and return results to end-users quickly based on specific parameters and search settings. In addition, Google stores the data that is returned in case other end-users supply the same customized settings and desire the same results. This allows Google to appear to render search results in virtually real time to the user while allowing for complete customization of the searching criteria (a caching pattern sketched below). Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as when a business gives its customers the opportunity to tailor its products or services to the customer's specifications. The IT staff at Google plays a key role in ensuring that the search application's product strategy is maintained, simply because the IT staff designs, develops, and maintains all of their proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users.
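To make the modular design idea concrete, here is a minimal sketch in Python. All module and function names are hypothetical illustrations, not Google's actual code; the point is simply that each small unit performs one task and then passes control to the next.

```python
# Hypothetical sketch of modular design: each function below is a small,
# self-contained module that performs one task and then passes control on.

def parse_query(raw_query):
    """Module 1: normalize the raw query string into lowercase search terms."""
    return [term.lower() for term in raw_query.split() if term]


def rank_results(terms, documents):
    """Module 2: score each document by the number of query-term occurrences."""
    scored = []
    for doc in documents:
        score = sum(doc.lower().count(term) for term in terms)
        if score > 0:
            scored.append((score, doc))
    # Highest-scoring documents first.
    return [doc for _, doc in sorted(scored, key=lambda pair: pair[0], reverse=True)]


def render_results(results):
    """Module 3: format the ranked results for display."""
    return "\n".join(f"{i + 1}. {doc}" for i, doc in enumerate(results))


def search(raw_query, documents):
    """Top-level module: controls the flow by chaining the smaller modules."""
    terms = parse_query(raw_query)
    ranked = rank_results(terms, documents)
    return render_results(ranked)


if __name__ == "__main__":
    docs = ["Google Maps directions", "Google Scholar papers", "Full-text book search"]
    print(search("Google search", docs))
```

Because each module is an independently operable unit, any one of them (for example, the ranking step) can be changed or replaced without disturbing the others, which is the benefit the definitions above describe.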
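The "store the data that is returned" behaviour described above is, in effect, a result cache keyed by the customized search settings. The following is another minimal, assumed sketch (hypothetical names again, not Google's implementation) of that caching pattern:

```python
# Hypothetical sketch of mass customization backed by a result cache:
# results are stored per (query, settings) combination so that repeat
# requests with the same customized settings return almost instantly.

import time

_cache = {}


def run_search(query, settings):
    """Stand-in for the expensive backend search."""
    time.sleep(0.5)  # simulate real work
    return f"results for {query!r} with settings {settings}"


def cached_search(query, **settings):
    """Return cached results when the same customized settings are supplied."""
    # Build a hashable cache key from the query plus the sorted settings.
    key = (query, tuple(sorted(settings.items())))
    if key not in _cache:
        _cache[key] = run_search(query, settings)  # first caller pays the cost
    return _cache[key]


if __name__ == "__main__":
    print(cached_search("operations research", language="en", safe_search=True))
    # Same query and settings: served from the cache, no backend call.
    print(cached_search("operations research", language="en", safe_search=True))
```

The first request for a given combination pays the full search cost; later identical requests are served from memory, which is the effect the article describes as rendering results in virtually real time.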
References:
* http://www.google.com/intl/en/options/
* http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
* http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
* http://www.acq.osd.mil/osjtf/termsdef.html
* http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
* http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
* http://mitsloan.mit.edu/omg/om-definition.php
* http://www.humtech.com/opm/grtl/ols/ols3.cfm
* http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article

< Previous Page | 265 266 267 268 269 270 271 272 273 274 275 276  | Next Page >