Search Results

Search found 5810 results on 233 pages for 'staff of geeks'.


  • Hosting and consuming WCF services without configuration files

    - by martinsj
    In this post, I'll demonstrate how to configure both the host and the client in code, without the need for configuring services in the <system.serviceModel> section of the config file. In fact, you don't need a <system.serviceModel> section at all. What you do need (and want) sometimes is the Uri of the service in the configuration file. Configuring the Uri of the service is actually only needed for the client or when self-hosting, not when hosting in IIS. So, exactly what do we need to configure?

    - The binding type and the binding constraints
    - The metadata behavior
    - The debug behavior

    You can of course configure even more, and even more than that if you want to; WCF is after all the king of configuration… As an example I'll be hosting and consuming a service that removes most of the default constraints for WCF services, using a BasicHttpBinding. Of course, in regards to security, it is probably better to have some constraints on the server, but this is only a demonstration. The ServiceConfig class in the code beneath is a static helper class that will be used in the examples. In this post, I'll be using this helper class for all configuration, for both the server and the client. In WCF, the client and the server each have their own WCF configuration. With this piece of code, they will be sharing the same configuration.

        public static class ServiceConfig
        {
            public static Binding DefaultBinding
            {
                get
                {
                    var binding = new BasicHttpBinding();
                    Configure(binding);
                    return binding;
                }
            }

            public static void Configure(HttpBindingBase binding)
            {
                if (binding == null)
                {
                    throw new ArgumentException("Argument 'binding' cannot be null. Cannot configure binding.");
                }

                binding.SendTimeout = new TimeSpan(0, 0, 30, 0); // 30 minute timeout
                binding.MaxBufferSize = Int32.MaxValue;
                binding.MaxBufferPoolSize = 2147483647;
                binding.MaxReceivedMessageSize = Int32.MaxValue;
                binding.ReaderQuotas.MaxArrayLength = Int32.MaxValue;
                binding.ReaderQuotas.MaxBytesPerRead = Int32.MaxValue;
                binding.ReaderQuotas.MaxDepth = Int32.MaxValue;
                binding.ReaderQuotas.MaxNameTableCharCount = Int32.MaxValue;
                binding.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
            }

            public static ServiceMetadataBehavior ServiceMetadataBehavior
            {
                get
                {
                    return new ServiceMetadataBehavior
                    {
                        HttpGetEnabled = true,
                        MetadataExporter = { PolicyVersion = PolicyVersion.Policy15 }
                    };
                }
            }

            public static ServiceDebugBehavior ServiceDebugBehavior
            {
                get
                {
                    var sdb = new ServiceDebugBehavior();
                    Configure(sdb);
                    return sdb;
                }
            }

            public static void Configure(ServiceDebugBehavior behavior)
            {
                if (behavior == null)
                {
                    throw new ArgumentException("Argument 'behavior' cannot be null. Cannot configure debug behavior.");
                }

                behavior.IncludeExceptionDetailInFaults = true;
            }
        }

    Configuring the server

    There are basically two ways to host a WCF service: in IIS, or self-hosting. When hosting a WCF service in a production environment using an SOA architecture, you'll most likely be hosting it in IIS. When testing the service in integration tests, it's very handy to be able to self-host services in the unit tests. In fact, you can share the WCF configuration for self-hosted services and services hosted in IIS. And that is exactly what you want to do: testing the same configuration for the test and production environments.

    Configuring when self-hosting

    When self-hosting, in order to start the service, you'll have to instantiate the ServiceHost class, configure the service and open it.

        // Create the service host.
        var host = new ServiceHost(typeof(MyService), endpoint);

        // Configure the binding
        host.AddServiceEndpoint(typeof(IMyService), ServiceConfig.DefaultBinding, endpoint);

        // Configure metadata behavior
        host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);

        // Configure debug behavior
        ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);

        // Start listening to the service
        host.Open();

    Configuring when hosting in IIS

    When you create a WCF service application with the wizard in Visual Studio, you'll end up with bits and pieces of code in order to get the service running:

    - an svc-file with code-behind
    - an interface for the service
    - Web.config

    In order to get rid of the configuration in the <system.serviceModel> section, which the wizard has generated for us, we must tell the service that we have a factory that will create the service for us. We do this by changing the markup of the svc-file:

        <%@ ServiceHost Language="C#" Debug="true" Service="Namespace.MyService" Factory="Namespace.ServiceHostFactory" %>

    The markup tells IIS that we have a factory called ServiceHostFactory for this service. The service factory has a method we can override which will be called when someone asks IIS for the service. There are two overloads we can override:

        System.ServiceModel.ServiceHostBase CreateServiceHost(string constructorString, Uri[] baseAddresses)
        System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)

    In this example, we'll be using the last one, so our implementation looks like this:

        public class ServiceHostFactory : System.ServiceModel.Activation.ServiceHostFactory
        {
            protected override System.ServiceModel.ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                var host = base.CreateServiceHost(serviceType, baseAddresses);
                host.Description.Behaviors.Add(ServiceConfig.ServiceMetadataBehavior);
                ServiceConfig.Configure((ServiceDebugBehavior)host.Description.Behaviors[typeof(ServiceDebugBehavior)]);
                return host;
            }
        }

    As you can see, we are using the same configuration helper we used when self-hosting. Now that you have a factory, the <system.serviceModel> section of the configuration can be removed, because the section will be ignored when the service has a custom factory. If you want to configure something else in the config file, you can do so in some other section.

    Configuring the client

    Microsoft has helpfully created a ChannelFactory class in order to create a proxy client. When using this approach, you don't have to generate those awful proxy classes for the client. If you share the contracts with the server in their own assembly, you can share the same piece of code. The contracts in WCF are the interface to the service and, if any, the data contracts (custom types) the service depends on. Using the ChannelFactory with our configuration helper class is very simple:

        var identity = EndpointIdentity.CreateDnsIdentity("localhost");
        var endpointAddress = new EndpointAddress(endPoint, identity);
        var factory = new ChannelFactory<IMyService>(ServiceConfig.DefaultBinding, endpointAddress);
        var myService = factory.CreateChannel();
        myService.Hello();
        factory.Close();

    Happy configuration!
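    For completeness: the samples above reference an IMyService contract and a MyService implementation without showing them. A minimal sketch of what such a shared contract assembly could contain looks like the following (the Hello operation is taken from the client sample; the rest is standard WCF attribute usage, not code from the original post):

        using System.ServiceModel;

        // Shared contract assembly, referenced by both the host and the client.
        [ServiceContract]
        public interface IMyService
        {
            [OperationContract]
            void Hello();
        }

        // Service implementation living in the host.
        public class MyService : IMyService
        {
            public void Hello()
            {
                // Real work would go here.
            }
        }

    With the contract in its own assembly, the ChannelFactory<IMyService> call above compiles on the client without any generated proxy classes.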

    Read the article

  • Thread.Interrupt Is Evil

    - by Alois Kraus
    Recently I found an interesting issue with Thread.Interrupt during application shutdown. Some application was crashing once a week and we did not really have a clue what the issue was. Since it did not happen very often it was left as is until we got some memory dumps during the crash. A memory dump usually means WinDbg, which I really like to use (I know I am one of the very few fans of it). After a quick analysis I found that the main thread had already exited and the thread with the crash was stuck in a Monitor.Wait. Strange indeed. Running the application a few thousand times under the debugger would potentially not have shown me what the reason was, so I decided to do what I call constructive debugging. I created a simple console application project and tried to simulate the exact circumstances under which the crash happened, from the information I had via the memory dump and source code reading. The thread that was crashing was actually MS code from an old version of the Microsoft Caching Application Block. From reading the code I could conclude that the main thread called the Dispose method on the CacheManager class, which called Thread.Interrupt on the cache scavenger thread, which was just waiting for work to do. My first version of the repro looked like this:

        static void Main(string[] args)
        {
            Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
            t.Start();
            Console.WriteLine("Interrupt Thread");
            t.Interrupt();
        }

        static void ThreadFunc()
        {
            while (true)
            {
                object value = Dequeue(); // block until unblocked or awakened via ThreadInterruptedException
            }
        }

        static object WaitObject = new object();

        static object Dequeue()
        {
            object lret = "got value";
            try
            {
                lock (WaitObject)
                {
                }
            }
            catch (ThreadInterruptedException)
            {
                Console.WriteLine("Got ThreadInterruptedException");
                lret = null;
            }
            return lret;
        }

    I start a background thread, call Thread.Interrupt on it and then directly let the application terminate. The thread in the meantime does plenty of Monitor.Enter/Leave calls to simulate work on it. This first version did not crash, so I needed to dig deeper. From the memory dump I knew that the finalizer thread was running just some critical finalizers which were closing file handles. OK, let's add some long-running finalizers to the sample.

        class FinalizableObject : CriticalFinalizerObject
        {
            ~FinalizableObject()
            {
                Console.WriteLine("Hi we are waiting to finalize now and block the finalizer thread for 5s.");
                Thread.Sleep(5000);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                FinalizableObject fin = new FinalizableObject();
                Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" };
                t.Start();
                Console.WriteLine("Interrupt Thread");
                t.Interrupt();
                GC.KeepAlive(fin); // prevent finalizing it too early
                // After leaving main the other thread is woken up via Thread.Abort
                // while we are finalizing. This causes a stackoverflow in the CLR
                // ThreadAbortException handling at this time.
            }

            // ThreadFunc, WaitObject and Dequeue are unchanged from the first version.
        }

    With this changed Main method and a blocking critical finalizer I got my crash just like the real application. The funny thing is that this is actually a CLR bug. When the Main method is left, the CLR suspends all threads except the finalizer thread and declares all objects as garbage. After the normal finalizers have been called, the critical finalizers are executed to e.g. free OS handles (usually). Remember that I called Thread.Interrupt as one of the last methods in the Main method. The Interrupt method is actually asynchronous: it wakes a thread up and throws a ThreadInterruptedException only once, unlike Thread.Abort, which rethrows the exception when an exception handling clause is left. It seems that the CLR does not expect a frozen thread to wake up again while the critical finalizers are executed. While trying to raise a ThreadInterruptedException the CLR goes down with a stack overflow. Oops, not so nice. Why nobody has noticed this for years is my next question. As it turned out, this error only happens on the CLR for .NET 4.0 (x86 and x64). It does not show up in earlier or later versions of the CLR. I have reported this issue on Connect, but so far it has not been confirmed as a CLR bug. But I would be surprised if my console application were to blame for a stack overflow in my test thread in a Monitor.Wait call. What is the moral of this story? Thread.Abort is evil, but Thread.Interrupt is too. It is so evil that even the CLR of .NET 4.0 contains a race condition during CLR shutdown. When the CLR gurus can get it wrong, the chances are high that you will get it wrong too when you use these constructs. If you do not believe me, see what Patrick Smacchia blogs about Thread.Abort and List.Sort. Not only the CLR creators can get it wrong; the BCL writers sometimes have a hard time with correct exception handling as well. If you tell me that you use Thread.Abort frequently and have never had problems with it, I suspect that you have not looked deeply enough into your application to find such sporadic errors.

    Read the article

  • How to Configure Microsoft Word 2013 to Connect to Geekswithblogs

    - by Enrique Lima
    The first step in this process is to open Word 2013. Once there, you will have the different templates available. You will select Blog Post.  Once the template for Blog Post opens, you will have a dialog popup with the option to Register a Blog Account. And click on Register Now.  The next part of the dialog will prompt you to provide the New Blog Account details, starting with the type of Blog you have (SharePoint, WordPress, TypePad and others are listed). In our case for GeeksWithBlogs, we will select Other.  Now come the juicy details! Under the New Account dialog, you will have the API set to MetaWebLog.Then provide the Blog Post URL, this needs to be http://geekswithblogs.net/<your-account>/services/metablogapi.aspx (remember to change the <your-account> part with your info).Then, enter your User Name and Password, click OK and you should be set (you will receive a dialog letting you know information will be transferred).  Hope it works for you!

    Read the article

  • Windows8, JavaScript and HTML5 - A good thing?

    - by Albers
    Most of us have seen the Windows 8 news regarding support for native HTML5/JavaScript applications. The press has pushed this as a potential threat to the .NET developer community because JavaScript and HTML5 were called "our new developer platform". The press release refers to "Web-connected and Web-powered apps built using HTML5 and JavaScript that have access to the full power of the PC.".Microsoft has also been hush on details related to these comments. Before we buy the hype and start worrying about a world where we drop our Visual Studio licenses and buy DreamWeaver - let's think about how Windows 8 HTML/JavaScript applications would be implemented. The HTML5 spec offers support for offline applications, but this won't offer the OS-integrated experience the press release refers to. MS has to be planning a way to extend access beyond the traditional JavaScript feature set. Microsoft has a similar option today: HTML Applications or HTAs. They come close to required features, but HTAs need ActiveX or Java integration to provide the promised OS-level access. I'm guessing that Microsoft's future OS strategy isn't built on developers cranking out ActiveX controls or Java applets. So where is Microsoft headed? One possibility is that MS builds a new JavaScript framework from the ground up outside their current APIs. Another idea would be for Microsoft to add support for JavaScript as a first class .NET language using the Dynamic Language Runtime. A solution based on the DLR could be integrated into an HTA-like model to provide the promised access, along with the full range of features in .NET Framework. Security comes included in the Framework. And the work necessary to support this integration would tie in nicely with the effort MS has recently made providing better JavaScript and HTML5 support in Visual Studio 2010. As a bonus, a full-fledged JavaScript DLR implementation would allow single language web solutions across client and server (think node.js) and would appeal to developers who are familiar with JavaScript but have less experience with the Microsoft tech stack. We will all get a better picture after the Build conference in September. But in the mean time we know that Microsoft has a reputation for providing strong developer support. We might want to reserve our harshest judgement and consider that the press release could hint at new opportunities for .NET development.

    Read the article

  • The Enterprise is a Curmudgeon

    - by John K. Hines
    Working in an enterprise environment is a unique challenge.  There's a lot more to software development than developing software.  A project lead or Scrum Master has to manage personalities and intra-team politics, has to manage accomplishing the task at hand while creating the opportunities and a reputation for handling desirable future work, has to create a competent, happy team that actually delivers while being careful not to burn bridges or hurt feelings outside the team.  Which makes me feel surprised to read advice like: " The enterprise should figure out what is likely to work best for itself and try to use it." - Ken Schwaber, The Enterprise and Scrum. The enterprises I have experience with are fundamentally unable to be self-reflective.  It's like asking a Roman gladiator if he'd like to carve out a little space in the arena for some silent meditation.  I'm currently wondering how compatible Scrum is with the top-down hierarchy of life in a large organization.  Specifically, manufacturing-mindset, fixed-release, harmony-valuing large organizations.  Now I understand why Agile can be a better fit for companies without much organizational inertia. Recently I've talked with nearly two dozen software professionals and their managers about Scrum and Agile.  I've become convinced that a developer, team, organization, or enterprise can be Agile without using Scrum.  But I'm not sure about what process would be the best fit, in general, for an enterprise that wants to become Agile.  It's possible I should read more than just the introduction to Ken's book. I do feel prepared to answer some of the questions I had asked in a previous post: How can Agile practices (including but not limited to Scrum) be adopted in situations where the highest-placed managers in a company demand software within extremely aggressive deadlines? Answer: In a very limited capacity at the individual level.  The situation here is that the senior management of this company values any software release more than it values developer well-being, end-user experience, or software quality.  Only if the developing organization is given an immediate refactoring opportunity does this sort of development make sense to a person who values sustainable software.   How can Agile practices be adopted by teams that do not perform a continuous cycle of new development, such as those whose sole purpose is to reproduce and debug customer issues? Answer: It depends.  For Scrum in particular, I don't believe Scrum is meant to manage unpredictable work.  While you can easily adopt XP practices for bug fixing, the project-management aspects of Scrum require some predictability.  My question here was meant toward those who want to apply Scrum to non-development teams.  In some cases it works, in others it does not. How can a team measure if its development efforts are both Agile and employ sound engineering practices? Answer: I'm currently leaning toward measuring these independently.  The Agile Principles are a terrific way to measure if a software team is agile.  Sound engineering practices are those practices which help developers meet the principles.  I think Scrum is being mistakenly applied as an engineering practice when it is essentially a project management practice.  In my opinion, XP and Lean are examples of good engineering practices. How can Agile be explained in an accurate way that describes its benefits to sceptical developers and/or revenue-focused non-developers? 
Answer: Agile techniques will result in higher-quality, lower-cost software development.  This comes primarily from finding defects earlier in the development cycle.  If there are individual developers who do not want to collaborate, write unit tests, or refactor, then these are simply developers who are either working in an area where adding these techniques will not add value (i.e. they are an expert) or they are a developer who is satisfied with the status quo.  In the first case they should be left alone.  In the second case, the results of Agile should be demonstrated by other developers who are willing to receive recognition for their efforts.  It all comes down to individuals, doesn't it?  If you're working in an organization whose Agile adoption consists exclusively of Scrum, consider ways to form individual Agile teams to demonstrate its benefits.  These can even be virtual teams that span people across org-chart boundaries.  Once you can measure real value, whether it's Scrum, Lean, or something else, people will follow.  Even the curmudgeons.

    Read the article

  • Creating Parent-Child Relationships in SSRS

    - by Tim Murphy
    As I have been working on SQL Server Reporting Services reports the last couple of weeks I ran into a scenario where I needed to present a parent-child data layout.  It is rare that I have seen a report that was a simple tabular or matrix format and this report continued that trend.  I found that the processes for developing complex SSRS reports aren’t as commonly described as I would have thought.  Below I will layout the process that I went through to create a solution. I started with a List control which will contain the layout of the master (parent) information.  This allows for a main repeating report part.  The dataset for this report should include the data elements needed to be passed to the subreport as parameters.  As you can see the layout is simply text boxes that are bound to the dataset. The next step is to set a row group on the List row.  When the dialog appears select the field that you wish to group your report by.  A good example in this case would be the employee name or ID. Create a second report which becomes the subreport.  The example below has a matrix control.  Create the report as you would any parameter driven document by parameterizing the dataset. Add the subreport to the main report inside the row of the List control.  This can be accomplished by either dragging the report from the solution explorer or inserting a Subreport control and then setting the report name property. The last step is to set the parameters on the subreport.  In this case the subreport has EmpId and ReportYear as parameters.  While some of the documentation on this states that the dialog will automatically detect the child parameters, but this has not been my experience.  You must make sure that the names match exactly.  Tie the name of the parameter to either a field in the dataset or a parameter of the parent report. del.icio.us Tags: SQL Server Reporting Services,SSRS,SQL Server,Subreports

    Read the article

  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released. NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high quality code. A month ago I wrapped up my evaluation of the previous version of NDepend. The new version contains many minor changes, several bug fixes, and adds about 50 new code rules. The version also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0. But the biggest change was the shift from CQL to CQLinq.

    Introducing CQLinq

    The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried). As you might guess, CQLinq is a flavor of Linq designed specifically for the code rules. The best way to illustrate the differences is with an example. I used the following CQL example in Part 3 of my review:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike “I”

    This same query looks like this when implemented in CQLinq:

        warnif count > 0
        from t in Types
        where t.IsInterface == true && !t.NameLike(“I”)
        select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window. The Queries and Rules Edit window replaces the CQL Query Edit window. The new editor has the same style of Intellisense as the previous editor. However, it has a few annoyances. The error indicator is a red block, and it has the tendency of obscuring your cursor. Additionally, writing CQLinq queries is like writing plain old Linq queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring. These issues can be an obstacle to writing queries quickly. CQLinq makes it possible to write rules that weren't possible before. Additionally, a JustMyCode domain is now possible, making it easy to eliminate generated code from the analysis.

    Should you buy?

    I recommend NDepend overall. It has some rough points for me that I have detailed in my earlier evaluation (starting here). But it's definitely worth the money. The bigger question is: should I pay for the upgrade to 4.0? At this point I'm on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0, or if you need one of the many rules that weren't possible before CQLinq.

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Resources: NDepend Release Notes

    Read the article

  • Scenarios for Throwing Exceptions

    - by Joe Mayo
    I recently came across a situation where someone had an opinion that differed from mine about when an exception should be thrown. This particular case was an issue opened on LINQ to Twitter for an exception on EndSession. The premise of the issue was that the poster didn't feel an exception should be raised, regardless of authentication status. At first, this sounded like a valid point. However, I went back to review my code and decided not to make any changes. Here's my rationale:

    1. The exception doesn't occur if the user is authenticated when EndAccountSession is called.
    2. The exception does occur if the user is not authenticated when EndAccountSession is called.
    3. The exception represents the fact that EndAccountSession is not able to fulfill its intended purpose - to end the session. If a session never existed, then it would not be possible to perform the requested action. Therefore, an exception is appropriate.

    To help illustrate how to handle this situation, I've modified the following code in Program.cs in the LinqToTwitterDemo project:

        static void EndSession(ITwitterAuthorizer auth)
        {
            using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
            {
                try
                {
                    //Log
                    twitterCtx.Log = Console.Out;

                    var status = twitterCtx.EndAccountSession();
                    Console.WriteLine("Request: {0}, Error: {1}"
                        , status.Request
                        , status.Error);
                }
                catch (TwitterQueryException tqe)
                {
                    var webEx = tqe.InnerException as WebException;
                    if (webEx != null)
                    {
                        var webResp = webEx.Response as HttpWebResponse;
                        if (webResp != null && webResp.StatusCode == HttpStatusCode.Unauthorized)
                            Console.WriteLine("Twitter didn't recognize you as having been logged in. Therefore, your request to end session is illogical.\n");
                    }

                    var status = tqe.Response;
                    Console.WriteLine("Request: {0}, Error: {1}"
                        , status.Request
                        , status.Error);
                }
            }
        }

    As expected, LINQ to Twitter wraps the original exception in a TwitterQueryException as the InnerException. The TwitterQueryException serves a very useful purpose through its Response property. Notice in the example above that the response has Request and Error properties. These properties correspond to the information that Twitter returns as part of its response payload. This is often useful while debugging to help you understand why Twitter was unable to perform the requested action. Other times it's cryptic, but that's another story. At least you have some way of knowing in your code how to anticipate and handle these situations, along with having extra information to debug with.

    To sum things up, there are two points to make: when and why an exception should be raised, and when to wrap and re-throw an exception in a custom exception type. I felt it was necessary to allow the exception to be raised because the called method was unable to perform the task it was designed for. I also felt that it is inappropriate for a general library to do anything with exceptions, because that could potentially hide a problem from the caller. A related point is that it should be the exclusive decision of the application that uses the library what to do with an exception. Another aspect of this situation is that I wrapped the exception in a custom exception and re-threw. This is a tough call because I don't want to hide any stack trace information. However, the need to make the exception more meaningful by including vital information returned from Twitter swayed me in the direction of designing an interface that is as helpful as possible to library consumers. As shown in the code above, you can dig into the exception and pull out a lot of good information, such as the fact that the underlying HTTP response was a 401 Unauthorized. In all, trade-offs are seldom perfect for all cases, but combining the fact that the method was unable to perform its intended function, that this is a library, and that the extra information can be more helpful, it seemed to be the better design. @JoeMayo
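    The wrap-and-re-throw approach described above comes down to passing the original exception as the InnerException, so no stack trace is lost, while adding the extra response details that make the failure easier to diagnose. A rough sketch of that pattern (this is an illustrative class, not the actual TwitterQueryException source; responseBody stands in for whatever was read from the failed HTTP response):

        using System;

        public class SampleQueryException : Exception
        {
            public SampleQueryException(string message, string responseDetails, Exception inner)
                : base(message, inner) // the original exception and its stack trace are preserved
            {
                ResponseDetails = responseDetails;
            }

            // Extra payload returned by the remote service, surfaced for the caller.
            public string ResponseDetails { get; private set; }
        }

        // Inside the library, after reading the body of the failed HTTP response:
        // catch (WebException webEx)
        // {
        //     throw new SampleQueryException("The request could not be completed.", responseBody, webEx);
        // }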

    Read the article

  • O'Reilly deals to April 5, 2012 14:00 PT on books on "where"

    - by TATWORTH
    At http://shop.oreilly.com/category/deals/where-conference.do, O'Reilly are offering a series of books on geo-location at 50% off until April 5, 2012 14:00 PT. HTML5 Geolocation Truly revolutionary: now you can write geolocation applications directly in the browser, rather than develop native apps for particular devices. This concise book demonstrates the W3C Geolocation API in action, with code and examples to help you build HTML5 apps using the "write once, deploy everywhere" model. Along the way, you get a crash course in geolocation, browser support, and ways to integrate the API with common geo tools like Google Maps. HTML5 Cookbook With scores of practical recipes you can use in your projects right away, this cookbook helps you gain hands-on experience with HTML5’s versatile collection of elements. You get clear solutions for handling issues with everything from markup semantics, web forms, and audio and video elements to related technologies such as geolocation and rich JavaScript APIs. Each informative recipe includes sample code and a detailed discussion on why and how the solution works. Perfect for intermediate to advanced web and mobile web developers, this handy book lets you choose the HTML5 features that work for you—and helps you experiment with the rest. HTML5 Applications HTML5 is not just a replacement for plugins. It also makes the Web a first-class development environment by giving JavaScript programmers a solid foundation for building industrial-strength applications. This practical guide takes you beyond simple site creation and shows you how to build self-contained HTML5 applications that can run on mobile devices and compete with desktop apps. You’ll learn powerful JavaScript tools for exploiting HTML5 elements, and discover new methods for working with data, such as offline storage and multi-threaded processing. Complete with code samples, this book is ideal for experienced JavaScript and mobile developers alike. There are also other books being offered at a discount at http://shop.oreilly.com/category/deals/where-conference.do

    Read the article

  • Topeka Dot Net User Group (DNUG) Meeting – April 6, 2010

    - by Robz / Fervent Coder
    Topeka DNUG is free for anyone to attend! Mark your calendars now! SPEAKER: Troy Tuttle is a self-described pragmatic agilist, and Kanban practitioner, with more than a decade of experience in delivering software in the finance and health industries and as a consultant. He advocates teams improve their performance through pursuit of better practices like continuous integration and automated testing. Troy is the founder of the Kansas City Limited WIP Society and is a speaker at local area groups on team related topics. He currently works as a Project Lead Consultant with AdventureTech Group of Kansas City, KS. TOPIC: Why Kanban? Kanban is receiving a large amount of attention recently. What does it offer compared to other approaches? Answering that question may require you to hit the “reset” button on previously held biases and assumptions. Kanban blends Lean thought with ideas from first generation agile methodologies. To get started with Kanban, we will examine what steps are necessary to establish a transparent, work-limited, pull system. We will highlight the perils of allowing too much work-in-progress and how it affects development performance. Once established, Kanban teams need only a few metrics and tools to monitor their performance and improvement. WHERE: Federal Home Loan Bank Topeka on the Security Benefit Campus – Directions? WHEN: 11:30 AM - 1:00 PM on April 6th, 2010 REGISTER: http://topekadotnet.wufoo.com/forms/topeka-dnug-meeting-attendance/ ADDITIONAL INFO: As always, please sign in and out of FHLBank to help them with their accountability. Please park in the visitors section at the front of the building when you arrive. If  there are no spots in visitors you may park in the overflow lot at the far east end of the facility.  Lunch will be provided and we will have some great door prizes!

    Read the article

  • Verifying Office 2010 SP1 Installation

    - by Chris Heacock
    So you downloaded and installed SP1, but now you want to verify that SP1 actually got installed! Looking at Outlook's Help screen under Help -> About, it isn't readily apparent that SP1 is installed. Like me, you probably expected to see 14.1.something. Perhaps 14.0.something SP1, right? If you click on that "Additional Version and Copyright Information" link, another window will pop up and show you a bit more useful info (if you don't have the version numbers committed to memory). That window *does* give us that comforting SP1, and now we can determine that if you have Office 14.0.6023.1000 (and beyond) you are indeed running Office 2010 SP1!
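    If you would rather check this programmatically than hunt through the About dialog, a quick sketch like the one below reads the file version of the Word executable (the install path is an assumption for a default 32-bit Office 2010 setup on 64-bit Windows, so adjust it for your machine):

        using System;
        using System.Diagnostics;

        class CheckOfficeSp1
        {
            static void Main()
            {
                // Default path for a 32-bit Office 2010 install on 64-bit Windows (adjust as needed).
                var path = @"C:\Program Files (x86)\Microsoft Office\Office14\WINWORD.EXE";
                var info = FileVersionInfo.GetVersionInfo(path);

                Console.WriteLine("Word file version: {0}", info.FileVersion);

                // Office 2010 is major version 14; SP1 builds start at 14.0.6023.
                bool hasSp1 = info.FileMajorPart == 14 && info.FileBuildPart >= 6023;
                Console.WriteLine(hasSp1 ? "SP1 (or later) appears to be installed." : "SP1 not detected.");
            }
        }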

    Read the article

  • Oh that XML - did you ever try to read a raw file?

    - by GGBlogger
    If you've ever looked at a raw XML file - even a very simple one - you'll understand. XML files are nearly impossible to read in raw format. That's where various tools come in and there are a bunch of them including some very simple tools. If, however, you need some horsepower one of the best tools on the planet is LiquidXML! LiquidXML is a developer's tool. It's also an analyst's tool, a tester's tool and a designer's tool. Did I mention that it is compatible with Visual Studio? Once again I will be following up on this as time permits. But if this sounds like something you can use just visit http://www.liquid-technologies.com/. You will find a very complete description plus high quality training videos that will help you decide if this is a tool you can use.

    Read the article

  • SharePoint Saturday LA – Free Conference

    - by MOSSLover
    There are four really cool national board members for Women in SharePoint: Cathy Dew, Nedra Allmond, Michelle Strah, and Lori Gowin.  Nedra is running Women in SharePoint West, and she also just happens to be helping out with SharePoint Saturday LA.  If you guys had no idea that California also has SharePoint Saturdays, then you were wrong.  There is a SharePoint Saturday on April 2nd in the greater Los Angeles area.  If anyone in the vicinity is interested, please visit this site: http://www.sharepointsaturday.org/la/default.aspx. Technorati Tags: SharePoint Saturday,Los Angeles,SharePoint 2010,SharePoint Events

    Read the article

  • XSL-FO intellisense and national languages with Apache FOP

    - by Lukasz Kurylo
    Some time ago I showed how to get intellisense and how to configure FO.NET to produce national characters inside the generated pdf files. Due to the limitations that I mentioned in my previous post, I started playing with Apache FOP. In this post I want to show how to achieve the same results as I showed in the two posts related to FO.NET.

    Intellisense

    To get intellisense in the XSL-FO templates, set the xsi:schemaLocation the same way I showed in this post. The only difference from FO.NET is that, during generation of the document by the code I showed last time, we will get an exception:

        org.apache.fop.fo.ValidationException: Invalid property encountered on "fo:root": xsi:schemaLocation (See position 6:11)

    Fortunately there is a very easy way to resolve this without removing the entire attribute along with the intellisense. Add the ignored namespace to the FopFactory:

        FopFactory fopFactory = FopFactory.newInstance();
        fopFactory.ignoreNamespace("http://www.w3.org/2001/XMLSchema-instance");

    Notice that the URL specified in this method is the namespace behind the xmlns:xsi declaration, not the xsi:schemaLocation value.

    Fonts / national characters

    This point is a little different to achieve, but not more complicated than it was with FO.NET. To set the fonts in Apache FOP 1.0, we need a configuration file. A sample one can be taken from the directory where we unpacked the FOP binaries, from the conf subdirectory; there is a file called fop.xconf. We must copy this file to our solution. In the simplest case, in the <fonts> tag we can add <auto-detect/>. Thanks to this, FOP will index all fonts available on the installed operating system. There should be no problem if we have an HTTP handler or a WCF service on the server that serves the generated pdf documents; in this situation we can use all the fonts available on that server.

    To use this config file, we must set a path to it:

        FopFactory fopFactory = FopFactory.newInstance();
        fopFactory.setUserConfig(new File("fop.xconf"));

    Read the article

  • Configuring Full-Text Search for pdf and docx files

    - by Lukasz Kurylo
    I think in May I was creating a little filters module based on Full-Text Search. I had configured my dev machine, then the two testing servers - one in our company for internal testing before we deployed it to the client, and then the client's testing server. Until last week this build was still on the testing server, and finally we got feedback that we can deploy it on the production one. Let me only say that I lost half a day because I had not correctly remembered what I did to configure FTS on the previous servers, and I had no notes for it. I foolishly believed in my memory. Lesson learned.

    For future reference, a bunch of steps to configure FTS for searching in *.pdf and *.docx files (and by the way in other Office files like *.xlsx):

    1. From the page (link) download and install the *.pdf IFilter for FTS.
    2. To the PATH global system variable, add the path to the folder where you installed the plugin. The default for this version is: C:\Program Files\Adobe\Adobe PDF iFilter 9 for 64-bit platforms\bin
    3. From the page (link) download FilterPackx64.exe and install it.
    4. Now from SSMS execute the following procedures:
       - sp_fulltext_service 'load_os_resources', 1
       - sp_fulltext_service 'verify_signature', 0
    5. Restart the server.
    6. Now we must check if the plugins are visible:
       - select document_type, path from sys.fulltext_document_types where document_type = '.pdf'
       - select document_type, path from sys.fulltext_document_types where document_type = '.docx'
    7. If we see a result, then we can assume that everything is ok*.
    8. Right now we can create a catalog for FTS and indexes on the appropriate columns.

    *I lost a lot of hours finding out why the plugin for the *.pdf files wasn't indexing any file in the database, even though a line for this plugin was present in the sys.fulltext_document_types table. After deeper investigation I found that the *.pdf files actually were being indexed; at least the EOF sign was added to the index, and nothing more, for each file. In the end the problem was that I forgot to add the \bin part to the plugin path in the PATH variable.
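    Once the catalog and indexes from step 8 exist, the indexed documents can be queried with a CONTAINS predicate, for example from .NET. A minimal sketch (the table, column, and connection string below are made-up names for illustration, not part of the original setup):

        using System;
        using System.Data.SqlClient;

        class FullTextSearchDemo
        {
            static void Main()
            {
                // Hypothetical 'Documents' table with a full-text indexed 'Content' column.
                const string connectionString = @"Server=.;Database=DocStore;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT FileName FROM Documents WHERE CONTAINS(Content, @term)", connection))
                {
                    command.Parameters.AddWithValue("@term", "\"invoice\"");
                    connection.Open();

                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine(reader.GetString(0));
                        }
                    }
                }
            }
        }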

    Read the article

  • Learning Electronics & the Arduino Microcontroller

    - by Chris Williams
    Lately, I've had a growing interest in Electronics & Microcontrollers. I'm a loyal reader of Make Magazine and thoroughly enjoy seeing all the various projects in each issue, even though I rarely try to make any of them. I've been reading and watching videos about the Arduino, which is an open source Microcontroller and software project that the people at Make (and a lot of other folks) are pretty hot about. Even the prebuilt hardware is remarkably inexpensive , although there are kits available to build one from the base components. (Full disclosure: I bought my first soldering iron... EVER... just last week, so I fully acknowledge the likelihood of making some mistakes. That's why I'm not trying to do the "build it yourself" kit just yet. It's also another reason to be happy the hardware is so cheap.) There are a number of different Arduino boards available, but the two that have really piqued my interest are the Arduino UNO and the NETduino. The UNO is a very popular board, with a number of features and is under $35 which means I won't hurl myself off a bridge when I inevitably destroy it. The NETduino is very similar to the Arduino UNO and has the added advantage of being programmable with... you guessed it... C#. I'm actually ordering both boards and some miscellaneous other doodads to go with them.  There are a few good websites for this sort of thing, including www.makershed.com and www.adafruit.com. The price difference is negligible, so in my case, I'm ordering from Maker Shed (the Make Magazine people) because I want to support them. :) I've also picked up a few O'Reilly books on the subject which I am looking forward to reading & reviewing: Make: Electronics, Arduino: A Quick Start Guide and Getting Started With Arduino (all three of which arrived on my doorstep today.) This ties in with my "learn more about robotics" goals as well, since I'll need a good understanding of Electronics if I want to move past Lego Mindstorms eventually.

    Read the article

  • An alternative way to request read receipts

    - by lavanyadeepak
    Sometime or other we use messaging namespaces like System.Net.Mail or System.Web.Mail to send emails from our applications. When we need to include headers to request delivery or read receipts (often called Message Disposition Notifications), we run into the limitation that not all email servers and email clients can satisfy this. We can push this boundary a little now, thanks to an innovation I discovered from Gawab. It embeds a small invisible image of 1x1 dimension, and the image source reads as recieptimg.php?id=2323425324. When this image is requested by the web browser or email client, the server-side handler does a smart mapping based on the ID to indicate that the message was read. These are often called 'web bugs'. But wait, it is not a foolproof solution, since spammers misuse this technique to confirm that an email address is active, and most email clients suppress inline images for security reasons. I just thought I would share this observation anyway for the benefit of others.
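    As a rough sketch of the idea in .NET terms (the endpoint name, query parameter, and handler below are made up for illustration and are not an existing service), the sending side embeds the 1x1 image in the HTML body, and the receiving endpoint records the hit before serving the pixel:

        using System;
        using System.Net.Mail;
        using System.Web;

        class TrackedMailSender
        {
            public static void Send(string to, string messageId)
            {
                var mail = new MailMessage("sender@example.com", to)
                {
                    Subject = "Newsletter",
                    IsBodyHtml = true,
                    // Hypothetical tracking endpoint; the id ties the request back to this message.
                    Body = "<p>Hello!</p>" +
                           "<img src=\"http://www.example.com/receipt.ashx?id=" + messageId + "\"" +
                           " width=\"1\" height=\"1\" alt=\"\" />"
                };

                new SmtpClient("localhost").Send(mail);
            }
        }

        // Hypothetical ASP.NET handler (receipt.ashx) that logs the open and returns the pixel.
        public class ReceiptHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                string id = context.Request.QueryString["id"];
                // Map the id back to the sent message here, e.g. flag it as read in a database.

                context.Response.ContentType = "image/gif";
                // pixel.gif is a 1x1 transparent GIF shipped with the application.
                context.Response.WriteFile(context.Server.MapPath("~/pixel.gif"));
            }

            public bool IsReusable { get { return true; } }
        }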

    Read the article

  • Stacks in C++

    - by MarkPearl
    So some more basics… One of the things you will be taught at any college after conquering arrays is the different kinds of collections. The stack is one of the simplest of those and very useful… A stack is a LIFO (last in first out) data structure and has at least two basic method calls – push & pop. Push "pushes" an item onto the top of the stack. Pop removes the topmost item from the stack. Because all elements on a stack are of the same type, one can use either an array or a linked list to implement a stack. With the array-based approach, the first element on the stack would be the first element in the array, the second on the stack would be the second in the array, etc. One limitation of an array implementation of a stack is that unless the array is dynamic, one would have to have a preset maximum stack size (based on the bounds of the array). A linked list is another approach that gets past this boundary by allowing you to dynamically grow or shrink a collection of data. Stacks have many applications… a typical computer science example would be a postfix expression calculator, where the LIFO principle is maintained.
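    A minimal sketch of the array-based approach with a preset maximum size, shown here in C# for brevity (the push/pop logic translates directly to a C++ class wrapping a fixed-size array):

        using System;

        // Fixed-capacity, array-backed stack: Push adds to the top, Pop removes from the top (LIFO).
        public class ArrayStack<T>
        {
            private readonly T[] items;
            private int count;

            public ArrayStack(int capacity)
            {
                items = new T[capacity];
            }

            public void Push(T item)
            {
                if (count == items.Length)
                    throw new InvalidOperationException("Stack is full (the array bound was reached).");
                items[count++] = item;
            }

            public T Pop()
            {
                if (count == 0)
                    throw new InvalidOperationException("Stack is empty.");
                return items[--count];
            }
        }

    Pushing 1, 2, 3 and then popping three times yields 3, 2, 1, which is exactly the LIFO behaviour described above.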

    Read the article

  • PowerShell Script To Find Where SharePoint 2010 Features Are Activated

    - by Brian Jackett
    The script on this post will find where features are activated within your SharePoint 2010 farm.

    Problem

    Over the past few months I've gotten literally dozens of emails, blog comments, or personal requests from people asking "how do I find where a SharePoint feature has been activated?"  I wrote a script to find which features are installed on your farm almost 3 years ago.  There is also the Get-SPFeature PowerShell commandlet in SharePoint 2010.  The problem is that these only tell you if a feature is installed, not where it has been activated.  This is especially important to know if you have multiple web applications, site collections, and/or sites.

    Solution

    The default call (no parameters) for Get-SPFeature will return all features in the farm.  Many of the parameter sets accept filters for specific scopes such as web application, site collection, and site.  If those are supplied, then only the enabled / activated features are returned for that filtered scope.  Taking the concept of recursively traversing a SharePoint farm and merging that with calls to Get-SPFeature at all levels of the farm, you can find out which features are activated at each level.  Store the results in a variable and you end up with all features that are activated at every level.

    Below is the script I came up with (slight edits for posting on blog).  With no parameters the function lists all features activated at all scopes.  If you provide an Identity parameter you will find where a specific feature is activated.  Note that the display name for a feature you see in the SharePoint UI rarely matches the "internal" display name.  I would recommend using the feature id instead.  You can download a full copy of the script by clicking on the link below.

    Note: This script is not optimized for medium to large farms.  In my testing it took 1-3 minutes to recurse through my demo environment.  This script is provided as-is with no warranty.  Run this in a smaller dev / test environment first.

        function Get-SPFeatureActivated
        {
            # see full script for help info, removed for formatting
            [CmdletBinding()]
            param(
                [Parameter(position = 1, valueFromPipeline=$true)]
                [Microsoft.SharePoint.PowerShell.SPFeatureDefinitionPipeBind]
                $Identity
            )#end param

            Begin
            {
                # declare empty array to hold results. Will add custom member
                # for Url to show where activated at on objects returned from Get-SPFeature.
                $results = @()
                $params = @{}
            }
            Process
            {
                if([string]::IsNullOrEmpty($Identity) -eq $false)
                {
                    $params = @{
                        Identity = $Identity
                        ErrorAction = "SilentlyContinue"
                    }
                }

                # check farm features
                $results += (Get-SPFeature -Farm -Limit All @params |
                    % {Add-Member -InputObject $_ -MemberType noteproperty `
                        -Name Url -Value ([string]::Empty) -PassThru} |
                    Select-Object -Property Scope, DisplayName, Id, Url)

                # check web application features
                foreach($webApp in (Get-SPWebApplication))
                {
                    $results += (Get-SPFeature -WebApplication $webApp -Limit All @params |
                        % {Add-Member -InputObject $_ -MemberType noteproperty `
                            -Name Url -Value $webApp.Url -PassThru} |
                        Select-Object -Property Scope, DisplayName, Id, Url)

                    # check site collection features in current web app
                    foreach($site in ($webApp.Sites))
                    {
                        $results += (Get-SPFeature -Site $site -Limit All @params |
                            % {Add-Member -InputObject $_ -MemberType noteproperty `
                                -Name Url -Value $site.Url -PassThru} |
                            Select-Object -Property Scope, DisplayName, Id, Url)

                        $site.Dispose()

                        # check site features in current site collection
                        foreach($web in ($site.AllWebs))
                        {
                            $results += (Get-SPFeature -Web $web -Limit All @params |
                                % {Add-Member -InputObject $_ -MemberType noteproperty `
                                    -Name Url -Value $web.Url -PassThru} |
                                Select-Object -Property Scope, DisplayName, Id, Url)
                            $web.Dispose()
                        }
                    }
                }
            }
            End
            {
                $results
            }
        } #end Get-SPFeatureActivated

    Snippet of output from Get-SPFeatureActivated

    Conclusion

    This script has been requested for a long time and I'm glad to finally have a working "clean" version.  If you find any bugs or issues with the script please let me know.  I'll be posting this to the TechNet Script Center after some internal review.  Enjoy the script and I hope it helps with your admin / developer needs.

        -Frog Out

    Read the article

  • Reduce weight in healthy way - Day 2

    - by krnites
    My second day of reducing weight, and it seems most of the blogs are correct in saying that you can reduce weight if your calorie consumption is less than what you burn. In one day I have lost 1 lb without doing anything. My current weight is 177.4 lbs. Yesterday I ate a smaller portion of dinner than I used to, and that was around 7 PM. Normally I eat my dinner around 10 PM and within 2 hours of eating I go to sleep, but yesterday I ate around 7 PM and went to sleep only after 12. On my second day I have eaten noodles and 3 eggs for breakfast and sesame chicken (I love it) and fried rice for lunch. I still have not gone running, but I plan to go running and then swimming. I hope it will at least burn the calories that I have taken in. On some sites it was written that a normal man's body needs around 2000 calories a day. So if I am eating less than 2000 calories (noodles + 3 eggs = 400 + 200, rice + sesame chicken = 1300, total = 1900) and burning around 300 calories, my net calorie intake will be 1600, which is less than what my body needs. So most probably by tomorrow I should come under the 176 lb bracket. Apart from counting the calories that I am taking in every day and the approximate number of calories that I am burning every day, I have also started tracking my physical activities on my mobile. I have got a beautiful Samsung Focus S Windows Phone 7.5 device, and after browsing through the marketplace I have downloaded a couple of health apps:

    1. 6 Week Training - this has a set of exercises and lets you choose the number of sets you want to do for each exercise. It focuses on your core muscles.
    2. Fast Food Calories - this app has all the fast food chains listed and gives the calorie count of each food item available on their menus. For example, Burger King's French Fries Large (Salted) contain 500 calories.
    3. Gym Pocket Guide - contains instructions for different kinds of exercises and shows the right way of doing them.
    4. RunSat - a GPS-based application. It marks the distance you have run, shows the path you have taken on a map, the total calories burnt, and laps completed. I love this app.
    5. Stop Watch

    I have also noticed that if I am running in the gym and there is a television in front of me showing a movie or serial I like, I normally don't notice the time. Most of the time running on a treadmill is very boring, but if a music video is playing or some kind of sitcom is on, I can run for an hour or an hour and a half. So on day 2 I have lost 1 lb and have learnt that calorie intake should be less than the calories burnt for a given day.

    Read the article

  • Building an MVC application using QuickBooks

    - by dataintegration
    RSSBus ADO.NET Providers can be used from many tools and IDEs. In this article we show how to connect to QuickBooks from an MVC3 project using the RSSBus ADO.NET Provider for QuickBooks. Although this example uses the QuickBooks Data Provider, the same process can be used with any of our ADO.NET Providers. Creating the Model Step 1: Download and install the QuickBooks Data Provider from RSSBus. Step 2: Create a new MVC3 project in Visual Studio. Add a data model to the Models folder using the ADO.NET Entity Data Model wizard. Step 3: Create a new RSSBus QuickBooks Data Source by clicking "New Connection", specify the connection string options, and click Next. Step 4: Select all the tables and views you need, and click Finish to create the data model. Step 5: Right click on the entity diagram and select 'Add Code Generation Item'. Choose the 'ADO.NET DbContext Generator'. Creating the Controller and the Views Step 6: Add a new controller to the Controllers folder. Give it a meaningful name, such as ReceivePaymentsController. Also, make sure the template selected is 'Controller with empty read/write actions'. Before adding new methods to the Controller, create views for your model. We will add the List, Create, and Delete views. Step 7: Right click on the Views folder and go to Add -> View. Here create a new view for each: List, Create, and Delete templates. Make sure to also associate your Model with the new views. Step 10: Now that the views are ready, go back and edit the RecievePayment controller. Update your code to handle the Index, Create, and Delete methods. Sample Project We are including a sample project that shows how to use the QuickBooks Data Provider in an MVC3 application. You may download the C# project here or download the VB.NET project here. You will also need to install the QuickBooks ADO.NET Data Provider to run the demo. You can download a free trial here. To use this demo, you will also need to modify the connection string in the 'web.config'.
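    To give a feel for what step 10 produces, here is a rough sketch of the ReceivePaymentsController once the Index and Create actions are wired to the generated DbContext (QuickBooksContext and ReceivePayment are placeholders for whatever the Entity Data Model wizard generated in your Models folder; Delete follows the same pattern and is omitted here):

        using System.Linq;
        using System.Web.Mvc;

        public class ReceivePaymentsController : Controller
        {
            // Placeholder name for the DbContext generated in step 5.
            private readonly QuickBooksContext context = new QuickBooksContext();

            // GET: /ReceivePayments/ - backs the List view.
            public ActionResult Index()
            {
                return View(context.ReceivePayments.ToList());
            }

            // GET: /ReceivePayments/Create
            public ActionResult Create()
            {
                return View();
            }

            // POST: /ReceivePayments/Create
            [HttpPost]
            public ActionResult Create(ReceivePayment payment)
            {
                if (ModelState.IsValid)
                {
                    context.ReceivePayments.Add(payment);
                    context.SaveChanges();
                    return RedirectToAction("Index");
                }
                return View(payment);
            }

            protected override void Dispose(bool disposing)
            {
                context.Dispose();
                base.Dispose(disposing);
            }
        }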

    Read the article

  • ssrs: the report execution has expired or cannot be found

    - by Alex Bransky
    Today I got an exception in a report using SQL Server Reporting Services 2008 R2, but only when attempting to go to the last page of a large report:

        The report execution sgjahs45wg5vkmi05lq4zaee has expired or cannot be found.

    Digging into the logs I found this:

        library!ReportServer_0-47!149c!12/06/2012-12:37:58:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: , An error occurred within the report server database.  This may be due to a connection failure, timeout or low disk condition within the database.

    I knew it wasn't a network problem or timeout because I could repeat the problem at will.  I checked the disk space and that seemed fine as well.  The real issue was a lack of memory on the database server that had the ReportServer database.  Restarting the SQL Server engine freed up plenty of RAM and the problem immediately went away.

    Read the article

  • Getting Windows Azure SDK 1.1 To Talk To A Local DB

    - by Richard Jones
    Just found this, if you're using Azure 1.1, which you probably will be if you've moved to Visual Studio 2010. To change the default database to something other than sqlexpress for Development Storage, look at http://msdn.microsoft.com/en-us/library/dd203058.aspx. At the bottom it states:

    Using Development Storage with SQL Server Express 2008: By default the local Windows group BUILTIN\Administrator is not included in the SQL Server sysadmin server role on new SQL Server Express 2008 installations.  Add yourself to the sysadmin role in order to use the Development Storage services on SQL Server Express 2008.  See SQL Server 2008 Security Changes for more information.

    Changing the SQL Server instance used by Development Storage: By default, the Development Storage will use the SQL Express instance.  This can be changed by calling “DSInit.exe /sqlinstance:<SQL Server instance>” from the Windows Azure SDK command prompt.

    Read the article

  • Cleaning your BizTalk Build Server

    - by Michael Stephenson
    Just a little note for myself, this one. At one of my customers, where it is still BizTalk 2006, one of the build servers is intermittently getting issues, so I wanted to run a script periodically to clean things up a little. The below script is an example of how you can stop Cruise Control and all of the BizTalk services, then clean the BizTalk databases and reset the backup process, and then kick everything off again. This should keep the server a little cleaner and reduce the number of builds that occasionally fail for ad hoc environmental issues.

        REM Server Clean Script
        REM ===================
        REM This script is run to move the build server back to a clean state

        echo Stop Cruise Control
        net stop CCService

        echo Stop IIS
        iisreset /stop

        echo Stop BizTalk Services
        net stop BTSSvc$<Name of BizTalk Host>
        <Repeat for other BizTalk services>

        echo Stop SSO
        net stop ENTSSO

        echo Stop SQL Job Agent
        net stop SQLSERVERAGENT

        echo Clean Message Box
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_CleanupMsgbox"
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_PurgeSubscriptions"

        echo Clean Tracking Database
        sqlcmd -E -d BizTalkDTADb -Q "Exec dtasp_CleanHMData"

        echo Reset TDDS Stream Status
        sqlcmd -E -d BizTalkDTADb -Q "Update TDDS_StreamStatus Set lastSeqNum = 0"

        echo Force Full Backup
        sqlcmd -E -d BizTalkMgmtDB -Q "Exec sp_ForceFullBackup"

        echo Clean Backup Directory
        del E:\BtsBackups\*.* /q

        echo Start SSO
        net start ENTSSO

        echo Start SQL Job Agent
        net start SQLSERVERAGENT

        echo Start BizTalk Services
        net start BTSSvc$<Name of BizTalk Host>
        <Repeat for other BizTalk services>

        echo Start IIS
        iisreset /start

        echo Start Cruise Control
        net start CCService

    Read the article

  • Microsoft Tag Tagged Me

    - by Brian Schroer
    I got EXTREMELY lucky last week and won an HP Mini 311 notebook from a Microsoft Tag Twitter contest. I did my required tweet to enter last Tuesday, and one hour later received notification that I had won the weekly drawing. Apparently you can tweet up to 500 times (I pity the followers of those who do that), so it was really lucky that I won, and I sympathize with those who had been really trying. If you would like to try your luck, there are seven weekly prizes left, and you can find out about the contest here: http://tag.microsoft.com/ttcontest.aspx For a free PC, I thought it was the least I could do to find out what Microsoft Tag is. I was vaguely aware of those pastel-y triangle-y square things that look like someone put one of Don Johnson’s Miami Vice outfits through a shredder, and knew that the company I work for (one of the world’s largest consumer products companies) was looking into putting them on our products, packaging and advertising, but didn’t know much more about the technology. I thought they were just an improvement over bar codes, and would be used in retail store scanners, but I was mistaken. These tags are meant to be scanned by consumers using their mobile phones, to get instant access to information, websites, reviews, etc. Scanning a tag can open a web page, import a contact card, or dial a phone number, play a video… Tag reader software can be installed on Windows Mobile, iPhone, Symbian, Blackberry, Android, J2ME, and other phones (and I suspect that it will be available for Windows Phone 7 also :). There are built-in tracking, metrics and analysis tools, to help companies using Tag make decisions about their marketing expenditures. (And they don’t have to look Miami Vice-y – They can be customized to reflect the personality of the person or a brand.) Looks like interesting stuff. You can find out more at http://tag.microsoft.com.

    Read the article
