Search Results

Search found 35121 results on 1405 pages for 'object cache'.


  • How about a new platform for your next API… a CMS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/05/22/how-about-a-new-platform-for-your-next-apihellip-a.aspx
    Say what? I’m seeing a type of API emerge which serves static or long-lived resources, which are mostly read-only and have a controlled process to update the data that gets served. Think of something like an app configuration API, where you want a central location for changeable settings. You could use this server side to store database connection strings and keep all your instances in sync, or it could be used client side to push changes out to all users (potentially driving A/B or MVT testing). That’s a good candidate for a RESTful API which makes proper use of HTTP expiration and validation caching to minimise traffic, but really you want a front end UI where you can edit the current config that the API returns and publish your changes. Sounds like a Content Management System would be a good fit? I’ve been looking at that and it’s a great fit for this scenario. You get a lot of what you need out of the box, the amount of custom code you need to write is minimal, and you get a whole lot of extra stuff from using a CMS which is very useful, but probably not something you’d build if you had to put together a quick UI over your API content (like a publish workflow, fine-grained security and an audit trail). You typically use a CMS for HTML resources, but it’s simple to expose JSON instead – or to do content negotiation to support both, so you can open a resource in a browser and see a nice visual representation, or request it with Accept=application/json and get the same content rendered as JSON for the app to use.
    Enter Umbraco. Umbraco is an open source .NET CMS that’s been around for a while. It has very good adoption, a lively community and a good release cycle. It’s easy to use, has all the functionality you need for a CMS-driven API, and it’s scalable (although you won’t necessarily put much scale on the CMS layer). In the rest of this post, I’ll build out a simple app config API using Umbraco. We’ll define the structure of the configuration resource by creating a new Document Type and setting custom properties; then we’ll build a very simple Razor template to return configuration documents as JSON; then create a resource and see how it looks. And we’ll look at how you could build this into a wider solution. If you want to try this for yourself, it’s ultra easy – there’s an Umbraco image in the Azure Website gallery, so all you need to do is create a new Website, select Umbraco from the image and complete the installation. It will create a SQL Azure database to store all the content, as well as a Website instance for editing and accessing content. They’re standard Azure resources, so you can scale them as you need. The default install creates a starter site for some HTML content, which you can use to learn your way around (or just delete).
    1. Create Configuration Document Type
    In Umbraco you manage content by creating and modifying documents, and every document has a known type, defining what properties it holds. We’ll create a new Document Type to describe some basic config settings. In the Settings section from the left navigation (spanner icon), expand Document Types and Master, hit the ellipsis and select to create a new Document Type. This will base your new type off the Master type, which gives you some existing properties that we’ll use – like the Page Title, which will be the resource URL.
    In the Generic Properties tab for the new Document Type, you set the properties you’ll be able to edit and return for the resource. Here I’ve added a text string where I’ll set a default cache lifespan, an image which I can use for a banner display, and a date which could show the user when the next release is due. This is the sort of thing that sits nicely in an app config API. It’s likely to change during the life of the product, but not very often, so it’s good to have a centralised place where you can make and publish changes easily and safely. It also enables A/B and MVT testing, as you can change the response each client gets based on your set logic, and their apps will behave differently without needing a release.
    2. Define the response template
    Now we’ve defined the structure of the resource (as a document), in Umbraco we can define a C# Razor template to say how that resource gets rendered to the client. If you only want to provide JSON, it’s easy to render the content of the document by building each property in the response (Umbraco uses dynamic objects so you can specify document properties as object properties), or you can support content negotiation with very little effort. Here’s a template to render the document as HTML or JSON depending on the Accept header, using JSON.NET for the API rendering: @inherits Umbraco.Web.Mvc.UmbracoTemplatePage @using Newtonsoft.Json @{ Layout = null; } @if(UmbracoContext.HttpContext.Request.Headers["accept"] != null && UmbracoContext.HttpContext.Request.Headers["accept"] == "application/json") { Response.ContentType = "application/json"; @Html.Raw(JsonConvert.SerializeObject(new { cacheLifespan = CurrentPage.cacheLifespan, bannerImageUrl = CurrentPage.bannerImage, nextReleaseDate = CurrentPage.nextReleaseDate })) } else { <h1>App configuration</h1> <p>Cache lifespan: <b>@CurrentPage.cacheLifespan</b></p> <p>Banner Image: </p> <img src="@CurrentPage.bannerImage"> <p>Next Release Date: <b>@CurrentPage.nextReleaseDate</b></p> } That’s a rough-and-ready example of what you can do. You could make it completely generic and just render all the document’s properties as JSON, but having a specific template for each resource gives you control over what gets sent out. And the templates are evaluated at run-time, so if you need to change the output – or extend it, say to add caching response headers – you just edit the template and save, and the next client request gets rendered from the new template. No code to build and ship.
    3. Create the content
    With your document type created, in the Content pane you can create a new instance of that document, where Umbraco gives you a nice UI to input values for the properties we set up on the Document Type. Here I’ve set the cache lifespan to an xs:duration value, uploaded an image for the banner and specified a release date. Each property gets the appropriate input control – text box, file upload and date picker. At the top of the page is the name of the resource – myapp in this example. That specifies the URL for the resource, so if I had a DNS entry pointing to my Umbraco instance, I could access the config with a URL like http://static.x.y.z.com/config/myapp. The setup is all done now, so when we publish this resource it’ll be available to access.
    4. Access the resource
    Now if you open that URL in the browser, you’ll see the HTML version rendered – complete with the image and formatted date.
    Umbraco lets you save changes and preview them before publishing, so the HTML view could be a good way of showing editors their changes in a usable view, before they confirm them. If you browse the same URL from a REST client, specifying the Accept=application/json request header, you get the exact same resource, with a managed UI to publish it, being accessed as HTML or JSON with a tiny amount of effort.
    5. The wider landscape
    If you have fairly stable content to expose as an API, I think this approach is really worth considering. Umbraco scales very nicely, but in a typical solution you probably wouldn’t need it to. When you have additional requirements, like logging API access requests – but doing it out-of-band so clients aren’t impacted – you can put a very thin API layer on top of Umbraco, and cache the CMS responses in your API layer. Here the API does a pass-through to the CMS, so the CMS still controls the content, but the API caches the response. If the response is cached for 1 minute, then Umbraco only needs to handle 1 request per minute (multiplied by the number of API instances), so if you need to support thousands of requests per second, you’re scaling a thin, simple API layer rather than having to scale the more complex CMS infrastructure (including the database). The same design also gives you an approach to logging, by asynchronously publishing a message to a queue (Redis in this case), which can be picked up later and persisted by a different process.
    Does it work? Beautifully. Using Azure, I spiked the solution above (including the Redis logging framework which I’ll blog about later) in half a day. That included setting up different roles in Umbraco to demonstrate a managed workflow for publishing changes, and a couple of document types representing different resources.
    Is it maintainable? We have three moving parts, which are all managed resources in Azure – an Azure Website for Umbraco which may need a couple of instances for HA (or may not, depending on how long the content can be cached), a message queue (Redis is in preview in Azure, but you can easily use Service Bus Queues if performance is less of a concern), and the Web Role for the API. Two of the components are off-the-shelf, from open source projects, and the only custom code is the API, which is very simple.
    Does it scale? Pretty nicely. With a single Umbraco instance running as an Azure Website, and with 4x instances for my API layer (Standard sized Web Roles), I got just under 4,000 requests per second served reliably, with a Worker Role in the background saving the access logs. So we had a nice UI to publish app config changes, with a friendly Web preview and a publishing workflow, capable of supporting 14 million requests in an hour, with less than a day’s effort. Worth considering if you’re publishing long-lived resources through your API.
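    Two quick sketches to make the ideas above concrete; both are illustrative only, not code from the original post. First, the template extension mentioned in step 2: since the cacheLifespan property is stored as an xs:duration string, adding caching response headers inside the Razor template could look something like this (XmlConvert understands xs:duration):

      @using System.Xml
      @{
          // Hypothetical addition to the JSON branch of the step 2 template:
          // emit HTTP caching headers from the document's own cacheLifespan property.
          var lifespan = XmlConvert.ToTimeSpan((string)CurrentPage.cacheLifespan);
          Response.Cache.SetCacheability(HttpCacheability.Public);
          Response.Cache.SetMaxAge(lifespan);
      }

    Second, the thin caching API layer from step 5. The controller name, route, CMS URL and cache duration below are all assumptions; a real implementation would also forward content negotiation headers and queue the access-log message:

      using System;
      using System.Net;
      using System.Net.Http;
      using System.Runtime.Caching;
      using System.Text;
      using System.Threading.Tasks;
      using System.Web.Http;

      // Hypothetical pass-through controller: serves the CMS response from a
      // one-minute local cache, so Umbraco sees roughly one request per minute
      // per API instance regardless of how much client traffic arrives.
      public class ConfigController : ApiController
      {
          static readonly MemoryCache Cache = MemoryCache.Default;
          static readonly HttpClient CmsClient = new HttpClient();
          const string CmsBaseUrl = "http://static.x.y.z.com/config/"; // assumed

          public async Task<HttpResponseMessage> Get(string app)
          {
              var json = Cache.Get(app) as string;
              if (json == null)
              {
                  // Cache miss: fetch the JSON rendering of the resource from the CMS.
                  var request = new HttpRequestMessage(HttpMethod.Get, CmsBaseUrl + app);
                  request.Headers.Add("Accept", "application/json");
                  var cmsResponse = await CmsClient.SendAsync(request);
                  cmsResponse.EnsureSuccessStatusCode();
                  json = await cmsResponse.Content.ReadAsStringAsync();
                  Cache.Set(app, json, DateTimeOffset.UtcNow.AddMinutes(1));
              }
              // An out-of-band access-log message would be published here (e.g. to Redis).
              return new HttpResponseMessage(HttpStatusCode.OK)
              {
                  Content = new StringContent(json, Encoding.UTF8, "application/json")
              };
          }
      }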

    Read the article

  • SQL SERVER – What is Page Life Expectancy (PLE) Counter

    - by pinaldave
    During performance tuning consultations there are plenty of counters and values I often come across. Today we will quickly talk about the Page Life Expectancy counter, which is commonly known as PLE. You can find the value of PLE by running the following query. SELECT [object_name], [counter_name], [cntr_value] FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Manager%' AND [counter_name] = 'Page life expectancy' The recommended value of the PLE counter is 300 seconds. I have seen this value as low as 45 seconds on a busy system, and as high as 1250 seconds on an unused system. Page Life Expectancy is the number of seconds a page will stay in the buffer pool without references. In simple words, if your pages stay longer in the buffer pool (the in-memory cache area) your PLE is higher, leading to higher performance, as every time a request comes in there is a good chance it will find its data in the cache itself instead of going to the hard drive to read it. Now check your system and post back what this counter's value is for you at various times of the day. Does this counter relate in any way to performance issues on your system? Note: There are various other counters which are important to discuss during performance tuning; this counter is not everything. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
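    If you want to read the same counter from application code instead of Management Studio, a minimal C# sketch could look like this (the connection string is a placeholder - point it at your own instance):

      using System;
      using System.Data.SqlClient;

      class PleCheck
      {
          static void Main()
          {
              const string connectionString = "Server=.;Database=master;Integrated Security=true";
              const string sql =
                  "SELECT [cntr_value] FROM sys.dm_os_performance_counters " +
                  "WHERE [object_name] LIKE '%Manager%' " +
                  "AND [counter_name] = 'Page life expectancy'";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand(sql, connection))
              {
                  connection.Open();
                  // cntr_value is a bigint, so ExecuteScalar returns a long here.
                  var ple = (long)command.ExecuteScalar();
                  Console.WriteLine("Page Life Expectancy: {0} seconds", ple);
              }
          }
      }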

    Read the article

  • Calling Web Service Functions Asynchronously from a Web Page

    - by SGWellens
    Over on the Asp.Net forums where I moderate, a user had a problem calling a Web Service from a web page asynchronously. I tried his code on my machine and was able to reproduce the problem. I was able to solve his problem, but only after taking the long scenic route through some of the more perplexing nuances of Web Services and Proxies. Here is the fascinating story of that journey. Start with a simple Web Service     public class Service1 : System.Web.Services.WebService    {        [WebMethod]        public string HelloWorld()        {            // sleep 10 seconds            System.Threading.Thread.Sleep(10 * 1000);            return "Hello World";        }    } The 10 second delay is added to make calling an asynchronous function more apparent. If you don't call the function asynchronously, it takes about 10 seconds for the page to be rendered back to the client. If the call is made from a Windows Forms application, the application freezes for about 10 seconds. Add the web service to a web site. Right-click the project and select "Add Web Reference…" Next, create a web page to call the Web Service. Note: An asp.net web page that calls an 'Async' method must have the Async property set to true in the page's header: <%@ Page Language="C#"          AutoEventWireup="true"          CodeFile="Default.aspx.cs"          Inherits="_Default"           Async='true'  %> Here is the code to create the Web Service proxy and connect the event handler. Shrewdly, we make the proxy object a member of the Page class so it remains instantiated between the various events. public partial class _Default : System.Web.UI.Page {    localhost.Service1 MyService;  // web service proxy     // ---- Page_Load ---------------------------------     protected void Page_Load(object sender, EventArgs e)    {        MyService = new localhost.Service1();        MyService.HelloWorldCompleted += EventHandler;          } Here is the code to invoke the web service and handle the event:     // ---- Async and EventHandler (delayed render) --------------------------     protected void ButtonHelloWorldAsync_Click(object sender, EventArgs e)    {        // blocks        ODS("Pre HelloWorldAsync...");        MyService.HelloWorldAsync();        ODS("Post HelloWorldAsync");    }    public void EventHandler(object sender, localhost.HelloWorldCompletedEventArgs e)    {        ODS("EventHandler");        ODS("    " + e.Result);    }     // ---- ODS ------------------------------------------------    //    // Helper function: Output Debug String     public static void ODS(string Msg)    {        String Out = String.Format("{0}  {1}", DateTime.Now.ToString("hh:mm:ss.ff"), Msg);        System.Diagnostics.Debug.WriteLine(Out);    } I added a utility function I use a lot: ODS (Output Debug String). Rather than include the library it is part of, I included it in the source file to keep this example simple. Fire up the project, open up a debug output window, press the button and we get this in the debug output window: 11:29:37.94 Pre HelloWorldAsync... 11:29:37.94 Post HelloWorldAsync 11:29:48.94 EventHandler 11:29:48.94 Hello World   Sweet. The asynchronous call was made and returned immediately. About 10 seconds later, the event handler fires and we get the result. Perfect….right? Not so fast cowboy. Watch the browser during the call: What the heck? The page is waiting for 10 seconds. Even though the asynchronous call returned immediately, Asp.Net is waiting for the event to fire before it renders the page. This is NOT what we wanted. 
    I experimented with several techniques to work around this issue. Some may erroneously describe my behavior as 'hacking' but, since no ingesting of Twinkies was involved, I do not believe hacking is the appropriate term. If you examine the proxy that was automatically created, you will find a synchronous call to HelloWorld along with an additional set of methods to make asynchronous calls. I tried the other asynchronous method supplied in the proxy:     // ---- Begin and CallBack ----------------------------------     protected void ButtonBeginHelloWorld_Click(object sender, EventArgs e)    {        ODS("Pre BeginHelloWorld...");        MyService.BeginHelloWorld(AsyncCallback, null);        ODS("Post BeginHelloWorld");    }    public void AsyncCallback(IAsyncResult ar)    {        String Result = MyService.EndHelloWorld(ar);         ODS("AsyncCallback");        ODS("    " + Result);    } The BeginHelloWorld function in the proxy requires a callback function as a parameter. I tested it and the debug output window looked like this: 04:40:58.57 Pre BeginHelloWorld... 04:40:58.57 Post BeginHelloWorld 04:41:08.58 AsyncCallback 04:41:08.58 Hello World It works the same as before except for one critical difference: The page rendered immediately after the function call. I was worried the page object would be disposed after rendering the page but the system was smart enough to keep the page object in memory to handle the callback. Both techniques have a use: Delayed Render: Say you want to verify a credit card, look up shipping costs and confirm if an item is in stock. You could have three web service calls running in parallel and not render the page until all were finished. Nice. You can send information back to the client as part of the rendered page when all the services are finished. Immediate Render: Say you just want to start a service running and return to the client. You can do that too. However, the page gets sent to the client before the service has finished running so you will not be able to update parts of the page when the service finishes running. Summary: YourFunctionAsync() and an EventHandler will not render the page until the handler fires. BeginYourFunction() and a CallBack function will render the page as soon as possible. I found all this to be quite interesting and did a lot of searching and researching for documentation on this subject… but there isn't a lot out there. The biggest clues are the parameters that can be sent to the WSDL.exe program: http://msdn.microsoft.com/en-us/library/7h3ystb6(VS.100).aspx Two parameters are oldAsync and newAsync. OldAsync will create the Begin/End functions; newAsync will create the Async/Event functions. Caveat: I haven't tried this but it was stated in this article. I'll leave confirming this as an exercise for the student :-). Included Code: I'm including the complete test project I created to verify the findings. The project was created with VS 2008 SP1. There is a solution file with 3 projects; the 3 projects are: a Web Service, an Asp.Net Application, and a Windows Forms Application. To decide which program runs, you right-click a project and select "Set as Startup Project". I created and played with the Windows Forms application to see if it would reveal any secrets. I found that in the Windows Forms application, the generated proxy did NOT include the Begin/Callback functions. Those functions are only generated for Asp.Net pages. Probably for the reasons discussed earlier. Maybe those Microsoft boys and girls know what they are doing. 
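    As an aside, ASP.NET Web Forms also has a built-in way to get the "delayed render" behavior without wiring up the proxy events yourself: registering a PageAsyncTask. This is not covered in the article above; it is a minimal sketch assuming the same localhost.Service1 proxy and a page with Async='true':

      public partial class AsyncTaskPage : System.Web.UI.Page
      {
          localhost.Service1 MyService;

          protected void Page_Load(object sender, EventArgs e)
          {
              MyService = new localhost.Service1();
              // The page will not render until EndAsync runs
              // (or the page's AsyncTimeout elapses).
              Page.RegisterAsyncTask(new PageAsyncTask(BeginAsync, EndAsync, TimeoutAsync, null));
          }

          IAsyncResult BeginAsync(object sender, EventArgs e, AsyncCallback cb, object state)
          {
              // Kick off the web service call using the proxy's Begin method.
              return MyService.BeginHelloWorld(cb, state);
          }

          void EndAsync(IAsyncResult ar)
          {
              string result = MyService.EndHelloWorld(ar);
              // Use the result to update the page before it renders.
          }

          void TimeoutAsync(IAsyncResult ar)
          {
              // Called if the call takes longer than the page's AsyncTimeout.
          }
      }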
I hope someone finds this useful. Steve Wellens

    Read the article

  • ASP.NET Web Forms Extensibility: Providers

    - by Ricardo Peres
    Introduction
    This will be the first of a number of posts on ASP.NET extensibility. At this moment I don’t know exactly how many there will be, and I only know a couple of subjects that I want to talk about, so more will come in the next days. I have the sensation that the providers offered by ASP.NET are not widely known: although everyone uses, for example, sessions, they may not be aware of the extensibility points that Microsoft included. This post won’t go into the details of how to configure and extend each of the providers, but will hopefully give some pointers in that direction.
    Canonical
    These are the most widely known and used providers, coming from ASP.NET 1; chances are, you have used them already. There is good support for invoking them client side, either from a .NET application or from JavaScript, and lots of server-side controls use them, such as the Login control for example.
    Membership
    The Membership provider is responsible for managing registered users, including creating new ones, authenticating them, changing passwords, etc. ASP.NET comes with two implementations, one that uses a SQL Server database and another that uses the Active Directory. The base class is MembershipProvider, and new providers are registered in the membership section of the Web.config file, along with parameters for specifying minimum password lengths, complexities, maximum age, etc. One reason for creating a custom provider would be, for example, storing membership information in a different database engine.
      <membership defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly"/>
        </providers>
      </membership>
    Role
    The Role provider assigns roles to authenticated users. The base class is RoleProvider and there are three out of the box implementations: XML-based, SQL Server and Windows-based. Also registered in Web.config through the roleManager section, where you can also say whether your roles should be cached in a cookie. If you want your roles to come from a different place, implement a custom provider.
      <roleManager defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly" />
        </providers>
      </roleManager>
    Profile
    The Profile provider allows defining a set of properties that will be tied to and made available to authenticated users, or even anonymous ones, who must then be tracked through anonymous identification. The base class is ProfileProvider and the only included implementation stores these settings in a SQL Server database. Configured through the profile section, where you also specify the properties to make available; a custom provider would allow storing these properties in different locations.
      <profile defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly"/>
        </providers>
      </profile>
    Basic
    OK, I didn’t know what to call these, so Basic is probably as good a name as anything else. Not supported client-side (it doesn’t even make sense).
    Session
    The Session provider allows storing data tied to the current “session”, which is normally created when a user first accesses the site, even while not yet authenticated, and remains for the whole visit.
    The base class is SessionStateStoreProviderBase, and the included implementations are capable of storing data in one of three locations: in the process memory (the default, not suitable for web farms or increased reliability); a SQL Server database (best for reliability and clustering); or the ASP.NET State Service, which is a Windows Service that is installed with the .NET Framework (OK for clustering). The configuration is made through the sessionState section. By adding a custom Session provider, you can store the data in different locations – think for example of a distributed cache.
      <sessionState customProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly" />
        </providers>
      </sessionState>
    Resource
    A not so well known provider that allows you to change the origin of localized resource elements. By default, these come from RESX files and are used whenever you use the Resources expression builder or the GetGlobalResourceObject and GetLocalResourceObject methods, but if you implement a custom provider, you can have these elements come from some place else, such as a database. The base class is ResourceProviderFactory and there’s only one internal implementation, which uses the RESX files. Configuration is through the globalization section.
      <globalization resourceProviderFactoryType="MyClass, MyAssembly" />
    Health Monitoring
    Health Monitoring is also probably not so well known, and actually not a good name for it. First, in order to understand what it does, you have to know that ASP.NET fires “events” at specific times and when specific things happen, such as when a user logs in or an exception is raised. These are not user interface events; you can create your own and fire them, and by themselves they do nothing, but the Health Monitoring provider will detect them. You can configure it to do things when certain conditions are met, such as a number of events being fired in a certain amount of time. You define these rules and route them to a specific provider, which must inherit from WebEventProvider. Out of the box implementations include sending mails, logging to a SQL Server database, writing to the Windows Event Log, Windows Management Instrumentation, the IIS 7 Trace infrastructure or the debugger Trace. Its configuration is achieved through the healthMonitoring section, and a reason for implementing a custom provider would be, for example, locking down a web application in the event of a significant number of failed login attempts occurring in a small period of time.
      <healthMonitoring>
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly"/>
        </providers>
      </healthMonitoring>
    Sitemap
    The Sitemap provider allows defining the site’s navigation structure and the required permissions associated with each node, in a tree-like fashion. Usually this is statically defined, and the included provider allows it, by supplying this structure in a Web.sitemap XML file. The base class is SiteMapProvider and you can extend it in order to supply your own source for the site’s structure, which may even be dynamic. Its configuration must be done through the siteMap section.
      <siteMap defaultProvider="MyProvider">
        <providers>
          <add name="MyProvider" type="MyClass, MyAssembly" />
        </providers>
      </siteMap>
    Web Part Personalization
    Web Parts are better known by SharePoint users, but since ASP.NET 2.0 they are included in the core Framework.
    Web Parts are server-side controls that offer clients visiting the page where they are located certain configuration possibilities. The infrastructure handles this configuration per user or globally for all users, and this provider is responsible for just that. The base class is PersonalizationProvider and the only included implementation stores settings in SQL Server. Add new providers through the personalization section.
      <webParts>
        <personalization defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </personalization>
      </webParts>
    Build
    The Build provider is responsible for compiling whatever files are present in your web folder. There’s a base class, BuildProvider, and, as can be expected, internal implementations for building pages (ASPX), master pages (Master), user web controls (ASCX), handlers (ASHX), themes (Skin), XML schemas (XSD), web services (ASMX, SVC), resources (RESX), browser capabilities files (Browser) and so on. You would write a build provider if you wanted to generate code from any kind of non-code file so that you have strong typing at development time. Configuration goes in the buildProviders section and it is per extension.
      <buildProviders>
        <add extension=".ext" type="MyClass, MyAssembly" />
      </buildProviders>
    New in ASP.NET 4
    Not exactly new, since they have existed since 2010, but in ASP.NET terms, still new.
    Output Cache
    The Output Cache for ASPX pages and ASCX user controls is now extensible, through the Output Cache provider, which means you can implement a custom mechanism for storing and retrieving cached data, for example, in a distributed fashion. The base class is OutputCacheProvider and the only implementation is private. Configuration goes in the outputCache section, and on each page and web user control you can choose the provider you want to use (a minimal custom provider sketch appears at the end of this post).
      <caching>
        <outputCache defaultProvider="MyProvider">
          <providers>
            <add name="MyProvider" type="MyClass, MyAssembly"/>
          </providers>
        </outputCache>
      </caching>
    Request Validation
    A big change introduced in ASP.NET 4 (and refined in 4.5, by the way) is the introduction of extensible request validation, by means of a Request Validation provider. This means we are no longer limited to either enabling or disabling request validation for all pages or for a specific page; we now have fine control over each of the elements of the request, including cookies, headers, query string and form values. The base provider class is RequestValidator and the configuration goes in the httpRuntime section.
      <httpRuntime requestValidationType="MyClass, MyAssembly" />
    Browser Capabilities
    The Browser Capabilities provider is new in ASP.NET 4, although the concept exists since ASP.NET 2. The idea is to map a browser brand and version to its supported capabilities, such as JavaScript version, Flash support, ActiveX support, and so on. Previously, this was all hardcoded in .Browser files located in %WINDIR%\Microsoft.NET\Framework(64)\vXXXXX\Config\Browsers, but now you can have a class inherit from HttpCapabilitiesProvider and implement your own mechanism. Register it in the browserCaps section.
      <browserCaps provider="MyClass, MyAssembly" />
    Encoder
    The Encoder provider is responsible for encoding every string that is sent to the browser on a page or header. This includes, for example, converting special characters to their standard codes, and is implemented by the base class HttpEncoder.
    Another implementation takes care of Anti Cross Site Scripting (XSS) attacks. Build your own by inheriting from one of these classes if you want to add some additional processing to these strings. The configuration goes in the httpRuntime section.
      <httpRuntime encoderType="MyClass, MyAssembly" />
    Conclusion
    That’s about it for ASP.NET providers. It was by no means a thorough description, but I hope I managed to raise your interest in this subject. There are lots of pointers on the Internet, so I only included direct references to the Framework classes and configuration sections. Stay tuned for more extensibility!
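    As a footnote to the Output Cache section above, here is what the skeleton of a custom provider can look like. This is a naive in-memory sketch of my own, purely to show the four members you override; a real implementation would target a distributed store:

      using System;
      using System.Collections.Concurrent;
      using System.Web.Caching;

      // Minimal illustrative Output Cache provider. Entries live in a static
      // dictionary together with their absolute (UTC) expiration time.
      public class MyOutputCacheProvider : OutputCacheProvider
      {
          private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> _cache =
              new ConcurrentDictionary<string, Tuple<object, DateTime>>();

          public override object Get(string key)
          {
              Tuple<object, DateTime> entry;
              if (_cache.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
                  return entry.Item1;
              Remove(key); // drop expired entries lazily
              return null;
          }

          public override object Add(string key, object entry, DateTime utcExpiry)
          {
              // Per the contract, Add must not overwrite an existing, unexpired entry.
              var existing = Get(key);
              if (existing != null)
                  return existing;
              Set(key, entry, utcExpiry);
              return entry;
          }

          public override void Set(string key, object entry, DateTime utcExpiry)
          {
              _cache[key] = Tuple.Create(entry, utcExpiry);
          }

          public override void Remove(string key)
          {
              Tuple<object, DateTime> ignored;
              _cache.TryRemove(key, out ignored);
          }
      }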

    Read the article

  • Downgrade from php5 5.3.10 to php5 5.3.2 in Ubuntu 12.04

    - by iori
    I wanted to install php5 5.3.2, so I first deleted all the php5 packages: sudo apt-get purge php5 php5-cli php5-common php5-mysql and also deleted the .deb files from /var/cache/apt/archives, so now there are no .deb files on the system. Then I added this personal repository: sudo apt-add-repository ppa:sushkov/personal because it carries php 5.3.2. Then I updated and upgraded: sudo apt-get update && sudo apt-get upgrade and then I installed php5: sudo apt-get install php5 php5-cli php5-common php5-mysql Now when I check the PHP version it says php 5.3.10, and when I run this command: sudo apt-cache show php5 it says Package: php5 Version: 5.3.15-1~dotdeb.0 Architecture: all Maintainer: Guillaume Plessis <[email protected]> Installed-Size: 0 Depends: libapache2-mod-php5 (>= 5.3.15-1~dotdeb.0) | libapache2-mod-php5filter (>= 5.3.15-1~dotdeb.0) | php5-cgi (>= 5.3.15-1~dotdeb.0) | php5-fpm (>= 5.3.15-1~dotdeb.0), php5-common (>= 5.3.15-1~dotdeb.0) Filename: dists/squeeze/php5/binary-i386/php5_5.3.15-1~dotdeb.0_all.deb Now I don't know how to downgrade. Is there any way to change something in the repository so that running sudo apt-get install php5 installs php 5.3.2 (which I want) instead of php 5.3.10? Thanks

    Read the article

  • Google indexing and ranking a custom domain served by Google App Engine

    - by Hugues
    I have a website served on the URL "http://www.plugimmo.com", which is a custom domain served by Google App Engine on the URL http://plugimmo.appspot.com. For a while I have tried to optimise the Google indexing and ranking, with no success. The problem is that searching on Google for the keywords in the title of my home page does not retrieve my website at all - not even in the first 1,000 results. When checking the cached version on Google (cache:www.plugimmo.com), it says that the cached version is the one of 20-Aug-12 of "plugimmo.appspot.com". It looks like there are several issues: 1 - The cached version is really old. I have made a lot of changes since 20-Aug-12 and I saw the googlebot crawling my site several times. 2 - The cached version is for "plugimmo.appspot.com". 3 - When looking at the Google Webmaster tools, I see that the number of pages indexed for www.plugimmo.com is 0, but that can't be the case given the number of changes I have made since then. My questions would therefore be the following: Why is the version of the cache so old, although I saw the googlebot crawling the site many times since 20-Aug-12? Is there a problem with indexing a custom domain served by Google App Engine? Why is the Google Webmaster tools showing 0 pages indexed although new pages have been crawled and no errors have been reported in the indexing? Also, the site has been developed with Google Web Toolkit. I have followed the guidelines regarding crawling Ajax sites. The home page, when crawled by a robot, is redirected to http://www.plugimmo.com/HomeSnapshot.html. Thanks a lot for your help! Hugues

    Read the article

  • CodePlex Daily Summary for Thursday, March 04, 2010

    New Projects
    AcPrac: An educational program designed to run on Windows. It is fully customizable. It is developed in C# 2008.
    argo: Linguistic Tool
    Camelot: A simple utility for testing cross site data queries in SharePoint
    delta: Delta is a difference tool for large files. Delta will have several clients, including AJAX and Silverlight. You can view differences in a scroll...
    EF Dorsal: A dorsal spine for Entity Framework based projects. This code generator provides a powerful Business Layer with almost all high important best p...
    Excel formatting: Excel formatting project
    Eyes Recognition: eye recognition
    GameOfLife: The Game of Life is the best example of a cellular automaton. It's developed in C#, SilverLight.
    GKO Libraries: .NET tool libraries written in C# that we have used in our projects
    Hello Demo: A simple Hello World application, used to demonstrate access to CodePlex using Team Explorer
    jog.Portal: jogportal lays the foundation for an extensible portal solution leveraging the latest technology including linq and silver light
    KM Brasil: k-meleon, km, gecko, brasil, browser, web, web browser, navegador, navegação, kmeleon, k meleon
    Lost in Translation: Ever find yourself in a foreign country eager (or clueless) about what is written on a shop sign or restaurant menu? With your trusty phone, its came...
    Machine Learning for .NET: Machine Learning Library for .NET. Initial inclusions will be binary and multi-class classification as well as standard clustering algorithms.
    Maito: Iron Kingdoms Name Generator
    ManPowerEngine: ManPowerEngine is a Silverlight 2D Game Engine. It has a game application framework and supports game object hierarchy management, 2D physics simu...
    Marvin's Arena: Marvin's Arena is a free and entertaining programming game. The game is designed to easily learn programming in any .NET compatible language. It is...
    MultiMediaPlayer: Integrates our content
    NCacheD - A Simple Distributed .NET Cache using the TPL and WCF: NCacheD is a simple distributed cache system written in .NET. NCacheD offers functionality similar to that of MemcacheD but scaled back. NCacheD ...
    NDAP: OPeNDAP is a client/server system for making local scientific data accessible to remote locations without regard to the local or remote storage for...
    Open Guide CMS: Open Guide makes your traveling through the city more adventurous, more educational and more fun. You can support us by lines of code, by interesti...
    Rollback - A social backup tool.: Rollback is a simple and intuitive social backup application. You can create multiple backup jobs with a few mouse clicks and even schedule it to ...
    sELedit: An editor for the elements.data file of the popular MMORPG Perfect World.
    SharePoint Theme Applicator: SharePoint Theme Applicator will give SharePoint administrators the ability to apply a theme across a whole web application (i.e. apply a theme to...
    sPATCH: ! sPATCH - Perfect World Client Autopatcher This beta patcher is an alternative to the default perfect world patcher. It offers easy client patc...
    Wiggle: A-life investigation. Let's make little squirmies!
    WPFSLBlendModeFx: A blend mode library for WPF & Silverlight.
    休息提醒 (Rest Reminder): Programmers - especially ones as obsessed with code as I am - feel that even a bathroom break is a waste of time once they start digging into an interesting program. This program locks the computer or turns off the display after a set time, forcing programmers to take a break.
    New Releases
    AMFParser: 1.1.0: Add handling of DSK object when using BlazeDS Add handling of string references Add handling of DateTime values Correct handling of Double values B...
    Camelot: Camelot 0.1 Alpha: Early release: 1) Query by content type, optionally list or base template 2) Query by list or base template
    Dictionary Translator for Umbraco: Dictionary Translator 1.1: This is a minor release that fixes a bug that Thomas Hohler found on the first day of release. This package is to be used with the ASP.NET open sou...
    Dynamic Configuration: Sample Application Release 1: These binaries demonstrate the effect of using DynamicConfigurationManager. The source-code for these binaries is available in the 'Source Code' t...
    EF Dorsal: EFDorsal v0.3b: Second real version. This version adds support for types and entity sets in the tree-view.
    Enterprise Library Extensions: Release 1.0: This is the initial release of the package. The release will contain one feature only, as being able to deploy the project itself is the milestone ...
    Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.0.4 beta 2 Released: Hi, Today's release contains a fix for the following issues: * In WPF application chart was throwing exception as VisualStateGroup was not foun...
    GameOfLife: Game of life: First release of the game. Published
    GameStore League Manager: League Manager 1.0 release 2.: Fixes crashing bugs from the first release. To use: 1. Install SQL Server Express 2005 http://www.microsoft.com/Sqlserver/2005/en/us/express-down....
    Marvin's Arena: Version 0.0.5.0: Code Editor (development of robots without Visual Studio - no debugging) * Rounds * 3D Battle Engine: Skybox * 3D Battle Engine: Robot...
    Morphfolia - ASP.NET CMS and Framework: Morphfolia v2.4.1.1: Morphfolia v2.4.1.1 - New Release Includes: Better support for browsers other than IE (Chrome, Firefox, Safari - all tested on Windows) Supports ...
    NCacheD - A Simple Distributed .NET Cache using the TPL and WCF: NCacheD Version 1: Getting Started To get up and running, open two instances of Visual Studio 2010. In one window open the NCacheD client solution and then open the ...
    Papercut: Papercut 2010-3-3: This release includes a few bug fixes and updates, several of which were contributed patches (thanks!). Feature: Added support for embedded images...
    PE-file Reader Writer API (PERWAPI): PERWAPI-1.1.4: Perwapi version 1.1.4 is the complete distribution package. It contains binary files, pdb files and xml files for the PERWAPI and SymbolRW compone...
    Prolog.NET: Prolog.NET 1.0 Beta 1.2: Installer includes: primary Prolog.NET assembly Prolog.NET Workbench PrologTest console application all required dependencies Beta 1.2 in...
    Protoforma | Tactica Adversa: Skilful 0.2.4.320: Beta
    RoTwee: RoTwee 6.1.0.0: 16604 "Post playing tune feature" is added. Using this new feature, you can tweet the tune playing in iTunes. 16625 Error processing for network error...
    sELedit: sELedit v1.0: -
    SharePoint 2007 Deployment Wizard: Support for SharePoint Server and Foundation 2010: This release encompasses the supported install paths for the default install of SPS 2010 (the 14 hive). All three versions are now supported (60 h...
    SharePoint Theme Applicator: SharePoint Theme Applicator: SharePoint Theme Applicator was built using C# and WPF; it includes the following features: Provides the total number of site collections in the g...
    Shinkansen: compress, crunch, combine, and cache JavaScript and CSS: Shinkansen 1.0.0.030310: Build 1.0.0.030310, binaries only
    sPATCH: sPatcher v0.8: sPatch - Server Example *Contains a sample Patch that "downgrades" PWI 1.4.2 Client to a 1.3.6 Client
    TFS Code Comment Checking Policy (CCCP): CCCP 3.0 for VSTS 2008 SP1: This release includes NRefactory 3.2.0.5571 and is built against VSTS 2008 SP1 (.NET 3.5 is required). New: horizontal scrollbar in listboxes for ...
    TortoiseHg: Beta for TortoiseHg 1.0 (0.9.31254): Please back up your user Mercurial.ini file and then uninstall any 0.9.X release before installing. Use the x86 msi file for 32 bit platforms and th...
    TwitEclipseAPI: TwitEclipseAPI 0.9: 0.9 Release of TwitEclipseAPI Moved API calls to the api.Twitter.com URL. Moved API calls to the versioning API. Now uses the increased Rate Limit...
    TwitterVB - A .NET Twitter Library: TwitterVB-2.3: Patch 5151: Added BlockedUsers function to get a page other than the first Patch 5420: The ListMembers function will now return more than just th...
    休息提醒 (Rest Reminder): Initial version: Initial version
    Most Popular Projects
    MetaSharp
    Rawr
    WBFS Manager
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    Windows Presentation Foundation (WPF)
    Microsoft SQL Server Community & Samples
    ASP.NET
    LiveUpload to Facebook
    Most Active Projects
    Rawr
    BlogEngine.NET
    MapWindow GIS
    patterns & practices – Enterprise Library
    jQuery Library for SharePoint Web Services
    MDT Web FrontEnds
    svn2tfs
    DiffPlex - a .NET Diff Generator
    Ionics Isapi Rewrite Filter
    Farseer Physics Engine

    Read the article

  • Creating a Dynamic DataRow for easier DataRow Syntax

    - by Rick Strahl
    I've been thrown back into an older project that uses DataSets and DataRows as its entity storage model. I have several applications internally that I still maintain that run just fine (and I sometimes wonder if this wasn't easier than all the ORM crap we deal with in 'newer' improved technology today - but I digress) and that use this older code. For the most part DataSets/DataTables/DataRows are abstracted away in a pseudo entity model, but in some situations, like queries, DataTables and DataRows are still surfaced to the business layer. Here's an example. Here's a business object method that runs a dynamic query and the code ends up looping over the result set using the ugly DataRow array syntax:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { string title = row["title"] as string; string safeTitle = row["safeTitle"] as string; int pk = (int)row["pk"]; string newSafeTitle = this.GetSafeTitle(title); if (newSafeTitle != safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",pk) ); result++; } } return result; } The problem with looping over DataRow objects is twofold: the array syntax is tedious to type and not really clear to look at, and explicit casting is required in order to do anything useful with the values. I've highlighted the place where this matters. Using the DynamicDataRow class I'll show in a minute, this code can be changed to look like this:public int UpdateAllSafeTitles() { int result = this.Execute("select pk, title, safetitle from " + Tablename + " where EntryType=1", "TPks"); if (result < 0) return result; result = 0; foreach (DataRow row in this.DataSet.Tables["TPks"].Rows) { dynamic entry = new DynamicDataRow(row); string newSafeTitle = this.GetSafeTitle(entry.title); if (newSafeTitle != entry.safeTitle) { this.ExecuteNonQuery("update " + this.Tablename + " set safeTitle=@safeTitle where pk=@pk", this.CreateParameter("@safeTitle",newSafeTitle), this.CreateParameter("@pk",entry.pk) ); result++; } } return result; } The code looks a bit more natural and describes what's happening a little more clearly as well. Well, using the new dynamic features in .NET it's actually quite easy to implement the DynamicDataRow class. Creating your own custom Dynamic Objects. .NET 4.0 introduced the Dynamic Language Runtime (DLR) and opened up a whole bunch of new capabilities for .NET applications. The dynamic type is an easy way to avoid Reflection and directly access members of 'dynamic' or 'late bound' objects at runtime. There's a lot of very subtle but extremely useful stuff that dynamic does (especially for COM Interop scenarios) but in its simplest form it often allows you to do away with manual Reflection at runtime. In addition you can create DynamicObject implementations that can perform custom interception of member accesses and so allow you to provide more natural access to more complex or awkward data structures like the DataRow that I use as an example here. Basically you can subclass DynamicObject and then implement a few methods (TryGetMember, TrySetMember, TryInvokeMember) to provide the ability to return dynamic results from just about any data structure using simple property/method access. 
    In the code above, I created a custom DynamicDataRow class which inherits from DynamicObject and implements only TryGetMember and TrySetMember. Here's what the simple class looks like: /// <summary> /// This class provides an easy way to turn a DataRow /// into a Dynamic object that supports direct property /// access to the DataRow fields. /// /// The class also automatically fixes up DbNull values /// (null into .NET and DbNull to DataRow) /// </summary> public class DynamicDataRow : DynamicObject { /// <summary> /// Instance of object passed in /// </summary> DataRow DataRow; /// <summary> /// Pass in a DataRow to work off /// </summary> /// <param name="instance"></param> public DynamicDataRow(DataRow dataRow) { DataRow = dataRow; } /// <summary> /// Returns a value from a DataRow items array. /// If the field doesn't exist null is returned. /// DbNull values are turned into .NET nulls. /// /// </summary> /// <param name="binder"></param> /// <param name="result"></param> /// <returns></returns> public override bool TryGetMember(GetMemberBinder binder, out object result) { result = null; try { result = DataRow[binder.Name]; if (result == DBNull.Value) result = null; return true; } catch { } result = null; return false; } /// <summary> /// Property setter implementation writes a value into the DataRow, /// converting null to DbNull /// </summary> /// <param name="binder"></param> /// <param name="value"></param> /// <returns></returns> public override bool TrySetMember(SetMemberBinder binder, object value) { try { if (value == null) value = DBNull.Value; DataRow[binder.Name] = value; return true; } catch {} return false; } } To demonstrate the basic features here's a short test: [TestMethod] [ExpectedException(typeof(RuntimeBinderException))] public void BasicDataRowTests() { DataTable table = new DataTable("table"); table.Columns.Add( new DataColumn() { ColumnName = "Name", DataType=typeof(string) }); table.Columns.Add( new DataColumn() { ColumnName = "Entered", DataType=typeof(DateTime) }); table.Columns.Add(new DataColumn() { ColumnName = "NullValue", DataType = typeof(string) }); DataRow row = table.NewRow(); DateTime now = DateTime.Now; row["Name"] = "Rick"; row["Entered"] = now; row["NullValue"] = null; // converted into DbNull dynamic drow = new DynamicDataRow(row); string name = drow.Name; DateTime entered = drow.Entered; string nulled = drow.NullValue; Assert.AreEqual(name, "Rick"); Assert.AreEqual(entered,now); Assert.IsNull(nulled); // this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd); } The DynamicDataRow requires a custom constructor that accepts a single parameter that sets the DataRow. Once that's done you can access property values that match the field names. Note that types are automatically converted - no type casting is needed in the code you write. The class also automatically converts DbNulls to regular nulls and vice versa, which makes it much easier to deal with data returned from a database. What's cool here isn't so much the functionality - even if I'd prefer to leave DataRow behind ASAP - but the fact that we can create a dynamic type that uses a DataRow as its 'DataSource' to serve member values. It's a pretty useful feature if you think about it, especially given how little code it takes to implement. 
    By implementing these two simple methods we get to provide two features I was complaining about at the beginning that are missing from the DataRow: direct property syntax, and automatic type casting so no explicit casts are required.
    Caveats
    As cool and easy as this functionality is, it's important to understand that it doesn't come for free. The dynamic features in .NET are - well - dynamic. Which means they are essentially evaluated at runtime (late bound). Rather than static typing where everything is compiled and linked by the compiler/linker, member invocations are looked up at runtime and essentially call into your custom code. There's some overhead in this. Direct invocation - the original code I showed - is going to be faster than the equivalent dynamic code. However, in the above code the difference between running the dynamic code and the original data access code was very minor. The loop running over 1500 result records took on average 13ms with the original code and 14ms with the dynamic code. Not exactly a serious performance bottleneck. One thing to remember is that Microsoft optimized the DLR code significantly so that repeated calls to the same operations are routed very efficiently, which actually makes for very fast evaluation. The bottom line for performance with dynamic code is: make sure you test and profile your code if you think that there might be a performance issue. However, in my experience with dynamic types so far, performance is pretty good for repeated operations (ie. in loops). While usually a little slower, the perf hit is typically a lot less than equivalent Reflection work. Although the code in the second example looks like standard object syntax, dynamic is not static code. It's evaluated at runtime and so there's no type recognition until runtime. This means no Intellisense at development time, and any invalid references that call into 'properties' (ie. fields in the DataRow) that don't exist still cause runtime errors. So in the case of the data row you still get a runtime error if you mistype a column name:// this should throw a RuntimeBinderException Assert.AreEqual(entered,drow.enteredd);
    Dynamic - Lots of uses
    The arrival of dynamic types in .NET has been met with mixed emotions. Die-hard .NET developers decry dynamic types as an abomination to the language. After all, what dynamic accomplishes goes against all that a static language is supposed to provide. On the other hand there are clearly scenarios when dynamic can make life much easier (COM Interop being one place). Think of the possibilities. What other data structures would you like to expose to a simple property interface rather than some sort of collection or dictionary? And beyond what I showed here you can also implement 'method missing' behavior on objects with TryInvokeMember, which essentially allows you to create dynamic methods. It's all very flexible and maybe just as important: it's easy to do. There's a lot of power hidden in this seemingly simple interface. Your move… © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp .NET
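    To make that last point concrete, here is a minimal sketch of 'method missing' behavior via TryInvokeMember - my own illustration, not code from the article - routing unknown method calls to delegates registered at runtime:

      using System;
      using System.Collections.Generic;
      using System.Dynamic;

      public class DynamicMethodBag : DynamicObject
      {
          readonly Dictionary<string, Func<object[], object>> methods =
              new Dictionary<string, Func<object[], object>>();

          // Register a delegate under a name; calls to that name are routed to it.
          public void Register(string name, Func<object[], object> method)
          {
              methods[name] = method;
          }

          public override bool TryInvokeMember(InvokeMemberBinder binder,
                                               object[] args, out object result)
          {
              Func<object[], object> method;
              if (methods.TryGetValue(binder.Name, out method))
              {
                  result = method(args);
                  return true;
              }
              result = null;
              return false; // unknown method -> RuntimeBinderException
          }
      }

      // Usage:
      // var bag = new DynamicMethodBag();
      // bag.Register("Add", args => (int)args[0] + (int)args[1]);
      // dynamic dynBag = bag;
      // int sum = dynBag.Add(2, 3); // 5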

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
    Out of the box ASP.NET WebAPI does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow browser-based JavaScript client applications to bypass cross-site scripting limitations and serve data from a Web server other than the current one. AJAX in Web Applications uses the XmlHttp object, which by default doesn't allow access to remote domains. There are a number of ways around this limitation, and <script> tag loading with JSONP is one of the easiest and semi-official ways that you can do this. JSONP works by combining JSON data and wrapping it into a function call that is executed when the JSONP data is returned. If you use a tool like jQuery it's extremely easy to access JSONP content. Imagine that you have a URL like this: http://RemoteDomain/aspnetWebApi/albums which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if the server can serve up JSONP this data can be accessed cross domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP:function getAlbums() { $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?",null, function (albums) { alert(albums.length); }); } The resulting callback works the same as if the call had been made to a local server: when the data is returned, jQuery deserializes it and feeds it into the method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed. This all works fine - as long as the server can serve the data with JSONP. What does JSONP look like? JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this:jQuery17103401925975181569_1333408916499( [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}] ) The way JSONP works is that the client (jQuery in this case) sends off the request, receives the response and evals it. The eval basically executes the function and deserializes the JSON inside of the function. It's actually a little more complex for the framework that does this, but that's the gist of what happens. JSONP works by executing the code that gets returned from the JSONP call. JSONP and ASP.NET Web API. As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API. But it's pretty easy to create and plug in a custom formatter that provides this functionality. The following code is based on Christian Weyer's example but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds. 
Here's the code:  using System; using System.IO; using System.Net; using System.Net.Http.Formatting; using System.Net.Http.Headers; using System.Threading.Tasks; using System.Web; using System.Net.Http; namespace Westwind.Web.WebApi { /// <summary> /// Handles JsonP requests when requests are fired with /// text/javascript or application/json and contain /// a callback= (configurable) query string parameter /// /// Based on Christian Weyers implementation /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs /// </summary> public class JsonpFormatter : JsonMediaTypeFormatter { public JsonpFormatter() { SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json")); SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript")); //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json")); JsonpParameterName = "callback"; } /// <summary> /// Name of the query string parameter to look for /// the jsonp function name /// </summary> public string JsonpParameterName {get; set; } /// <summary> /// Captured name of the Jsonp function that the JSON call /// is wrapped in. Set in GetPerRequestFormatter Instance /// </summary> private string JsonpCallbackFunction; public override bool CanWriteType(Type type) { return true; } /// <summary> /// Override this method to capture the Request object /// and look for the query string parameter and /// create a new instance of this formatter. /// /// This is the only place in a formatter where the /// Request object is available. /// </summary> /// <param name="type"></param> /// <param name="request"></param> /// <param name="mediaType"></param> /// <returns></returns> public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type, HttpRequestMessage request, MediaTypeHeaderValue mediaType) { var formatter = new JsonpFormatter() { JsonpCallbackFunction = GetJsonCallbackFunction(request) }; return formatter; } /// <summary> /// Override to wrap existing JSON result with the /// JSONP function call /// </summary> /// <param name="type"></param> /// <param name="value"></param> /// <param name="stream"></param> /// <param name="contentHeaders"></param> /// <param name="transportContext"></param> /// <returns></returns> public override Task WriteToStreamAsync(Type type, object value, Stream stream, HttpContentHeaders contentHeaders, TransportContext transportContext) { if (!string.IsNullOrEmpty(JsonpCallbackFunction)) { return Task.Factory.StartNew(() => { var writer = new StreamWriter(stream); writer.Write( JsonpCallbackFunction + "("); writer.Flush(); base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext).Wait(); writer.Write(")"); writer.Flush(); }); } else { return base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext); } } /// <summary> /// Retrieves the Jsonp Callback function /// from the query string /// </summary> /// <returns></returns> private string GetJsonCallbackFunction(HttpRequestMessage request) { if (request.Method != HttpMethod.Get) return null; var query = HttpUtility.ParseQueryString(request.RequestUri.Query); var queryVal = query[this.JsonpParameterName]; if (string.IsNullOrEmpty(queryVal)) return null; return queryVal; } } } Note again that this code will not work with the Beta bits of Web API - it works only with post beta bits from CodePlex and hopefully this will continue to work until RTM :-) This code is a bit different from Christians original code as the API has changed. 
The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects, as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use it to pick up the values you need and store them on the formatter. Because I'm storing request-specific state on the instance (whether the callback= query string parameter is present), I return a new instance of the formatter from that method. Other than that the code should be straightforward: it writes out the function pre- and post-amble, and defers to the base JSON formatter for the JSON that the function call wraps. The code uses the async APIs to write this data out (seeing those all over the place will take some getting used to for me).

Hooking up the JsonpFormatter

Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object.

protected void Application_Start(object sender, EventArgs e)
{
    // Verb Routing
    RouteTable.Routes.MapHttpRoute(
        name: "AlbumsVerbs",
        routeTemplate: "albums/{title}",
        defaults: new
        {
            title = RouteParameter.Optional,
            controller = "AlbumApi"
        }
    );

    GlobalConfiguration
        .Configuration
        .Formatters
        .Insert(0, new Westwind.Web.WebApi.JsonpFormatter());
}

That's all it takes. Note that I added the formatter at the top of the list of formatters rather than at the end - this ordering is required. The JSONP formatter needs to fire before any other JSON formatter, since it relies on the JSON formatter to encode the actual JSON data; if you reverse the order, the JSONP output never shows up. So, in general, when adding new formatters be aware of the order in which they are added.

Resources

JsonpFormatter Code on GitHub

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API
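As a closing illustration (not part of the original article), here is a hedged sketch of a controller the albums/{title} route above could map to - the Album class and its sample data are hypothetical stand-ins for whatever the real service returns:

using System.Collections.Generic;
using System.Web.Http;

// Hypothetical model for the albums resource
public class Album
{
    public string Id { get; set; }
    public string AlbumName { get; set; }
}

// Maps to the "AlbumsVerbs" route registered in Application_Start
public class AlbumApiController : ApiController
{
    // GET /albums - returns plain JSON, or JSONP when a callback= parameter is present
    public IEnumerable<Album> Get()
    {
        return new List<Album>
        {
            new Album { Id = "34043957", AlbumName = "Dirty Deeds Done Dirt Cheap" }
        };
    }
}

A request to /albums?callback=myFn then comes back as myFn([...]), because the JSONP formatter sits first in the formatter collection.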

    Read the article

  • How do I get a Dane-Elec mp3/mp4 player working?

    - by user40432
My Dane-Elec MP3/MP4 player does not plug-and-play, so I cannot transfer any files to it. It is the Dane-Elec "Touch My Music" player (8 GB, perhaps model ZT1) with radio and a microSDHC card slot; the complete technical specifications are at http://www.danedigital.com/8-Music-Media-Players/2-music-touch.html. The product page describes it as: "The MP4 player has a very classy look. It allows its users to play music and view photos and video. Its fluent interface, touch-pad, radio with RDS and Micro SDHC reader make it a very complete device that will become the ideal musical companion." I am running Ubuntu 11.10, kernel 3.0.0-14-generic (the latest). I tried installing many applications, but nothing worked. With Disk Utility I can see that Ubuntu recognizes something - peripheral devices named "RockChip USBDISK User" and "RockChip USBDISK SD". Other devices plug and play fine; only this MP3/MP4 player does not connect to the computer under Ubuntu, and the device works without problems when disconnected from the computer. I tried it on Windows and it works there: I can see the device and transfer files to its folder (the player uses FAT32). So why can't I do this on Ubuntu? What can I do, and what is wrong? Here are the logs:
Jan 4 17:27:34 a-ubuntu kernel: [ 141.948863] init: apport pre-start process (1970) terminated with status 1
Jan 4 17:27:34 a-ubuntu kernel: [ 141.963202] init: apport post-stop process (1994) terminated with status 1
Jan 4 17:30:02 a-ubuntu kernel: [ 289.564049] usb 2-4: new high speed USB device number 3 using ehci_hcd
Jan 4 17:30:02 a-ubuntu kernel: [ 289.988706] usbcore: registered new interface driver uas
Jan 4 17:30:02 a-ubuntu kernel: [ 289.992056] Initializing USB Mass Storage driver...
Jan 4 17:30:02 a-ubuntu kernel: [ 289.992272] scsi6 : usb-storage 2-4:1.0
Jan 4 17:30:02 a-ubuntu kernel: [ 289.993082] usbcore: registered new interface driver usb-storage
Jan 4 17:30:02 a-ubuntu kernel: [ 289.993088] USB Mass Storage support registered.
Jan 4 17:30:03 a-ubuntu kernel: [ 290.996887] scsi 6:0:0:0: Direct-Access RockChip USBDISK User 1.00 PQ: 0 ANSI: 0
Jan 4 17:30:03 a-ubuntu kernel: [ 290.997372] scsi 6:0:0:1: Direct-Access RockChip USBDISK SD 1.00 PQ: 0 ANSI: 0
Jan 4 17:30:03 a-ubuntu kernel: [ 290.997478] scsi: killing requests for dead queue
Jan 4 17:30:03 a-ubuntu kernel: [ 291.002712] scsi: killing requests for dead queue
Jan 4 17:30:03 a-ubuntu kernel: [ 291.002880] scsi: killing requests for dead queue
Jan 4 17:30:04 a-ubuntu kernel: [ 291.016249] scsi: killing requests for dead queue
Jan 4 17:30:04 a-ubuntu kernel: [ 291.032252] scsi: killing requests for dead queue
Jan 4 17:30:04 a-ubuntu kernel: [ 291.048182] scsi: killing requests for dead queue
Jan 4 17:30:04 a-ubuntu kernel: [ 291.060178] scsi: killing requests for dead queue
Jan 4 17:30:04 a-ubuntu kernel: [ 291.060357] scsi: killing requests for dead queue
Jan 4 17:30:04 a-ubuntu kernel: [ 291.080381] sd 6:0:0:0: Attached scsi generic sg2 type 0
Jan 4 17:30:04 a-ubuntu kernel: [ 291.080646] sd 6:0:0:1: Attached scsi generic sg3 type 0
Jan 4 17:30:04 a-ubuntu kernel: [ 291.088381] sd 6:0:0:0: [sdb] 16015360 512-byte logical blocks: (8.19 GB/7.63 GiB)
Jan 4 17:30:04 a-ubuntu kernel: [ 291.088988] sd 6:0:0:1: [sdc] Attached SCSI removable disk
Jan 4 17:30:04 a-ubuntu kernel: [ 291.200050] usb 2-4: reset high speed USB device number 3 using ehci_hcd
Jan 4 17:30:04 a-ubuntu kernel: [ 291.448044] usb 2-4: reset high speed USB device number 3 using ehci_hcd
Jan 4 17:30:04 a-ubuntu kernel: [ 291.696055] usb 2-4: reset high speed USB device number 3 using ehci_hcd
Jan 4 17:30:04 a-ubuntu kernel: [ 291.832046] sd 6:0:0:0: [sdb] Test WP failed, assume Write Enabled
Jan 4 17:30:04 a-ubuntu kernel: [ 291.832994] sd 6:0:0:0: [sdb] Asking for cache data failed
Jan 4 17:30:04 a-ubuntu kernel: [ 291.833001] sd 6:0:0:0: [sdb] Assuming drive cache: write through
Jan 4 17:30:04 a-ubuntu kernel: [ 291.834378] sdb: detected capacity change from 8199864320 to 0
Jan 4 17:30:04 a-ubuntu kernel: [ 291.835367] sd 6:0:0:0: [sdb] Attached SCSI removable disk
Jan 4 17:30:06 a-ubuntu kernel: [ 293.004741] sd 6:0:0:0: [sdb] 16015360 512-byte logical blocks: (8.19 GB/7.63 GiB)
Jan 4 17:30:06 a-ubuntu kernel: [ 293.116051] usb 2-4: reset high speed USB device number 3 using ehci_hcd
Jan 4 17:30:21 a-ubuntu kernel: [ 308.228043] usb 2-4: device descriptor read/64, error -110
Jan 4 17:30:36 a-ubuntu kernel: [ 323.444072] usb 2-4: device descriptor read/64, error -110
Jan 4 17:30:36 a-ubuntu kernel: [ 323.660047] usb 2-4: reset high speed USB device number 3 using ehci_hcd
Jan 4 17:30:51 a-ubuntu kernel: [ 338.772085] usb 2-4: device descriptor read/64, error -110
Jan 4 17:31:06 a-ubuntu kernel: [ 353.988064] usb 2-4: device descriptor read/64, error -110
Jan 4 17:31:07 a-ubuntu kernel: [ 354.204058] usb 2-4: reset high speed USB device number 3 using ehci_hcd
Jan 4 17:31:12 a-ubuntu kernel: [ 359.224115] usb 2-4: device descriptor read/8, error -110
Jan 4 17:31:17 a-ubuntu kernel: [ 364.344136] usb 2-4: device descriptor read/8, error -110
Jan 4 17:31:17 a-ubuntu kernel: [ 364.560037] usb 2-4: reset high speed USB device number 3 using ehci_hcd
Jan 4 17:31:22 a-ubuntu kernel: [ 369.580132] usb 2-4: device descriptor read/8, error -110
Jan 4 17:31:27 a-ubuntu kernel: [ 374.700126] usb 2-4: device descriptor read/8, error -110
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804121] usb 2-4: USB disconnect, device number 3
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804518] sd 6:0:0:0: Device offlined - not ready after error recovery
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804600] sd 6:0:0:0: [sdb] No Caching mode page present
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804606] sd 6:0:0:0: [sdb] Assuming drive cache: write through
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804693] sd 6:0:0:0: [sdb] READ CAPACITY failed
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804698] sd 6:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804704] sd 6:0:0:0: [sdb] Sense not available.
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804744] sd 6:0:0:0: [sdb] No Caching mode page present
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804748] sd 6:0:0:0: [sdb] Assuming drive cache: write through
Jan 4 17:31:27 a-ubuntu kernel: [ 374.804754] sdb: detected capacity change from 8199864320 to 0
Jan 4 17:31:27 a-ubuntu kernel: [ 374.820273] scsi: killing requests for dead queue
Jan 4 17:31:27 a-ubuntu kernel: [ 374.852240] scsi: killing requests for dead queue
Jan 4 17:31:27 a-ubuntu kernel: [ 374.980054] usb 2-4: new high speed USB device number 4 using ehci_hcd
Jan 4 17:31:43 a-ubuntu kernel: [ 390.092059] usb 2-4: device descriptor read/64, error -110
Jan 4 17:31:58 a-ubuntu kernel: [ 405.308070] usb 2-4: device descriptor read/64, error -110
Jan 4 17:31:58 a-ubuntu kernel: [ 405.524078] usb 2-4: new high speed USB device number 5 using ehci_hcd
The rest of the output is in this post: http://pastebin.ubuntu.com/792915/ - and here is the lsusb output (note this describes other devices on the bus, a Chicony camera and a Holtek keyboard): bDeviceSubClass 2 ? bDeviceProtocol 1 Interface Association bMaxPacketSize0 64 idVendor 0x04f2 Chicony Electronics Co., Ltd idProduct 0xb008 USB 2.0 Camera bcdDevice 93.27 iManufacturer 2 Chicony Electronics Co., Ltd. iProduct 1 Chicony USB 2.0 Camera iSerial 3 SN0001 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 565 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 500mA Interface Association: bLength 8 bDescriptorType 11 bFirstInterface 0 bInterfaceCount 2 bFunctionClass 14 Video bFunctionSubClass 3 Video Interface Collection bFunctionProtocol 0 iFunction 1 Chicony USB 2.0 Camera Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 1 Video Control bInterfaceProtocol 0 iInterface 1 Chicony USB 2.0 Camera VideoControl Interface Descriptor: bLength 13 bDescriptorType 36 bDescriptorSubtype 1 (HEADER) bcdUVC 1.00 wTotalLength 77 dwClockFrequency 15.000000MHz bInCollection 1 baInterfaceNr( 0) 1 VideoControl Interface Descriptor: bLength 9 bDescriptorType 36 bDescriptorSubtype 3 (OUTPUT_TERMINAL) bTerminalID 2 wTerminalType 0x0101 USB Streaming bAssocTerminal 0 bSourceID 4 iTerminal 0 VideoControl Interface Descriptor: bLength 26 bDescriptorType 36 bDescriptorSubtype 6 (EXTENSION_UNIT) bUnitID 4 guidExtensionCode {7033f028-1163-2e4a-ba2c-6890eb334016} bNumControl 1 bNrPins 1 baSourceID( 0) 3 bControlSize 1 bmControls( 0) 0x01 iExtension 0 VideoControl Interface Descriptor: bLength 18 bDescriptorType 36 bDescriptorSubtype 2 (INPUT_TERMINAL) bTerminalID 1 wTerminalType 0x0201 Camera Sensor bAssocTerminal 0 iTerminal 0 wObjectiveFocalLengthMin 0 wObjectiveFocalLengthMax 0 wOcularFocalLength 0 bControlSize 3 bmControls 0x00000000 VideoControl Interface Descriptor: bLength 11 bDescriptorType 36 bDescriptorSubtype 5 (PROCESSING_UNIT) Warning: Descriptor too short bUnitID 3 bSourceID 1 wMaxMultiplier 0 bControlSize 2 bmControls 0x0000053f Brightness Contrast Hue Saturation Sharpness Gamma Backlight Compensation Power
Line Frequency iProcessing 0 bmVideoStandards 0x a NTSC - 525/60 SECAM - 625/50 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0010 1x 16 bytes bInterval 6 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 0 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 VideoStreaming Interface Descriptor: bLength 14 bDescriptorType 36 bDescriptorSubtype 1 (INPUT_HEADER) bNumFormats 1 wTotalLength 345 bEndPointAddress 129 bmInfo 0 bTerminalLink 2 bStillCaptureMethod 0 bTriggerSupport 1 bTriggerUsage 0 bControlSize 1 bmaControls( 0) 27 VideoStreaming Interface Descriptor: bLength 27 bDescriptorType 36 bDescriptorSubtype 4 (FORMAT_UNCOMPRESSED) bFormatIndex 1 bNumFrameDescriptors 7 guidFormat {59555932-0000-1000-8000-00aa00389b71} bBitsPerPixel 16 bDefaultFrameIndex 1 bAspectRatioX 0 bAspectRatioY 0 bmInterlaceFlags 0x00 Interlaced stream or variable: No Fields per frame: 2 fields Field 1 first: No Field pattern: Field 1 only bCopyProtect 0 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 1 bmCapabilities 0x00 Still image unsupported wWidth 640 wHeight 480 dwMinBitRate 614400 dwMaxBitRate 18432000 dwMaxVideoFrameBufferSize 614400 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 2 bmCapabilities 0x00 Still image unsupported wWidth 352 wHeight 288 dwMinBitRate 202752 dwMaxBitRate 6082560 dwMaxVideoFrameBufferSize 202752 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 3 bmCapabilities 0x00 Still image unsupported wWidth 320 wHeight 240 dwMinBitRate 153600 dwMaxBitRate 4608000 dwMaxVideoFrameBufferSize 153600 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 4 bmCapabilities 0x00 Still image unsupported wWidth 176 wHeight 144 dwMinBitRate 50688 dwMaxBitRate 1520640 dwMaxVideoFrameBufferSize 50688 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 46 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 5 bmCapabilities 0x00 Still image unsupported wWidth 160 wHeight 120 dwMinBitRate 38400 dwMaxBitRate 1152000 dwMaxVideoFrameBufferSize 38400 dwDefaultFrameInterval 333333 bFrameIntervalType 5 dwFrameInterval( 0) 333333 dwFrameInterval( 1) 500000 dwFrameInterval( 2) 666666 dwFrameInterval( 3) 1000000 dwFrameInterval( 4) 2000000 VideoStreaming Interface Descriptor: bLength 34 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 6 bmCapabilities 
0x00 Still image unsupported wWidth 1280 wHeight 800 dwMinBitRate 2048000 dwMaxBitRate 18432000 dwMaxVideoFrameBufferSize 2048000 dwDefaultFrameInterval 1333333 bFrameIntervalType 2 dwFrameInterval( 0) 1333333 dwFrameInterval( 1) 2000000 VideoStreaming Interface Descriptor: bLength 34 bDescriptorType 36 bDescriptorSubtype 5 (FRAME_UNCOMPRESSED) bFrameIndex 7 bmCapabilities 0x00 Still image unsupported wWidth 1280 wHeight 1024 dwMinBitRate 2621440 dwMaxBitRate 23592960 dwMaxVideoFrameBufferSize 2621440 dwDefaultFrameInterval 1333333 bFrameIntervalType 2 dwFrameInterval( 0) 1333333 dwFrameInterval( 1) 2000000 VideoStreaming Interface Descriptor: bLength 6 bDescriptorType 36 bDescriptorSubtype 13 (COLORFORMAT) bColorPrimaries 1 (BT.709,sRGB) bTransferCharacteristics 1 (BT.709) bMatrixCoefficients 4 (SMPTE 170M (BT.601)) Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 1 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x0080 1x 128 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 2 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x0100 1x 256 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 3 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x0320 1x 800 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 4 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x0b20 2x 800 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 5 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x1320 3x 800 bytes bInterval 1 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 6 bNumEndpoints 1 bInterfaceClass 14 Video bInterfaceSubClass 2 Video Streaming bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 5 Transfer Type Isochronous Synch Type Asynchronous Usage Type Data wMaxPacketSize 0x13e8 3x 1000 bytes bInterval 1 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 239 Miscellaneous Device bDeviceSubClass 2 ? 
bDeviceProtocol 1 Interface Association bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) Bus 006 Device 002: ID 04d9:1503 Holtek Semiconductor, Inc. Shortboard Lefty Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x04d9 Holtek Semiconductor, Inc. idProduct 0x1503 Shortboard Lefty bcdDevice 3.10 iManufacturer 1 iProduct 2 USB Keyboard iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 59 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 100mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 1 Keyboard iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 62 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 101 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Device Status: 0x0000 (Bus Powered)

    Read the article

  • Demystifying Silverlight Dependency Properties

    - by dwahlin
I have the opportunity to teach a lot of people about Silverlight (amongst other technologies) and one of the topics that definitely confuses people initially is the concept of dependency properties. I confess that when I first heard about them my initial thought was "Why do we need a specialized type of property?" While you can certainly use standard CLR properties in Silverlight applications, Silverlight relies heavily on dependency properties for just about everything it does behind the scenes. In fact, dependency properties are an essential part of the data binding, template, style and animation functionality available in Silverlight; standard CLR properties simply act as wrappers around them. In this post I wanted to put together a (hopefully) simple explanation of dependency properties and why you should care about them if you're currently working with Silverlight or looking to move to it.

What are Dependency Properties?

XAML provides a great way to define layout controls, user input controls, shapes, colors and data binding expressions in a declarative manner. There's a lot that goes on behind the scenes in order to make XAML work, and an important part of that magic is the use of dependency properties. If you want to bind data to a property, style it, animate it or transform it in XAML then the property involved has to be a dependency property to work properly. If you've ever positioned a control in a Canvas using Canvas.Left or placed a control in a specific Grid row using Grid.Row then you've used an attached property, which is a specialized type of dependency property. Dependency properties play a key role in XAML and the overall Silverlight framework: any property that you bind, style, template, animate or transform must be a dependency property in a Silverlight application. You can programmatically bind values to controls and work with standard CLR properties, but if you want to use the built-in binding expressions available in XAML (one of my favorite features) or the Binding class available through code then dependency properties are a necessity.

Dependency properties aren't needed in every situation, but if you want to customize your application very much you'll eventually end up needing them. For example, if you create a custom user control and want to expose a property that consumers can use to change the background color, you have to define it as a dependency property if you want bindings, styles and other features to be available for use. Now that the overall purpose of dependency properties has been discussed, let's take a look at how you can create them.

Creating Dependency Properties

When .NET first came out you had to write backing fields for each property that you defined, as shown next:

Brush _ScheduleBackground;
public Brush ScheduleBackground
{
    get { return _ScheduleBackground; }
    set { _ScheduleBackground = value; }
}

Although C# 3.0 added auto-implemented properties (for example: public Brush ScheduleBackground { get; set; }), where the compiler automatically generates the backing field used by the get and set blocks, the concept is still the same as shown in the above code: a property acts as a wrapper around a field. Silverlight dependency properties replace the _ScheduleBackground field shown in the previous code and act as the backing store for a standard CLR property.
The following code shows an example of defining a dependency property named ScheduleBackgroundProperty:

public static readonly DependencyProperty ScheduleBackgroundProperty =
    DependencyProperty.Register("ScheduleBackground", typeof(Brush),
        typeof(Scheduler), null);

Looking through the code, the first thing that may stand out is that the definition for ScheduleBackgroundProperty is marked static and readonly, and that it is of type DependencyProperty. This is a standard pattern that you'll use when working with dependency properties. You'll also notice that the field name explicitly adds the word "Property" to the property name, which is another convention you'll see followed. In addition to defining the field, the code calls the static DependencyProperty.Register method and passes the name of the property to register (ScheduleBackground in this case) as a string, the type of the property, the type of the class that owns the property, and a null value (more on the null value later). In this example a class named Scheduler acts as the owner.

The call to Register() handles registering the property as a dependency property, but there's a little more work to do to allow a value to be assigned to and retrieved from it. The following code shows the complete code that you'll typically use when creating a dependency property. You can find code snippets that greatly simplify the process of creating dependency properties out on the web, and the MVVM Light download available from http://mvvmlight.codeplex.com comes with built-in dependency property snippets as well.

public static readonly DependencyProperty ScheduleBackgroundProperty =
    DependencyProperty.Register("ScheduleBackground", typeof(Brush),
        typeof(Scheduler), null);

public Brush ScheduleBackground
{
    get { return (Brush)GetValue(ScheduleBackgroundProperty); }
    set { SetValue(ScheduleBackgroundProperty, value); }
}

The standard CLR property code should look familiar, since it simply wraps the dependency property. However, you'll notice that the get and set blocks call the GetValue and SetValue methods respectively to perform the appropriate operation on the dependency property. GetValue and SetValue are members of the DependencyObject class, which is another key component of the Silverlight framework. Silverlight controls and classes (TextBox, UserControl, CompositeTransform, DataGrid, etc.) ultimately derive from DependencyObject in their inheritance hierarchy so that they can support dependency properties. Dependency properties defined in Silverlight controls and other classes follow this same pattern: register the property by calling Register(), then wrap the registered dependency property in a standard CLR property that allows a value to be assigned and retrieved. If you need to expose a new property on a custom control that supports data binding expressions in XAML, you'll follow this pattern too. Dependency properties are extremely useful once you understand why they're needed and how they're defined.

Detecting Changes and Setting Defaults

When working with dependency properties there will be times when you want to assign a default value, or detect when a property changes so that you can keep the user interface in sync with the property value.
Silverlight's DependencyProperty.Register() method provides a fourth parameter that accepts a PropertyMetadata object instance. PropertyMetadata can be used to hook a callback method to a dependency property - the callback is invoked when the property value changes - and it can also be used to assign a default value to the dependency property. By passing null for the final parameter of Register() you're telling the property that you don't care about changes and don't have a default value to apply. Here are the different constructor overloads available on the PropertyMetadata class:

PropertyMetadata(Object) - used to assign a default value to a dependency property.
PropertyMetadata(PropertyChangedCallback) - used to assign a property changed callback method.
PropertyMetadata(Object, PropertyChangedCallback) - used to assign a default property value and a property changed callback.

There are many situations where you need to know when a dependency property changes, or where you want to apply a default. Performing either task is easily accomplished by creating a new instance of the PropertyMetadata class and passing the appropriate values to its constructor. The following code shows an enhanced version of the initial dependency property code shown earlier that demonstrates these concepts:

public Brush ScheduleBackground
{
    get { return (Brush)GetValue(ScheduleBackgroundProperty); }
    set { SetValue(ScheduleBackgroundProperty, value); }
}

public static readonly DependencyProperty ScheduleBackgroundProperty =
    DependencyProperty.Register("ScheduleBackground", typeof(Brush), typeof(Scheduler),
        new PropertyMetadata(new SolidColorBrush(Colors.LightGray),
            ScheduleBackgroundChanged));

private static void ScheduleBackgroundChanged(DependencyObject d,
    DependencyPropertyChangedEventArgs e)
{
    var scheduler = d as Scheduler;
    scheduler.Background = e.NewValue as Brush;
}

The code wires ScheduleBackgroundProperty to a property changed callback method named ScheduleBackgroundChanged. What's interesting is that this callback method is static (as is the dependency property), so it gets passed the instance of the object that owns the property that changed (otherwise we wouldn't be able to get to the object instance). In this example the dependency object is cast to a Scheduler object and its Background property is assigned the new value of the dependency property. The code also handles assigning a default value of LightGray to the dependency property by creating a new instance of a SolidColorBrush.

To Sum Up

In this post you've seen the role of dependency properties and how they can be defined in code. They play a big role in XAML and the overall Silverlight framework. You can think of dependency properties as replacements for the fields you'd normally use with standard CLR properties. In addition to a discussion of how dependency properties are created, you also saw how to use the PropertyMetadata class to define default dependency property values and hook a dependency property to a callback method. The most important thing to understand about dependency properties (especially if you're new to Silverlight) is that they're needed if you want a property to support data binding, animations, transformations and styles properly. Any time you create a property on a custom control or user control that has these types of requirements, you'll want to pick a dependency property over a standard CLR property with a backing field.
There's more that can be covered with dependency properties, including a related property called an attached property... more to come.
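As a teaser for that follow-up, here is a minimal sketch of the attached property pattern - the SchedulerOptions owner class and the IsHighlighted property are hypothetical, but RegisterAttached plus a pair of static Get/Set methods is the standard shape:

public static class SchedulerOptions
{
    // Attached properties are registered with RegisterAttached instead of Register
    public static readonly DependencyProperty IsHighlightedProperty =
        DependencyProperty.RegisterAttached("IsHighlighted", typeof(bool),
            typeof(SchedulerOptions), new PropertyMetadata(false));

    // XAML calls these static accessors when it sees SchedulerOptions.IsHighlighted
    public static bool GetIsHighlighted(DependencyObject obj)
    {
        return (bool)obj.GetValue(IsHighlightedProperty);
    }

    public static void SetIsHighlighted(DependencyObject obj, bool value)
    {
        obj.SetValue(IsHighlightedProperty, value);
    }
}

In XAML you would then write local:SchedulerOptions.IsHighlighted="True" on any element, just as Canvas.Left and Grid.Row are used above.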

    Read the article

  • dpkg error : old pre-removal script returned error exit status 102

    - by Siva Prasad Varma
I am unable to install or remove a package on my Ubuntu 10.04 due to the following error.

$ sudo apt-get autoremove
Password:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  busybox
0 upgraded, 0 newly installed, 1 to remove and 9 not upgraded.
1 not fully installed or removed.
Need to get 0B/212kB of archives.
After this operation, 627kB disk space will be freed.
Do you want to continue [Y/n]? y
Selecting previously deselected package nscd.
(Reading database ... 235651 files and directories currently installed.)
Preparing to replace nscd 2.11.1-0ubuntu7.8 (using .../nscd_2.11.1-0ubuntu7.8_amd64.deb) ...
invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
dpkg: warning: old pre-removal script returned error exit status 102
dpkg - trying script from the new package instead ...
invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
dpkg: error processing /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
 subprocess new pre-removal script returned error exit status 102
update-rc.d: warning: /etc/rc2.d/S76nscd is not a symbolic link
invoke-rc.d: not a symlink: /etc/rc2.d/S76nscd
dpkg: error while cleaning up:
 subprocess installed post-installation script returned error exit status 102
Errors were encountered while processing:
 /var/cache/apt/archives/nscd_2.11.1-0ubuntu7.8_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

What should I do to resolve this error? I have tried sudo dpkg --remove --force-remove-reinstreq nscd but it did not work.

    Read the article

  • I can't add PPA repository behind the proxy (with @ in the username)

    - by kenorb
I'm trying to add a PPA repository (as root) with the following commands:

export HTTP_PROXY="http://[email protected]:[email protected]:8080"
add-apt-repository ppa:nilarimogard/webupd8

Traceback (most recent call last):
  File "/usr/bin/add-apt-repository", line 125, in <module>
    ppa_info = get_ppa_info_from_lp(user, ppa_name)
  File "/usr/lib/python2.7/dist-packages/softwareproperties/ppa.py", line 84, in get_ppa_info_from_lp
    curl.perform()
pycurl.error: (56, 'Received HTTP code 407 from proxy after CONNECT')

Unfortunately it doesn't work. It looks like curl is connecting to the proxy, but the proxy says that authentication is required. I've tried a .curlrc and the http_proxy environment variable instead, but neither works.

strace -e network,write -s1000 add-apt-repository ppa:nilarimogard/webupd8
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 4
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 4
connect(4, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("165.x.x.232")}, 16) = -1 EINPROGRESS (Operation now in progress)
getsockopt(4, SOL_SOCKET, SO_ERROR, [0], [4]) = 0
getpeername(4, {sa_family=AF_INET, sin_port=htons(8080), sin_addr=inet_addr("165.x.x.232")}, [16]) = 0
getsockname(4, {sa_family=AF_INET, sin_port=htons(46025), sin_addr=inet_addr("161.20.75.220")}, [16]) = 0
sendto(4, "CONNECT launchpad.net:443 HTTP/1.1\r\nHost: launchpad.net:443\r\nUser-Agent: PycURL/7.22.0\r\nProxy-Connection: Keep-Alive\r\nAccept: application/json\r\n\r\n", 146, MSG_NOSIGNAL, NULL, 0) = 146
recvfrom(4, "HTTP/1.1 407 Proxy Authentication Required\r\nProxy-Authenticate: BASIC realm=\"proxy\"\r\nCache-Control: no-cache\r\nPragma: no-cache\r\nContent-Type: text/html; charset=utf-8\r\nProxy-Connection: close\r\nSet-Cookie: BCSI-CS-91b9906520151dad=2; Path=/\r\nConnection: close\

Maybe it's because there is an @ sign in the username? Wget works fine with the proxy.

Related: How do I add a repository from behind a proxy?

Environment:
Ubuntu 12.04
curl 7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
curl Features: GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP

    Read the article

  • WPF Login Verification Using Active Directory

    - by psheriff
Back in October of 2009 I created a WPF login screen (Figure 1) that just showed how to create the layout for a login screen. That one sample is probably the most downloaded sample we have. So in this blog post I thought I would update that screen and also hook it up to show how to authenticate your user against Active Directory.

Figure 1: Original WPF Login Screen

I have updated not only the code behind for this login screen, but also the look and feel, as shown in Figure 2.

Figure 2: An Updated WPF Login Screen

The UI

To create the UI for this login screen you can refer to my October 2009 blog post to see how to create the borderless window, and you can look at the sample code to see how I created the linear gradient brush for the background. There are just a few differences in this screen compared to the old version. First, I changed the key image and, instead of using words for the Cancel and Login buttons, I used some icons. Second, I added a text box to hold the domain name to authenticate against. This text box is automatically filled in if you are connected to a network: in the Window_Loaded event procedure of the winLogin window you can retrieve the user's domain name from the Environment.UserDomainName property. For example:

txtDomain.Text = Environment.UserDomainName

The ADHelper Class

Instead of coding the call to authenticate the user directly in the login screen, I created an ADHelper class. This will make it easier to add additional AD calls in the future. The ADHelper class contains just one method at this time, called AuthenticateUser, which authenticates a user name and password against the specified domain. The login screen gathers the credentials from the user - the user name, the password, and the domain name to authenticate against. To use this ADHelper class you will need to add a reference to System.DirectoryServices.dll in .NET.

The AuthenticateUser Method

In order to authenticate a user against your Active Directory you need to supply a valid LDAP path string to the constructor of the DirectoryEntry class, in the format LDAP://DomainName, along with the user name and password. Once the DirectoryEntry object is populated with the LDAP path string, user name and password, you pass it to the constructor of a DirectorySearcher object and call the FindOne method on it. If the DirectorySearcher object returns a SearchResult then the credentials supplied are valid; if the credentials are not valid on the Active Directory, an exception is thrown.
C#

public bool AuthenticateUser(string domainName, string userName, string password)
{
    bool ret = false;

    try
    {
        DirectoryEntry de = new DirectoryEntry("LDAP://" + domainName,
                                               userName, password);
        DirectorySearcher dsearch = new DirectorySearcher(de);
        SearchResult results = null;

        results = dsearch.FindOne();

        ret = true;
    }
    catch
    {
        ret = false;
    }

    return ret;
}

Visual Basic

Public Function AuthenticateUser(ByVal domainName As String, _
    ByVal userName As String, ByVal password As String) As Boolean
  Dim ret As Boolean = False

  Try
    Dim de As New DirectoryEntry("LDAP://" & domainName, _
                                 userName, password)
    Dim dsearch As New DirectorySearcher(de)
    Dim results As SearchResult = Nothing

    results = dsearch.FindOne()

    ret = True
  Catch
    ret = False
  End Try

  Return ret
End Function

In the Click event procedure under the Login button you will find the following code that will validate the credentials that the user types into the login window.

C#

private void btnLogin_Click(object sender, RoutedEventArgs e)
{
    ADHelper ad = new ADHelper();

    if (ad.AuthenticateUser(txtDomain.Text,
            txtUserName.Text, txtPassword.Password))
        DialogResult = true;
    else
        MessageBox.Show("Unable to Authenticate Using the Supplied Credentials");
}

Visual Basic

Private Sub btnLogin_Click(ByVal sender As Object, _
    ByVal e As RoutedEventArgs)
  Dim ad As New ADHelper()

  If ad.AuthenticateUser(txtDomain.Text, txtUserName.Text, _
                         txtPassword.Password) Then
    DialogResult = True
  Else
    MessageBox.Show("Unable to Authenticate Using the Supplied Credentials")
  End If
End Sub

Displaying the Login Screen

At some point when your application launches, you will need to display your login screen modally. Below is the code that you would call to display the login form (named winLogin in my sample application). This code is called from the main application form, and thus the owner of the login screen is set to "this". You then call the ShowDialog method on the login screen to have this form displayed modally. After the user clicks on one of the two buttons you need to check to see what the DialogResult property was set to. The DialogResult property is a nullable type, and thus you first need to check to see if the value has been set.

C#

private void DisplayLoginScreen()
{
    winLogin win = new winLogin();

    win.Owner = this;
    win.ShowDialog();
    if (win.DialogResult.HasValue && win.DialogResult.Value)
        MessageBox.Show("User Logged In");
    else
        this.Close();
}

Visual Basic

Private Sub DisplayLoginScreen()
  Dim win As New winLogin()

  win.Owner = Me
  win.ShowDialog()
  If win.DialogResult.HasValue And win.DialogResult.Value Then
    MessageBox.Show("User Logged In")
  Else
    Me.Close()
  End If
End Sub

Summary

Creating a nice looking login screen is fairly simple to do in WPF. Using the Active Directory services from a WPF application should make your desktop programming task easier, as you do not need to create your own user authentication system. I hope this article gave you some ideas on how to create a login screen in WPF.

NOTE: You can download the complete sample code for this blog entry at my website: http://www.pdsa.com/downloads. Click on Tips & Tricks, then select 'WPF Login Verification Using Active Directory' from the drop down list.
Good Luck with your Coding, Paul Sheriff

** SPECIAL OFFER FOR MY BLOG READERS ** We frequently offer a FREE gift for readers of my blog. Visit http://www.pdsa.com/Event/Blog for your FREE gift!
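One footnote to the approach above: if you can target .NET 3.5 or later, the System.DirectoryServices.AccountManagement namespace offers a more direct credential check than DirectorySearcher.FindOne(). This is an alternative technique, not the article's method - a sketch:

using System.DirectoryServices.AccountManagement;

public class ADHelperAlternative
{
    // Validates the credentials against the domain; requires a reference
    // to System.DirectoryServices.AccountManagement.dll
    public bool AuthenticateUser(string domainName, string userName, string password)
    {
        using (var context = new PrincipalContext(ContextType.Domain, domainName))
        {
            return context.ValidateCredentials(userName, password);
        }
    }
}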

    Read the article

  • Caching in the .NET Stack: Inside-Out

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/06/28/caching-in-the-.net-stack-inside-out.aspxI'm delighted to have my first course published on Pluralsight - Caching in the .NET Stack: Inside-out.   It's a pretty comprehensive look at caching in .NET solutions. The first half covers using local, remote and persistent cache stores inside the solution, including the .NET MemoryCache, NCache Express, AppFabric Caching, memcached, Azure Table Storage and local disk stores. The second half covers caching outside the solution in HTTP clients and proxies, and how to set up ASP.NET WebForms, MVC, Web API and WCF projects to use HTTP validation and expiration caching.   The course takes a hands-on approach, starting with a distributed solution that has no caching, analysing key points which can benefit from caching, and adding different types of cache. At the end of the course I run through a set of before and after performance tests, stressing the solution under load. Without caching and with 60 concurrent users the page response time maxes out at 18 seconds - with caching that falls to 2 seconds, so it's a huge improvement from very little effort. I’d be glad to hear feedback if you watch the course, especially if it’s as positive as my editor’s.
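As a small taste of the HTTP expiration caching covered in the second half of the course, a Web API action can opt a response into client and proxy caching like the sketch below - a hedged illustration in which the AlbumsController and LoadAlbums names are hypothetical stand-ins:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class AlbumsController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK, LoadAlbums());

        // Expiration caching: clients and proxies may reuse this response
        // for up to 5 minutes without revalidating.
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromMinutes(5)
        };
        return response;
    }

    // Stand-in for the real data access
    private string[] LoadAlbums()
    {
        return new[] { "Album 1", "Album 2" };
    }
}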

    Read the article

  • links for 2010-04-29

    - by Bob Rhubart
    AS11 Oracle B2B Sync Support - Series 1 (Oracle Fusion Middleware - B2B Team Blog) Sinkarbabu Kirubanithi with part 1 of a planned 3-part series on synchronous message support in Oracle B2B 11g. (tags: oracle otn fusionmiddleware b2b) Java 2 Go!: How to write a simple yet “bullet-proof” object cache "So, while we were thinking hard to come up with the most efficient, generic and elegant way of finally implementing our weak and soft caches, Mr. Eric Chan, who is one of the main architects in Oracle Beehive team, had a very interesting breakthrough. In short terms, he thought of a very nice way of combining both WeakReference and SoftReference in our weak and soft caches so that they would provide exactly the same functionality without having to deal with those reference queues at all. Basically, instead of using a plain HashMap as our backing storage, we used a java.util.WeakHashMap in both our cache implementations. The hat trick was what and how to store things in it." - Eduardo Rodrigues (tags: oracle java sun) @jamet123: First Look – Oracle Data Mining "[Oracle Data Mining] is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." James Taylor (tags: oracle otn datamining database) Live Webcast: Social BPM: Integrating Enterprise 2.0 with Business Applications #oracle Peggy Chen and Dan Tortorici show you how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. Wednesday, May 12, 2010 at 11:00am PT / 2:00pm ET (tags: oracle otn enterprise2.0 webcast)
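For .NET readers, a very rough C# analogue of the weak-cache idea quoted above might look like this sketch (an illustration only, not the Oracle implementation - entries become reclaimable because only WeakReferences are held):

using System;
using System.Collections.Generic;

// Minimal weak cache: cached values may be collected by the GC at any time,
// so Get can return null even after a successful Put.
public class WeakCache<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference> _store =
        new Dictionary<TKey, WeakReference>();

    public void Put(TKey key, TValue value)
    {
        _store[key] = new WeakReference(value);
    }

    public TValue Get(TKey key)
    {
        WeakReference reference;
        if (_store.TryGetValue(key, out reference))
            return reference.Target as TValue; // null once the GC has collected it
        return null;
    }
}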

    Read the article

  • I can't upgrade 13.10 because of broken pipe

    - by user212179
I try upgrading and this is what I get:

christopher@chris-computer:~$ sudo apt-get upgrade
[sudo] password for christopher:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be upgraded:
  librhythmbox-core7
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/809 kB of archives.
After this operation, 39.9 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
(Reading database ... 170617 files and directories currently installed.)
Preparing to replace librhythmbox-core7 2.99.1-0ubuntu1 (using .../librhythmbox-core7_3.0.1-0~13.10~ppa1_i386.deb) ...
Unpacking replacement librhythmbox-core7 ...
dpkg: error processing /var/cache/apt/archives/librhythmbox-core7_3.0.1-0~13.10~ppa1_i386.deb (--unpack):
 trying to overwrite '/usr/lib/librhythmbox-core.so.8.0.0', which is also in package librhythmbox-core8 3.0.1-1ubuntu2~ppa0
No apport report written because MaxReports is reached already
dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
Errors were encountered while processing:
 /var/cache/apt/archives/librhythmbox-core7_3.0.1-0~13.10~ppa1_i386.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Windows Azure Learning Plan - Application Fabric

    - by BuckWoody
This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the Application Fabric for Windows Azure. It serves three main purposes - Access Control, Caching, and as a Service Bus.

Overview and Training - overview and general information about the Azure Application Fabric: what it is, how it works, and where you can learn more.
- General Introduction and Overview: http://msdn.microsoft.com/en-us/library/ee922714.aspx
- Access Control Service Overview: http://msdn.microsoft.com/en-us/magazine/gg490345.aspx
- Microsoft Documentation: http://msdn.microsoft.com/en-gb/windowsazure/netservices.aspx

Learning and Examples - sources for online and other Azure Application Fabric training.
- Application Fabric SDK: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=39856a03-1490-4283-908f-c8bf0bfad8a5&displaylang=en
- Application Fabric Caching Service Primer: http://blogs.msdn.com/b/appfabriccat/archive/2010/11/29/azure-appfabric-caching-service-soup-to-nuts-primer.aspx?wa=wsignin1.0
- Hands-On Lab - Building Windows Azure Applications with the Caching Service: http://www.wadewegner.com/2010/11/hands-on-lab-building-windows-azure-applications-with-the-caching-service/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+WadeWegner+%28Wade+Wegner+-+Technical%29

Architecture - Azure Application Fabric internals and architectures for scale-out and other use cases.
- Azure Application Fabric Architecture Guide: http://blogs.msdn.com/b/yasserabdelkader/archive/2010/09/12/release-of-windows-server-appfabric-architecture-guide.aspx
- Windows Azure AppFabric Service Bus - A Deep Dive (Video): http://www.msteched.com/2010/Europe/ASI410
- Access Control Service (ACS) High Level Architecture: http://blogs.msdn.com/b/alikl/archive/2010/09/28/azure-appfabric-access-control-service-acs-v-2-0-high-level-architecture-web-application-scenario.aspx

Applications and Programming - programming patterns and architectures for Application Fabric systems.
- Various Examples from PDC 2010 on using the Azure Application Fabric as a Service Bus: http://tinyurl.com/2dcnt8o
- Creating a Distributed Cache using the Application Fabric: http://blog.structuretoobig.com/post/2010/08/31/Creating-a-Poor-Mane28099s-Distributed-Cache-in-Azure.aspx
- Azure Application Fabric Java SDK: http://jdotnetservices.com/
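As a flavour of the caching service those labs cover, client code against the AppFabric caching API looks roughly like the sketch below - it assumes the caching client assemblies are referenced and a cache endpoint is configured in the dataCacheClient configuration section:

using Microsoft.ApplicationServer.Caching;

public class AppFabricCacheSketch
{
    public void Demo()
    {
        // DataCacheFactory reads endpoint details from configuration
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        // Simple put/get round trip against the distributed cache
        cache.Put("greeting", "Hello from the Application Fabric cache");
        var value = (string)cache.Get("greeting");
    }
}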

    Read the article

  • Speaking at Dog Food Conference 2013

    - by Brian T. Jackett
    Originally posted on: http://geekswithblogs.net/bjackett/archive/2013/10/22/speaking-at-dog-food-conference-2013.aspx    It has been a couple years since I last attended / spoke at Dog Food Conference, but on Nov 21-22, 2013 I’ll be speaking at Dog Food Conference 2013 here in Columbus, OH.  For those of you confused by the name of the conference (no it’s not about dog food), read up on the concept of dogfooding .  This conference has a history of great sessions from local and regional speakers and I look forward to being a part of it once again.  Registration is now open (registration link) and is expected to sell out quickly.  Reserve your spot today.   Title: The Evolution of Social in SharePoint Audience and Level: IT Pro / Architect, Intermediate Abstract: Activities, newsfeed, community sites, following... these are just some of the big changes introduced to the social experience in SharePoint 2013. This class will discuss the evolution of the social components since SharePoint 2010, the architecture (distributed cache, microfeed, etc.) that supports the social experience, Yammer integration, and proper planning considerations when deploying social capabilities (personal sites, SkyDrive Pro and distributed cache). This session will include demos of the social newsfeed, community sites, and mentions. Attendees should have an intermediate knowledge of SharePoint 2010 or 2013 administration.         -Frog Out

    Read the article

  • Complex knowledge management system with CRM..written internally

    - by JonH
We've all heard of Salesforce and SugarCRM and systems like them. Unfortunately, at my workplace we have been asked to write a similar system (rather than license or purchase one). The database is fairly large. Think of modules such as: corporate groups, customers, programs, projects, sub-projects, and issue management. In simple terms, a corporate group has one to many customers, a program has one or more projects, a project has one or more sub-projects, and an issue can be created on many sub-projects. Of course the system is a bit more complex, but instead of listing every single module I think it's best to keep it simple.

In any event, the system in its current state has only two resources working on it (basically we have to do it all: CSS, database, jQuery, ASP.NET and C#). We've started off well by defining the UI master and footer pages, so we can reuse those across all of our pages. Now comes the hard part. The system will have about 4k end users, with say 5-10% being concurrent users. We are wondering if it makes sense to cache our database data (for say 5-10 minutes) rather than continuously hit our database. The reason is that some of these pages may have 5-10 search filters associated with them - imagine how many database hits every time a selection is made in a search box. Also, some of these search fields cascade, so selecting a value in an initial drop-down may cascade to several drop-down boxes under it. Is it wrong to cache? I am not finding many articles on whether it is a good idea or not. Remember, the system is similar to a CRM system where we manage our various customers, projects, sub-projects, issues, etc.
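For what it's worth, the 5-10 minute caching described in the question maps naturally onto a cache-aside pattern over .NET's MemoryCache. The sketch below is only illustrative - the "customers" key, the GetCustomers name and the loadFromDb delegate are hypothetical stand-ins for the real data access:

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class LookupCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Cache-aside: serve filter data from memory, reloading from the database
    // only when the cached copy has expired (5 minutes here).
    public static IList<string> GetCustomers(Func<IList<string>> loadFromDb)
    {
        var customers = Cache.Get("customers") as IList<string>;
        if (customers == null)
        {
            customers = loadFromDb();
            Cache.Set("customers", customers, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
            });
        }
        return customers;
    }
}

With a few hundred concurrent users, even a short expiration like this turns several database hits per filter interaction into one hit per lookup every five minutes.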

    Read the article

  • Find The Bug

    - by Alois Kraus
What does this code print, and why?

HashSet<int> set = new HashSet<int>();
int[] data = new int[] { 1, 2, 1, 2 };

var unique = from i in data
             where set.Add(i)
             select i;
// Compiles to: var unique = Enumerable.Where(data, (i) => set.Add(i));

foreach (var i in unique)
{
    Console.WriteLine("First: {0}", i);
}

foreach (var i in unique)
{
    Console.WriteLine("Second: {0}", i);
}

The output is:

First: 1
First: 2

Why is there no output from the second loop? The reason is that LINQ does not cache the results of the query - it recalculates the contents on every new enumeration. Since I have used external state (the HashSet decides which entries are part of the output), I arrive at an empty sequence the second time around: Add on the HashSet returns false for all values I have already passed in, leaving nothing to return. The solution is quite simple: use the Distinct extension method, or cache the results by calling ToList() or ToArray() on the result of the LINQ query. Lesson learned: never forget to think about state in Where clauses!
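As a quick illustration of the suggested fixes, here is a minimal sketch using the same data as above - Distinct() removes the external state entirely, and ToList() materializes the results once so repeated enumeration is safe:

using System;
using System.Linq;

class FindTheBugFixed
{
    static void Main()
    {
        int[] data = new int[] { 1, 2, 1, 2 };

        // Fix 1: let LINQ do the de-duplication - no external state involved
        var unique = data.Distinct();

        // Fix 2: materialize the query once; every enumeration sees the same list
        var uniqueList = data.Distinct().ToList();

        foreach (var i in unique)
            Console.WriteLine("First: {0}", i);   // First: 1, First: 2

        foreach (var i in uniqueList)
            Console.WriteLine("Second: {0}", i);  // Second: 1, Second: 2
    }
}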

    Read the article

  • "wrong fs type, bad option, bad superblock" error while mounting FAT Drives

    - by cshubhamrao
I am unable to mount any FAT32- or FAT16-formatted USB disk under Ubuntu 13.10. The thing to note here is that it happens only with FAT-formatted disks; NTFS- and ext-formatted external USB disks work well (I tried formatting the same disk with ext4 and it worked). While mounting via Nautilus an error dialog appears (screenshot not included here). Error while mounting from the terminal:

root@shubham-pc:~# mount -t vfat /dev/sdc1 /media/shubham/n
mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

As suggested by the error, output from dmesg | tail:

root@shubham-pc:~# dmesg | tail
[ 3545.482598] scsi8 : usb-storage 1-1:1.0
[ 3546.481530] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.26 PQ: 0 ANSI: 5
[ 3546.482373] sd 8:0:0:0: Attached scsi generic sg3 type 0
[ 3546.483758] sd 8:0:0:0: [sdc] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
[ 3546.485254] sd 8:0:0:0: [sdc] Write Protect is off
[ 3546.485262] sd 8:0:0:0: [sdc] Mode Sense: 43 00 00 00
[ 3546.488314] sd 8:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[ 3546.499820] sdc: sdc1
[ 3546.503388] sd 8:0:0:0: [sdc] Attached SCSI removable disk
[ 3547.273396] FAT-fs (sdc1): IO charset iso8859-1 not found

Output from fsck.vfat:

root@shubham-pc:~# fsck.vfat /dev/sdc1
dosfsck 3.0.16, 01 Mar 2013, FAT32, LFN
/dev/sdc1: 1 files, 1/1949978 clusters

All normal. I tried re-creating the whole partition table and then formatting as FAT32, but to no avail, so the possibility of a corrupted drive is ruled out. I tried the same with around 4 disks and all show the same behaviour.

    Read the article


  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative [closed]

    - by Gabriel L. Oliveira
I have a global menu (including the application, administration and system tabs) that takes too much time (for me) to load - about 2.5 seconds. This time is taken only during the first start; after it has loaded, subsequent opens are much faster (less than 0.2 milliseconds). The menu used to take even more time (about 5 seconds), and I found that was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all).

I have a "normal" knowledge of programming, and I think the process of starting the menu for the first time involves some kind of "cache function" that tries to find which apps are present and need to be placed in the menu shown to the user. But I didn't find this function, so I could not analyze in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anything else). Also, I found that hitting Alt-F2 also fires this "cache function": after waiting for it to load, opening the menu took less than 0.2 milliseconds.

So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but found that most of my icons are already at 25x25 size. Any other ideas? Maybe loading it in a separate process, or including it in startup... I don't know.

PS: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think this process should be smoother than it is now. Also, thanks in advance!

    Read the article

  • Cannot reinstall MySql in 11.10 - ERROR: There's not enough space in /var/lib/mysql/

    - by Robin McCain
I've tried it all (removing all the packages associated with MySQL) but keep getting stuff like this:

Preconfiguring packages ...
(Reading database ... 142196 files and directories currently installed.)
Unpacking mysql-server-5.1 (from .../mysql-server-5.1_5.1.63-0ubuntu0.11.10.1_amd64.deb) ...
ERROR: There's not enough space in /var/lib/mysql/
dpkg: error processing /var/cache/apt/archives/mysql-server-5.1_5.1.63-0ubuntu0.11.10.1_amd64.deb (--unpack):
 subprocess new pre-installation script returned error exit status 1
Errors were encountered while processing:
 /var/cache/apt/archives/mysql-server-5.1_5.1.63-0ubuntu0.11.10.1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Here is my drive space map:

root@kyle:/# df
Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/mapper/kyle-root  59361428   59021768          0 100% /
udev                    1014052          8    1014044   1% /dev
tmpfs                    409304       1476     407828   1% /run
none                       5120          0       5120   0% /run/lock
none                    1023256          0    1023256   0% /run/shm
/dev/sda1                233191      46888     173862  22% /boot
/dev/md0             1922858288 1048513192  776669500  58% /media/array

The root volume actually only has about 10 gigabytes in use on the hard drive (which has a 60 gig partition). /dev/md0 is a 2 TB RAID array.

    Read the article
