Search Results


  • Need a solution to store images (1 billion / 1,000,000,000) which users will upload to a website via PHP or JavaScript upload

    - by wish_you_all_peace
    I need a solution to store images (1 billion) which users will upload to a website via PHP or JavaScript (the site will serve roughly 1 billion page views a month on Linux Debian distros), assuming a maximum of 20 photos per user: 10 thumbnails of 90px by 90px, and 10 large, script-resized images with a maximum width or height of 500px depending on the shape of the image (square, rectangular, horizontal, vertical, etc.). Assume this to be a LEMP-stack (Linux, Nginx, MySQL, PHP) social-media or social-matchmaking type application whose content will be text and images. Since everyone knows that storing tons of images (user-uploaded images, in this case) in a single directory, NFS share, etc. is bad practice, please explain the architecture and configuration of the entire storage setup you would recommend for 1 billion images (no third-party cloud storage like S3; it has to live in a private data center on our own hardware and resources). The solution has to cover both the storage itself and how the uploaded images are organized. How should we organize the users' images, given that a single user will never have more than 20 images (10 thumbs and 10 large, with either width or height capped at 500px)? Please consider that this has to be organized in a structured way so we can fetch a single user's images programmatically, via PHP/JavaScript or an API, through some kind of unique user identifier(s).
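
    For illustration (not from the original question): the usual answer to the "how do I organize a billion files" part is to fan user directories out across hash-derived buckets so no single directory ever holds more than a few thousand entries. A minimal sketch in C#, with hypothetical names; the same arithmetic ports directly to PHP:

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Text;

        static class ImagePaths
        {
            // Two hex levels = 256 * 256 = 65,536 buckets. At ~50M users
            // (1B images / 20 per user), that is roughly 760 user folders
            // per bucket -- comfortable for ext4/XFS directory lookups.
            public static string For(string userId, string fileName)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] h = md5.ComputeHash(Encoding.UTF8.GetBytes(userId));
                    string level1 = h[0].ToString("x2");   // e.g. "a3"
                    string level2 = h[1].ToString("x2");   // e.g. "7f"
                    return Path.Combine(level1, level2, userId, fileName);
                }
            }
        }

    Fetching all of a user's images is then just a directory listing of the user's folder, and the two hash levels tell you which storage node or mount owns that bucket.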

    Read the article

  • SSH main process ended

    - by Khaled
    I have a running Ubuntu server 10.04.1. When I tried to log in to the server via SSH, I could not; instead, I got a connection refused error. I tried to ping the machine and got a reply, so the obvious conclusion is that the SSH daemon was stopped. After a reboot, I was able to log in to my server via SSH again. Some time later, I looked at my logs in /var/log/syslog and found the following records:

        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2465) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2469) terminated with status 255
        ... (the same pair repeats for PIDs 2473, 2477, 2481, 2485, 2489, 2493, 2497 and 2501) ...
        Jan 16 10:57:09 myserver init: ssh respawning too fast, stopped

    I searched for a similar problem/solution. Some people said this is caused by the SSH daemon trying to start before networking, and they suggest changing ListenAddress in /etc/ssh/sshd_config to 0.0.0.0. I think this is not the cause in my case, because my problem occurs after the system is up and running. Any idea what is causing this? This is an Ubuntu server and it needs to stay running and accessible remotely over SSH.
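
    For what it's worth, exit status 255 usually means sshd failed during startup (bad config, missing host keys, or a failure to bind) rather than crashing mid-session. A few hedged first steps, assuming stock Ubuntu 10.04 paths:

        # validate sshd_config without starting the daemon
        sudo /usr/sbin/sshd -t

        # check that the host keys exist and are not zero-length
        ls -l /etc/ssh/ssh_host_*

        # regenerate missing host keys (Debian/Ubuntu)
        sudo dpkg-reconfigure openssh-server

        # then restart the upstart job and watch the log
        sudo start ssh
        tail -f /var/log/auth.log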

    Read the article

  • least privilege account for WinRM remote calls on Windows 2008 Server

    - by aldrin
    ServerFault Windows experts: please consider the following use case. I have two Windows 2008 Server SP2 boxes; let's call them SOURCE and CLIENT.

    1. On SOURCE: I create a new user called 'normal'. Just a plain user - no special privileges.
    2. On CLIENT: I run the following from a command prompt:
       winrm get wmi/root/cimv2/Win32_UTCTime -r:SOURCE -u:normal -p:NormalPassword
    3. I get an output containing: WSManFault: Message = Access is denied.
    4. On CLIENT: I repeat step 2 with the administrator identity, i.e.
       winrm get wmi/root/cimv2/Win32_UTCTime -r:SOURCE -u:Administrator -p:AdminPassword
    5. I get the current UTC time at SOURCE.

    The question is: what are the least privileges I need to assign to the user 'normal' to ensure that step 3 behaves like step 5? In other words, what's the least privilege needed to enable WinRM access for a non-admin account?
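
    One commonly suggested recipe on Server 2008 (a sketch, not a verified least-privilege proof): grant the user access in the WinRM root SDDL, then give them Remote Enable on the WMI namespace being queried.

        rem On SOURCE: opens a permissions dialog for the WinRM service;
        rem grant 'normal' Read and Execute there
        winrm configSDDL default

        rem Then in wmimgmt.msc (WMI Control -> Properties -> Security),
        rem grant 'normal' "Remote Enable" on the Root\CIMV2 namespace
        wmimgmt.msc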

    Read the article

  • Webby Nominations for Retail

    - by David Dorf
    The Webby Awards were created back in 1996, when the internet was just a baby. This is their 14th year of highlighting excellence on the Web, and there are lots of great nominations. It's quite amazing to see the rich content and interactivity of today's websites. Some interesting nominees for this year are: Sephora did a campaign at Christmas, and what remains of the Sephora Clause website is a bunch of wishes. The Starbucks "All you need is Love" campaign has lots of cool videos. The Sound Check from Walmart highlights rising music artists. Refinery29 has their fashion info hub. The five nominees in the retail category are: Bugaboo.com's website for selling high-end baby strollers. If you're looking for a high-end bag, check out Crumpler's Flash-based e-commerce site. It's highly interactive, but a little on the slow side. I Make My Case sells custom-designed iPod/iPhone and BlackBerry cases. At MOO.com, they love to print. Tons of art for printing customized business cards, post cards, etc. If you want light shoes, check out Puma L.I.F.T. and see just how little shoes can weigh. Check them out, cast a few votes, and see if you're inspired to create something even better.

    Read the article

  • How to Customize the File Open/Save Dialog Box in Windows

    - by Lori Kaufman
    Generally, there are two kinds of Open/Save dialog boxes in Windows. One kind looks like Windows Explorer, with the tree on the left containing Favorites, Libraries, Computer, etc. The other kind contains a vertical toolbar, called the Places Bar. The Windows Explorer-style Open/Save dialog box can be customized by adding your own folders to the Favorites list. You can then click the arrows to the left of the main items, except the Favorites, to collapse them, leaving only the list of default and custom Favorites. The Places Bar is located along the left side of the File Open/Save dialog box and contains buttons providing access to frequently-used folders. The default buttons on the Places Bar are links to Recent Places, Desktop, Libraries, Computer, and Network. However, you can change these links to point to custom folders of your choice. We will show you how to customize the Places Bar using the registry, and using a free tool in case you are not comfortable making changes in the registry.
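
    For reference, the registry location the article is describing sits under HKCU; a sketch of a .reg file (the folder paths are hypothetical; Place0 through Place4 each take either a numeric CSIDL value or a string path):

        Windows Registry Editor Version 5.00

        ; Custom Places Bar for the common Open/Save dialog
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar]
        ; 0 = Desktop (CSIDL value)
        "Place0"=dword:00000000
        "Place1"="C:\\Projects"
        "Place2"="D:\\Scans"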

    Read the article

  • WinRT WebView and Cookies

    - by javarg
    Turns out that the WebView control in WinRT is much more limited than its counterpart in WPF/Silverlight. There are some great articles out there on how to extend the control so that it supports navigation events and some other features. For a personal project I'm working on, I needed to grab the cookies a web site generated for the user. Basically, after a user authenticated to a web site, I needed to get the authentication cookies and generate some extra requests on her behalf. In order to do so, I've found this great article about a similar case using SharePoint and Azure ACS. The secret is to use p/invoke to call the native InternetGetCookieEx to get cookies for the current URL displayed in the WebView control.

        void WebView_LoadCompleted(object sender, NavigationEventArgs e)
        {
            var urlPattern = "http://someserver.com/somefolder";

            if (e.Uri.ToString().StartsWith(urlPattern))
            {
                var cookies = InternetGetCookieEx(e.Uri.ToString());

                // Do something with the cookies
            }
        }

        static string InternetGetCookieEx(string url)
        {
            uint sizeInBytes = 0;

            // Gets capacity length first
            InternetGetCookieEx(url, null, null, ref sizeInBytes, INTERNET_COOKIE_HTTPONLY, IntPtr.Zero);

            uint bufferCapacityInChars = (uint)Encoding.Unicode.GetMaxCharCount((int)sizeInBytes);

            // Now get cookie data
            var cookieData = new StringBuilder((int)bufferCapacityInChars);
            InternetGetCookieEx(url, null, cookieData, ref bufferCapacityInChars, INTERNET_COOKIE_HTTPONLY, IntPtr.Zero);

            return cookieData.ToString();
        }

    The function import using p/invoke follows:

        const int INTERNET_COOKIE_HTTPONLY = 0x00002000;

        [DllImport("wininet.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        static extern bool InternetGetCookieEx(string pchURL, string pchCookieName, StringBuilder pchCookieData, ref System.UInt32 pcchCookieData, int dwFlags, IntPtr lpReserved);

    Enjoy!

    Read the article

  • Need HP recovery partition info

    - by ggambett
    I'm configuring a new HP Pavilion DV4 with a 320 GB disk. I made the recovery DVDs, then did a couple of other things (including deleting the recovery partition), and finally decided to restore the system. Unfortunately, the recovery process fails; the three DVDs are read (the recovery program says "Reformatting the Windows partition" and "Copying files required to restore the hard drive") but after it finishes reading the 3rd, and the progress bar reaches 100%, it fails with error 0xe0f00013 - Googling it didn't return anything at all. I'm afraid this may be because I deleted the partitions. So, I'm kindly asking for one of the following, in order of preference, from an HP Pavilion DV4 with a 320 GB hard disk or a similar enough one: 1) A dump of the MBR. 2) The type and size of all the partitions in a "new" system, so I can try to make a partition table resembling the original one. BTW, I thought the recovery DVDs were supposed to work even if the entire disk was wiped - isn't that the case? Thanks!

    Read the article

  • CodePlex Daily Summary for Sunday, March 14, 2010

    CodePlex Daily Summary for Sunday, March 14, 2010

    New Projects
    - BeerMath.net: BeerMath.net lets brewers calculate expected values for their recipes. Written entirely in C#, it can be used in any .NET language.
    - Bible Study: This project involves creating software that provides the user with flexible and powerful tools for reading and studying...
    - E-Messenger: Detailed subject description: development of a thick-client application for instant messaging and internal file sharing at ESPRIT...
    - Facebook Azure Toolkit: The Facebook Azure Toolkit provides a flexible and scalable hosting platform for the smallest and largest of Facebook applications. This toolkit hel...
    - Gherkin editor: A simple text editor to write specifications using Gherkin. The editor supports code completion, syntax highlighting, spell checker and more.
    - Mydra Center: Mydra Center is a media center with the particularity of being very flexible, allowing developers to extend it and add new features. The philosophy be...
    - MyTwits - A rich Twitter client for Windows powered by WPF: MyTwits is a free Twitter client for Windows XP/Vista/7, powered by WPF, which gives you the freedom to tweet right from your desktop. You can do almost a...
    - na laborke: aaaaaaaaaaaaaaasssssssssssssssddddddddddddddddddddfffffffffffff
    - NMTools: The "Network Management Tools" (NMTools) complete OpenSLIM CMDB capabilities with Network Discovery, Automation and Configuration Managem...
    - orionSRO: This project aims to make a fully functional server.
    - Project Naduvar: Project Naduvar is a centralized locking service for distributed systems. You can use this service in any of your existing distributed applications. ...
    - Silverlight Input Keyboard: Silverlight Input Keyboard and Behaviors
    - uh: uh.py is a command line tool that helps developers porting native projects from a case-insensitive filesystem to a case-sensitive filesystem by sea...

    New Releases
    - AmiBroker Plug-ins with C#. A non official AmiBroker Plug-in SDK: AmiBroker Plug-in SDK v0.0.3: Small changes
    - AmiBroker Plug-ins with C#. A non official AmiBroker Plug-in SDK: AmiBroker Plug-in SDK v0.0.4: Small updates
    - AStyle AddIn for SharpDevelop (Alex): 2.0 Production: #D 3.* add-in with updated GUI elements.
    - Coding Cockerel code samples: Validation with ASP .NET MVC and jQuery: Code sample related to the following blog post: http://codingcockerel.co.uk/consistent-validation-with-asp-net-mvc-and-jquery/.
    - CoreSystem Library: Release - 1.0.3725.10575: This release contains a new class, Crypto, which makes encryption and decryption of strings easy; it uses TripleDES.
    - Crystal Mapper: Release - 2.0.3725.11614: This is a preview of release 2.0* that I promised; it contains the following new features: tracking dirty entities and a Save function to save all ...
    - Digital Media Processing Project 1: Image Processor: Image Processor Alpha: First release. Features include: Curve Adjustment Tool, Region Growing Segmentation, Threshold Segmentation, Gaussian/Butterworth High/Low pass filter...
    - Exepack.NET: Exepack.NET version 0.03 beta: Exepack.NET is an executable file compressor for the .NET Framework. It allows you to package your .NET application consisting of an executable file and sever...
    - Export code as Code Snippet - Addin for Visual Studio 2008/2010 RC: VS 2010 Release Candidate: This release targets Visual Studio 2010 Release Candidate. It includes full Visual Basic 2010 source code. Fixes already available in previous ve...
    - Facebook Azure Toolkit: 0.9 Beta: This is the initial beta release
    - Family Tree Analyzer: Version 1.0.5.0: Change the way Census & Individual report columns are sized so that the user can resize them later. Add filter to exclude individuals over...
    - Home Access Plus+: v3.1.2.1: Version 3.1.2.1 release change log: Added SSL SMTP; added SSL authentication. File changes: ~/bin/CHS Extranet.dll, ~/bin/CHS Extranet.pdb, ~/we...
    - Home Access Plus+: v3.1.3.1: Version 3.1.3.1 release change log: Fixed Help Desk. File changes: ~/bin/CHS Extranet.dll, ~/bin/CHS Extranet.pdb, ~/helpdesk/*.htm
    - IceChat: IceChat 2009 Alpha 11.6 Full Install: Full installer; installs IceChat 2009 and the emoticons, and will also download .NET Framework 2.0 if needed.
    - IceChat: IceChat 2009 Alpha 11.6 Simple Binaries: Simply the IceChat2009.exe and the IPluginIceChat.dll needed to run IceChat 2009. Not an installer; does not include emoticons.
    - IceChat: IceChat 2009 Alpha 11.6 Source Code: IceChat 2009 Alpha 11.6 source code
    - Lunar Phase Silverlight Gadget: Lunar Phase RC: Stable release. 6 languages. Auto refresh. Name/light problem fixed.
    - Miracle OS: Miracle OS Alpha 0.001: Our first release is the Alpha 0.001. Miracle OS doesn't work at all, but we are working on it. You too? Please help us.
    - MyTwits - A rich Twitter client for Windows powered by WPF: MyTwits BETA 1: I'm happy to release the first BETA version of MyTwits. Just download the zip file attached, run setup.exe, and you are done! If you've any problem...
    - MyTwits - A rich Twitter client for Windows powered by WPF: MyTwits Source BETA 1: I'm providing you just a project file; I'll upload the complete source code once I've fine-tuned the code.
    - NMock3: NMock3 - Beta 5, .NET 3.5: Highlights of this release: tutorials have been updated and are in a much better place now (they compile). Public API is getting locked down. Void me...
    - Project Naduvar: com.declum.naduvar.locking: First Release
    - QueryToGrid Module for DotNetNuke®: QueryToGrid Module version 01.00.01: This module is a proof of concept both for using AJAX in a DotNetNuke® module and for using SQL in a module. »»» IMPORTANT NOTE ««« Using this mo...
    - SCSI Interface for Multimedia and Block Devices: Release 10 - Almost like a commercial burner!!: I made many changes to the ISOBurn program in this version, making it much more user-friendly than before. You can now add, rename, and delete file...
    - Silverlight Input Keyboard: Initial Release: For more information see http://www.orktane.com/Blog/post/2009/11/09/Virtual-Input-Keyboard-Behaviours-for-Silverlight.aspx
    - The Silverlight Hyper Video Player [http://slhvp.com]: RC: The release candidate is now in place. Unfortunately, because there are aspects of it that I'm not yet ready to discuss, the code for the RC will...
    - twNowplaying: twNowplaying 1.0.0.3: Press the Twitter icon to get started; don't forget to submit bugs to the issue tracker. What's new: this release has some minor UI fixes.
    - uh: 1.0: This is the first stable release. It isn't super full-featured, but it does the basics.
    - UriTree: UriTree 2.0.0: This release is the WPF version of this application.
    - VCC: Latest build, v2.1.30313.0: Automatic drop of latest build
    - Vr30 OS: Blackra1n: The software was made by blackra1n for jailbreaking the iPhone and iPod touch. It is not a Vr30 OS Team project.
    - Vr30 OS: Vr30 Operating System Live Cd 1.0: The Linux operating system made by the team. For more information go to http://vr30os.tuxfamily.org
    - WatchersNET.TagCloud: WatchersNET.TagCloud 01.01.00: What's new: decide between tags generated from the search words or create your own tag list; custom tag list changes; small bugfixes.
    - Zeta Resource Editor: Source code release 2010-03-13: New sources, some small fixes.
    - ZipStorer - A Pure C# Class to Store Files in Zip: ZipStorer 2.35: Improved UTF-8 support. Correct writing of modification time for extracted files.

    Most Popular Projects
    - MetaSharp
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - ASP.NET Ajax Library
    - ASP.NET
    - Microsoft SQL Server Community & Samples

    Most Active Projects
    - Rawr
    - N2 CMS
    - BlogEngine.NET
    - patterns & practices – Enterprise Library
    - SharePoint Team-Mailer
    - Fasterflect - A Fast and Simple Reflection API
    - Caliburn: An Application Framework for WPF and Silverlight
    - jQuery Library for SharePoint Web Services
    - Calcium: A modular application toolset leveraging Prism
    - Farseer Physics Engine

    Read the article

  • Integrating WIF with WCF Data Services

    - by cibrax
    A while ago I discussed how a custom REST Starter Kit interceptor could be used to parse a SAML token in the HTTP Authorization header and wrap it into a ClaimsPrincipal that the WCF services could use. The thing is that the code was initially created for the Geneva framework, so it got deprecated quickly. I recently needed that piece of code for one of the projects I am currently working on, so I decided to update it for WIF. As this interceptor can be injected into any host for WCF REST services, it also represents an excellent solution for integrating claims-based security into WCF Data Services (previously known as ADO.NET Data Services). The interceptor basically expects a SAML token in the Authorization header. If a token is found, it is parsed and a new ClaimsPrincipal is initialized and injected into the WCF authorization context.

        public class SamlAuthenticationInterceptor : RequestInterceptor
        {
            SecurityTokenHandlerCollection handlers;

            public SamlAuthenticationInterceptor()
                : base(false)
            {
                this.handlers = FederatedAuthentication.ServiceConfiguration.SecurityTokenHandlers;
            }

            public override void ProcessRequest(ref RequestContext requestContext)
            {
                SecurityToken token = ExtractCredentials(requestContext.RequestMessage);

                if (token != null)
                {
                    ClaimsIdentityCollection claims = handlers.ValidateToken(token);
                    var principal = new ClaimsPrincipal(claims);
                    InitializeSecurityContext(requestContext.RequestMessage, principal);
                }
                else
                {
                    DenyAccess(ref requestContext);
                }
            }

            private void DenyAccess(ref RequestContext requestContext)
            {
                Message reply = Message.CreateMessage(MessageVersion.None, null);
                HttpResponseMessageProperty responseProperty = new HttpResponseMessageProperty() { StatusCode = HttpStatusCode.Unauthorized };
                responseProperty.Headers.Add("WWW-Authenticate",
                    String.Format("Basic realm=\"{0}\"", ""));
                reply.Properties[HttpResponseMessageProperty.Name] = responseProperty;
                requestContext.Reply(reply);
                requestContext = null;
            }

            private SecurityToken ExtractCredentials(Message requestMessage)
            {
                HttpRequestMessageProperty request = (HttpRequestMessageProperty)requestMessage.Properties[HttpRequestMessageProperty.Name];
                string authHeader = request.Headers["Authorization"];

                if (authHeader != null && authHeader.Contains("<saml"))
                {
                    XmlTextReader xmlReader = new XmlTextReader(new StringReader(authHeader));
                    var col = SecurityTokenHandlerCollection.CreateDefaultSecurityTokenHandlerCollection();
                    SecurityToken token = col.ReadToken(xmlReader);
                    return token;
                }

                return null;
            }

            private void InitializeSecurityContext(Message request, IPrincipal principal)
            {
                List<IAuthorizationPolicy> policies = new List<IAuthorizationPolicy>();
                policies.Add(new PrincipalAuthorizationPolicy(principal));
                ServiceSecurityContext securityContext = new ServiceSecurityContext(policies.AsReadOnly());

                if (request.Properties.Security != null)
                {
                    request.Properties.Security.ServiceSecurityContext = securityContext;
                }
                else
                {
                    request.Properties.Security = new SecurityMessageProperty() { ServiceSecurityContext = securityContext };
                }
            }

            class PrincipalAuthorizationPolicy : IAuthorizationPolicy
            {
                string id = Guid.NewGuid().ToString();
                IPrincipal user;

                public PrincipalAuthorizationPolicy(IPrincipal user)
                {
                    this.user = user;
                }

                public ClaimSet Issuer
                {
                    get { return ClaimSet.System; }
                }

                public string Id
                {
                    get { return this.id; }
                }

                public bool Evaluate(EvaluationContext evaluationContext, ref object state)
                {
                    evaluationContext.AddClaimSet(this, new DefaultClaimSet(System.IdentityModel.Claims.Claim.CreateNameClaim(user.Identity.Name)));
                    evaluationContext.Properties["Identities"] = new List<IIdentity>(new IIdentity[] { user.Identity });
                    evaluationContext.Properties["Principal"] = user;
                    return true;
                }
            }
        }

    A WCF Data Service, like any other WCF service, contains a service host where this interceptor can be injected. The following code illustrates how that can be done in the "svc" file.

        <%@ ServiceHost Language="C#" Debug="true" Service="ContactsDataService"
                        Factory="AppServiceHostFactory" %>

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activation;
        using Microsoft.ServiceModel.Web;

        class AppServiceHostFactory : ServiceHostFactory
        {
            protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
            {
                WebServiceHost2 result = new WebServiceHost2(serviceType, true, baseAddresses);
                result.Interceptors.Add(new SamlAuthenticationInterceptor());
                return result;
            }
        }

    WCF Data Services includes a specific WCF host out of the box (DataServiceHost). However, the service is not affected at all if you replace it with a custom one, as I am doing in the code above (WebServiceHost2 is part of the REST Starter Kit). Finally, the client application needs to pass the SAML token somehow to the data service. In case you are using any HTTP client library for consuming the data service, that's easy to do; you only need to include the SAML token as part of the "Authorization" header. If you are using the auto-generated data service proxy, a little piece of code is needed to inject a SAML token into the DataServiceContext instance. That class provides an event, "SendingRequest", that any client application can leverage to include custom code that modifies the HTTP request before it is sent to the service. So, you can easily create an extension method for the DataServiceContext that negotiates the SAML token with an existing STS and adds that token as part of the "Authorization" header.

        public static class DataServiceContextExtensions
        {
            public static void ConfigureFederatedCredentials(this DataServiceContext context, string baseStsAddress, string realm)
            {
                string address = string.Format(STSAddressFormat, baseStsAddress, realm);
                string token = NegotiateSecurityToken(address);

                context.SendingRequest += (source, args) =>
                {
                    args.RequestHeaders.Add("Authorization", token);
                };
            }

            private static string NegotiateSecurityToken(string address)
            {
                // Intentionally left unimplemented; depends on how you
                // negotiate tokens from your STS (see note below).
                throw new NotImplementedException();
            }
        }

    I left the NegotiateSecurityToken method empty for this extension, as it depends pretty much on how you are negotiating tokens from an existing STS. In case you want an end-to-end REST solution that involves an HTTP endpoint for the STS, you should definitely take a look at the Thinktecture starter STS project on CodePlex.
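
    Calling the extension from a client could then look something like this (ContactsDataContext is a hypothetical generated proxy; the STS address is whatever your NegotiateSecurityToken implementation expects):

        // hypothetical auto-generated proxy derived from DataServiceContext
        var context = new ContactsDataContext(new Uri("http://myserver/Contacts.svc"));

        // negotiate a SAML token once and attach it to every outgoing request
        context.ConfigureFederatedCredentials("https://sts.mycompany.com", "http://myserver/Contacts.svc");

        var contacts = context.Contacts.ToList();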

    Read the article

  • jQuery Templates and Data Linking (and Microsoft contributing to jQuery)

    - by ScottGu
    The jQuery library has a passionate community of developers, and it is now the most widely used JavaScript library on the web today. Two years ago I announced that Microsoft would begin offering product support for jQuery, and that we’d be including it in new versions of Visual Studio going forward. By default, when you create new ASP.NET Web Forms and ASP.NET MVC projects with VS 2010 you’ll find jQuery automatically added to your project. A few weeks ago during my second keynote at the MIX 2010 conference I announced that Microsoft would also begin contributing to the jQuery project.  During the talk, John Resig -- the creator of the jQuery library and leader of the jQuery developer team – talked a little about our participation and discussed an early prototype of a new client templating API for jQuery. In this blog post, I’m going to talk a little about how my team is starting to contribute to the jQuery project, and discuss some of the specific features that we are working on such as client-side templating and data linking (data-binding). Contributing to jQuery jQuery has a fantastic developer community, and a very open way to propose suggestions and make contributions.  Microsoft is following the same process to contribute to jQuery as any other member of the community. As an example, when working with the jQuery community to improve support for templating to jQuery my team followed the following steps: We created a proposal for templating and posted the proposal to the jQuery developer forum (http://forum.jquery.com/topic/jquery-templates-proposal and http://forum.jquery.com/topic/templating-syntax ). After receiving feedback on the forums, the jQuery team created a prototype for templating and posted the prototype at the Github code repository (http://github.com/jquery/jquery-tmpl ). We iterated on the prototype, creating a new fork on Github of the templating prototype, to suggest design improvements. Several other members of the community also provided design feedback by forking the templating code. There has been an amazing amount of participation by the jQuery community in response to the original templating proposal (over 100 posts in the jQuery forum), and the design of the templating proposal has evolved significantly based on community feedback. The jQuery team is the ultimate determiner on what happens with the templating proposal – they might include it in jQuery core, or make it an official plugin, or reject it entirely.  My team is excited to be able to participate in the open source process, and make suggestions and contributions the same way as any other member of the community. jQuery Template Support Client-side templates enable jQuery developers to easily generate and render HTML UI on the client.  Templates support a simple syntax that enables either developers or designers to declaratively specify the HTML they want to generate.  Developers can then programmatically invoke the templates on the client, and pass JavaScript objects to them to make the content rendered completely data driven.  These JavaScript objects can optionally be based on data retrieved from a server. Because the jQuery templating proposal is still evolving in response to community feedback, the final version might look very different than the version below. 
This blog post gives you a sense of how you can try out and use templating as it exists today (you can download the prototype by the jQuery core team at http://github.com/jquery/jquery-tmpl or the latest submission from my team at http://github.com/nje/jquery-tmpl).  jQuery Client Templates You create client-side jQuery templates by embedding content within a <script type="text/html"> tag.  For example, the HTML below contains a <div> template container, as well as a client-side jQuery “contactTemplate” template (within the <script type="text/html"> element) that can be used to dynamically display a list of contacts: The {{= name }} and {{= phone }} expressions are used within the contact template above to display the names and phone numbers of “contact” objects passed to the template. We can use the template to display either an array of JavaScript objects or a single object. The JavaScript code below demonstrates how you can render a JavaScript array of “contact” object using the above template. The render() method renders the data into a string and appends the string to the “contactContainer” DIV element: When the page is loaded, the list of contacts is rendered by the template.  All of this template rendering is happening on the client-side within the browser:   Templating Commands and Conditional Display Logic The current templating proposal supports a small set of template commands - including if, else, and each statements. The number of template commands was deliberately kept small to encourage people to place more complicated logic outside of their templates. Even this small set of template commands is very useful though. Imagine, for example, that each contact can have zero or more phone numbers. The contacts could be represented by the JavaScript array below: The template below demonstrates how you can use the if and each template commands to conditionally display and loop the phone numbers for each contact: If a contact has one or more phone numbers then each of the phone numbers is displayed by iterating through the phone numbers with the each template command: The jQuery team designed the template commands so that they are extensible. If you have a need for a new template command then you can easily add new template commands to the default set of commands. Support for Client Data-Linking The ASP.NET team recently submitted another proposal and prototype to the jQuery forums (http://forum.jquery.com/topic/proposal-for-adding-data-linking-to-jquery). This proposal describes a new feature named data linking. Data Linking enables you to link a property of one object to a property of another object - so that when one property changes the other property changes.  Data linking enables you to easily keep your UI and data objects synchronized within a page. If you are familiar with the concept of data-binding then you will be familiar with data linking (in the proposal, we call the feature data linking because jQuery already includes a bind() method that has nothing to do with data-binding). Imagine, for example, that you have a page with the following HTML <input> elements: The following JavaScript code links the two INPUT elements above to the properties of a JavaScript “contact” object that has a “name” and “phone” property: When you execute this code, the value of the first INPUT element (#name) is set to the value of the contact name property, and the value of the second INPUT element (#phone) is set to the value of the contact phone property. 
    The properties of the contact object and the properties of the INPUT elements are also linked – so that changes to one are also reflected in the other. Because the contact object is linked to the INPUT element, when you request the page, the values of the contact properties are displayed: More interestingly, the values of the linked INPUT elements will change automatically whenever you update the properties of the contact object they are linked to. For example, we could programmatically modify the properties of the “contact” object using the jQuery attr() method like below: Because our two INPUT elements are linked to the “contact” object, the INPUT element values will be updated automatically (without us having to write any code to modify the UI elements): Note that we updated the contact object above using the jQuery attr() method. In order for data linking to work, you must use jQuery methods to modify the property values. Two Way Linking The linkBoth() method enables two-way data linking. The contact object and INPUT elements are linked in both directions. When you modify the value of the INPUT element, the contact object is also updated automatically. For example, the following code adds a client-side JavaScript click handler to an HTML button element. When you click the button, the property values of the contact object are displayed using an alert() dialog: The following demonstrates what happens when you change the value of the Name INPUT element and click the Save button. Notice that the name property of the “contact” object that the INPUT element was linked to was updated automatically: The above example is obviously trivially simple.  Instead of displaying the new values of the contact object with a JavaScript alert, you can imagine instead calling a web-service to save the object to a database. The benefit of data linking is that it enables you to focus on your data and frees you from the mechanics of keeping your UI and data in sync. Converters The current data linking proposal also supports a feature called converters. A converter enables you to easily convert the value of a property during data linking. For example, imagine that you want to represent phone numbers in a standard way with the “contact” object phone property. In particular, you don’t want to include special characters such as ()- in the phone number - instead you only want digits and nothing else. In that case, you can wire up a converter to convert the value of an INPUT element into this format using the code below: Notice above how a converter function is being passed to the linkFrom() method used to link the phone property of the “contact” object with the value of the phone INPUT element. This converter function strips any non-numeric characters from the INPUT element before updating the phone property.  Now, if you enter the phone number (206) 555-9999 into the phone input field then the value 2065559999 is assigned to the phone property of the contact object: You can also use a converter in the opposite direction. For example, you can apply a standard phone format string when displaying a phone number from a phone property. Combining Templating and Data Linking Our goal in submitting these two proposals for templating and data linking is to make it easier to work with data when building websites and applications with jQuery. Templating makes it easier to display a list of database records retrieved from a database through an Ajax call. 
    Data linking makes it easier to keep the data and user interface in sync for update scenarios. Currently, we are working on an extension of the data linking proposal to support declarative data linking. We want to make it easy to take advantage of data linking when using a template to display data. For example, imagine that you are using the following template to display an array of product objects: Notice the {{link name}} and {{link price}} expressions. These expressions enable declarative data linking between the SPAN elements and properties of the product objects. The current jQuery templating prototype supports extending its syntax with custom template commands. In this case, we are extending the default templating syntax with a custom template command named “link”. The benefit of using data linking with the above template is that the SPAN elements will be automatically updated whenever the underlying “product” data is updated.  Declarative data linking also makes it easier to create edit and insert forms. For example, you could create a form for editing a product by using declarative data linking like this: Whenever you change the value of the INPUT elements in a template that uses declarative data linking, the underlying JavaScript data object is automatically updated. Instead of needing to write code to scrape the HTML form to get updated values, you can instead work with the underlying data directly – making your client-side code much cleaner and simpler. Downloading Working Code Examples of the Above Scenarios You can download this .zip file to get working code examples of the above scenarios.  The .zip file includes 4 static HTML pages: Listing1_Templating.htm – Illustrates basic templating. Listing2_TemplatingConditionals.htm – Illustrates templating with the use of the if and each template commands. Listing3_DataLinking.htm – Illustrates data linking. Listing4_Converters.htm – Illustrates using a converter with data linking. You can un-zip the file to the file-system and then run each page to see the concepts in action. Summary We are excited to be able to begin participating within the open-source jQuery project.  We’ve received lots of encouraging feedback in response to our first two proposals, and we will continue to actively contribute going forward.  These features will hopefully make it easier for all developers (including ASP.NET developers) to build great Ajax applications. Hope this helps, Scott P.S. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]
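
    The original post illustrated each step with screenshots that are not reproduced in this excerpt. As a rough sketch of the syntax it describes (assuming the prototype's render() API mentioned above, which may differ from whatever the jQuery team finally ships), the contact template example would look roughly like:

        <ul id="contactContainer"></ul>

        <script type="text/html" id="contactTemplate">
          <li>{{= name }}: {{= phone }}</li>
        </script>

        <script type="text/javascript">
          var contacts = [
            { name: "Scott", phone: "555-0100" },
            { name: "John",  phone: "555-0101" }
          ];
          // render() produces the markup for each contact; append it to the list
          var html = $("#contactTemplate").render(contacts);
          $("#contactContainer").append(html);
        </script>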

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations.  Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects.  By introducing parallelism to our declarative model, we add some extra complexity.  This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let’s begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ. LINQ to Objects is mainly built upon a single class: Enumerable.  The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>.  Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent-style interface.  This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ:

        double min = collection
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    Other LINQ variants work in a similar fashion.  For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database. PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable.  When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel().  This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ.  There is a similar ParallelQuery class for working with non-generic IEnumerable implementations. This brings us to our first subtle, but important, difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you’ll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>.  Instead of dealing with an interface, implemented by an unknown class, we’re dealing with a specific class type.  This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always “switch back” to an IEnumerable<T>.  The difference only arises at the beginning of our parallelization.  When we’re using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().  
    There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather on any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread-safe.  For example, the following is perfectly valid, and similar to our previous examples:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    However, if SomeOperation() is not thread-safe, we could just as easily do:

        double min = collection
            .Select(item => item.SomeOperation())
            .AsParallel()
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    In this case, we’re using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel. PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .AsEnumerable()
            .Min(item => item.PerformComputation());

    Here, we’re converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially. This could also be written as two statements, which would allow us to use the language-integrated syntax for the first portion:

        var tempCollection = from item in collection.AsParallel()
                             let e = item.SomeOperation()
                             where (e.SomeProperty > 6 && e.SomeProperty < 24)
                             select e;

        double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation());

    This allows us to use the standard LINQ-style language-integrated query syntax, but control whether it’s performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately. The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of the source collection. This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved. For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.  
    In LINQ to Objects, our code might look something like:

        // Using IEnumerable<SourceClass> collection
        IEnumerable<ResultClass> results = collection
            .Select(e => e.CreateResult())
            .Take(100);

    If we just converted this to a parallel query naively, like so:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .Select(e => e.CreateResult())
            .Take(100);

    we could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .AsOrdered()
            .Select(e => e.CreateResult())
            .Take(100);

    This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness. PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we’ve used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

    Read the article

  • I have bluescreen-phobia

    - by Charlie Somerville
    I'm scared of the Blue Screen of Death. Seriously. Whenever I hibernate or shutdown my computer, I have to turn the screen off in case it bluescreens during that process. Whenever I see a bluescreen, it makes my heart skip a beat and I jump a little - especially if it's on my own computer. I decided that this is getting ridiculous after I experienced a BSOD this evening. My computer booted back up and I decided to switch off the screen and leave the room. I came back about a minute later and it still hadn't got to the login screen. To find out what happened, I actually covered the screen with two A4 pages and gradually peeled them back after I saw that the screen wasn't blue. This is going way too far, but I can't help it. I am legitimately afraid of BSODs. Does anyone have any advice on how I can help myself?

    Read the article

  • OpenGL extension vs OpenGL core

    - by user209347
    I have a doubt: I'm writing a cross-platform OpenGL engine in C++. I figured out that Windows forces developers to access OpenGL features above 1.1 through extensions. Now the thing is, on Linux, I know that I can directly access functions if the version supports them, through glext.h and the OpenGL version. The problem is: if, on Linux, the core doesn't support a feature, is it possible that there is an extension that supports the same functionality, in my case vertex buffer objects? I'm doing something like this:

    Windows:

        #define glFunction functionpointer_to_the_extension

    Linux: Since glext.h already declares glFunction, I can write glFunction in client code and compile it on both Windows AND Linux without changing a single line in my client code using the engine (my goal). Now the thing is, I saw a tutorial use only the extension on Linux, without checking for the OpenGL implementation version. If the functionality is available in the core, is it also available as an extension (VBOs, e.g.)? Or is an extension something you never know is available? I want to write an engine that exploits all the possibilities of the hardware, so I need to check (on Linux) for extensions as well as the core version for possible functionality.
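
    To make the pattern concrete, here is a sketch of the usual check-then-fallback approach on Windows (glXGetProcAddressARB plays the same role on Linux; the typedef names come from glext.h, and the exact fallback behaviour is an assumption to verify against your driver). It must run with a current GL context:

        #include <windows.h>
        #include <string.h>
        #include <GL/gl.h>
        #include <GL/glext.h>  /* from the opengl.org registry */

        static PFNGLGENBUFFERSPROC pglGenBuffers = NULL;

        /* Try the core 1.5 entry point first, then the older ARB extension. */
        static int load_vbo(void)
        {
            const char *ext = (const char *)glGetString(GL_EXTENSIONS);

            pglGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffers");
            if (!pglGenBuffers && ext && strstr(ext, "GL_ARB_vertex_buffer_object"))
                pglGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress("glGenBuffersARB");

            return pglGenBuffers != NULL;  /* 0 = no VBO support at all */
        }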

    Read the article

  • How to use BCDEdit to dual boot Windows installations?

    - by Ian Boyd
    What are the bcdedit commands necessary to set up dual boot between different installations of Windows?5

    Background

    i recently installed Windows 8 onto a separate hard drive1. Now that Windows 8 is installed i want to dual-boot back to Windows 7. i have my two2 hard drives: So you can see that i have my two disks, with the partitions containing Windows:

    Windows 7: \\PhysicalDisk0 (partition 03)
    Windows 8: \\PhysicalDisk2 (partition 1)

    What i'm trying to figure out is how to use bcdedit to instruct the thing that boots Windows that there is another Windows installation out there. Running bcdedit now, it shows the current configuration:

        C:\WINDOWS\system32>bcdedit

        Windows Boot Manager
        --------------------
        identifier              {bootmgr}
        device                  partition=\Device\HarddiskVolume2
        description             Windows Boot Manager
        locale                  en-US
        inherit                 {globalsettings}
        integrityservices       Enable
        default                 {current}
        resumeobject            {ce153eb7-3786-11e2-87c0-e740e123299f}
        displayorder            {current}
        toolsdisplayorder       {memdiag}
        timeout                 30

        Windows Boot Loader
        -------------------
        identifier              {current}
        device                  partition=C:
        path                    \WINDOWS\system32\winload.exe
        description             Windows 8
        locale                  en-US
        inherit                 {bootloadersettings}
        recoverysequence        {ce153eb9-3786-11e2-87c0-e740e123299f}
        integrityservices       Enable
        recoveryenabled         Yes
        allowedinmemorysettings 0x15000075
        osdevice                partition=C:
        systemroot              \WINDOWS
        resumeobject            {ce153eb7-3786-11e2-87c0-e740e123299f}
        nx                      OptIn
        bootmenupolicy          Standard
        hypervisorlaunchtype    Auto

    i cannot find any documentation on the difference between a Windows Boot Manager and a Windows Boot Loader.

    Documentation

    There is some documentation on bcdedit:

    - Technet: Command Line Reference - Bcdedit
    - Technet: Windows Automated Installation Kit - BCDEdit Command Line Options
    - Whitepaper - BCDEdit Commands for Boot Environment (Word Document)

    But they don't explain how to edit the binary boot configuration data. If i had to guess, i would think that a Windows Boot Manager instructs the BIOS what program it should run. That program would give the user a set of boot choices. That leaves a Windows Boot Loader to be a particular boot choice, one that represents a particular installation of Windows. If that is the case i would need to create a new Windows Boot Loader entry. This means i might want to use the /create parameter:

        /create    Creates a new boot entry:
                   bcdedit [/store filename] /create [id] /d description
                           [/application apptype | /inherit [apptype] | /inherit DEVICE | /device]

    So i assume a syntax of:

        >bcdedit /create /d "The old Windows 7" /application osloader

    where application can be one of the following types:

        Apptype      Description
        BOOTSECTOR   The boot sector application
        OSLOADER     The Windows boot loader
        RESUME       A resume application

    Unfortunately, the only documentation about osloader is "The Windows boot loader". i don't see how that can differentiate between Windows 8 on one hard drive and Windows 7 on another. The other possible parameter when you /create a boot loader is:

        >bcdedit /create /D "Windows Vista" /device "The Quick Brown Fox"

    Unfortunately the documentation is missing for /device:

        /device    Optional. If id is not set to a well-known identifier, the option
                   that is used to specify the new boot entry as an additional device
                   options entry.

    Since i did not set id to a well-known identifier, i must set /device to "the option that is used to specify the new boot entry as an additional device options entry". i know all those words; they're all English. But i have no idea what it is saying; those words in that order seem nonsensical. 
    So i'm somewhat stymied. i don't want to be like Dan Stolts from Microsoft: I found no content that was particularly helpful when I hosed my machine by playing with BCDEdit. This post would have been ok if there was much more detail especially on the /set command OSDevice, etc. So once I got my machine fixed, I documented the solution and the information is here.... i mean, if a Microsoft guy can't even figure out how to use BCDEdit to edit his BCD, then what chance do i have?

    Bonus Reading

    - BCDEdit Command-Line Options
    - Bcdedit
    - Server 2008 R2 or Windows 7 System Will NOT Boot After Making Changes To Boot Manager Using BCDEdit
    - Visual BCD Editor4
    - Windows 7 and Windows 8 RTM Dual Boot Setup

    Footnotes

    1 Since the Windows 8 installer would have damaged my Windows 7 install, i decided to unplug my "main" hard drive during the install. Which is a long-winded explanation of why the Windows 8 installer didn't detect the existing Windows 7 install. Normally the installer would have automatically created the required entries for dual-boot. Not that the reason i'm asking the question is important.

    2 Really there are three drives, but the third is just bulk storage. The existence of a 3rd hard drive is irrelevant to the question. i only mention it in case someone wants to know why the screenshot has 3 hard drives when i only mention two.

    3 i arbitrarily started numbering partitions at "zero"; not to imply that partitions are numbered starting at zero. i only mention partitions because i don't see how any boot-loader could do its job without knowing which partition, and which folder, an installation of Windows is located in.

    4 i'm asking about BCDEdit. i tried Visual BCD Editor. It seems to be a visual BCD editor. That is to say that it's a GUI, but still uses the same terminology as BCDEdit, and requires the same knowledge that BCD doesn't document.

    5 For simplicity's sake we'll assume that all installations of Windows i want to dual-boot between are Windows Vista or later, making them all compatible with BCDEdit and the binary boot loader. The alternative would require delving into the intricacies of the old ntloader. Nor am i asking about dual booting to Linux, or how to boot to a Virtual Hard Drive (vhd) image. Just modern versions of Windows on existing hard drives in the same machine.

    Note: You can ignore everything after the word Background. It's all pointless exposition to satisfy some people's need for "research effort" before they'll consider being helpful. Some people have even been known to summarily close questions unless there is research effort. Some people have been known to close questions if there is too much research effort. Some people close questions when i put the note saying that they can ignore everything after the Background out of spite. Some people are just grumpy.
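
    For completeness, the approach that usually ends up being the answer (a hedged sketch: {guid} stands in for whatever GUID /copy prints, and D: stands in for wherever the Windows 7 partition is mounted) is to clone the working loader entry and re-point it:

        rem clone the existing Windows 8 loader entry
        bcdedit /copy {current} /d "Windows 7"

        rem re-point the clone at the other partition ({guid} is printed by /copy)
        bcdedit /set {guid} device partition=D:
        bcdedit /set {guid} osdevice partition=D:
        bcdedit /set {guid} path \Windows\system32\winload.exe
        bcdedit /set {guid} systemroot \Windows

        rem add it to the boot menu
        bcdedit /displayorder {guid} /addlast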

    Read the article

  • Modifying AD Schema permissions from the command line

    - by Ryan Roussel
    Recently while making some changes for a client, I accidentally dug myself into a pretty deep hole.  I was trying to explicitly deny a certain user from reading a few group policies, including the Default Domain Policy.  When I went in to make the change I accidentally denied Authenticated Users rather than the AD user object.  This of course made the GPO inaccessible to all users, including any with domain admin rights.  The policy could no longer be modified in the GPMC and, worse, changes could not be made through ADSIedit.  The errors I was getting from inside ADSIedit when trying to edit the container looked like this:

        This object has one or more property sheets currently open.
        Invalid path to object

    The only solution was to strip Authenticated Users from the container ACL completely in the schema, then re-add it back with the default read and apply rights.  To perform this action, I used a command I had never used before:  DSACLS.exe.  It's part of the DSMOD group of tools.  Since this command interacts with the actual schema, you have to know the full LDAP container or object name.  In this case, the GUID of the Default Domain Policy:

        {31B2F340-016D-11D2-945F-00C04FB984F9}

    The actual commands I ran looked like this. To display the current ACL of the container:

        c:\>dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /A

    To strip Authenticated Users from the ACL of the container:

        c:\>dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /R "NT Authority\Authenticated Users"

    For a full reference of the DSACLS.EXE command, visit: http://support.microsoft.com/kb/281146

    Once Authenticated Users was cleared from the ACL, I was able to use the Group Policy Management Console to reassign the default permissions.
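
    The read portion of the re-add can also be scripted with the same tool instead of GPMC; a hedged sketch (GR grants generic read; the "Apply Group Policy" right can then be reassigned in GPMC as described above):

        rem restore default read access for Authenticated Users (GR = generic read)
        dsacls "cn={31B2F340-016D-11D2-945F-00C04FB984F9},cn=Policies,cn=System,dc=domain,dc=com" /G "NT Authority\Authenticated Users:GR"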

    Read the article

  • HP D530 Startup Error: 512 - Chassis Fan Not Detected

    - by lyrikles
I'm using the HP D530 motherboard/CPU, which I installed in a new case with a 600W PSU. There was a problem with the onboard chassis fan connector (3-wire) not supplying sufficient power to the chassis fan, indicated by the fan spinning very slowly, but I never experienced the "512 Error" at boot. Also, the same fan works perfectly when connected directly to the PSU. I disconnected it, since I already have plenty of fans connected directly to the PSU. Since then, on startup, I get the error "512 - Chassis Fan Not Detected" and am asked to "Press F1 to continue". This gets quite annoying since I use this machine remotely (with FreeNAS). What could be causing the onboard fan connector to not supply enough power? If this cannot be corrected, how can I make the BIOS think there's a chassis fan plugged in without actually plugging a fan into the onboard connector? Would it be possible to jumper the pins without damaging the motherboard or PSU? Thanks, Erik

    Read the article

  • Question about API and Web application code sharing

    - by opendd
This is a design question. I have a multi-part application with several user types. There is a user client for the patient that interacts with a web service. Behind the web service, an API is evolving that will be exposed to institutional "users", along with an interface for clinicians, researchers, and admin types. The patient UI is Flex. The clinician/admin portion of the application is RoR. The API is RoR/Rack based. The web service component is Java WS. All components access the same data source. These components are deployed as separate components to their own subdomains; this decision was made to allow the components to be scaled individually as needed. Initially, I decided to split the code for the RoR web application from the RoR API, in the interests of security and of keeping the components focused on specific tasks. Over the course of time, there is necessarily going to be overlap, and I am second-guessing my decision to keep the code totally separate. I am noticing code from the admin side being lifted, modified, and used in the API. This being the case, I have been considering merging the Ruby-based repositories. I am interested in ideas and insight on this situation, along with the reasoning behind your thoughts. Thanks.

    Read the article

  • Sputnik – Google's JavaScript conformance tester, now a website

    - by samsudeen
Sputnik, the ECMAScript 3 conformance test suite that Google launched last year, is now available as a Google Labs product (Sputnik Test). You can browse it like any other website and run over 5,000 JavaScript function tests to check your browser's compatibility. The product gives you the following options:
Run: run the complete test suite in your browser to check its compliance with the ECMA-262 standard, and browse through the failed test cases.
Compare: compare the JavaScript conformance of the various leading browsers on the market.
Web developers can now be more careful while designing websites to make them compatible with multiple browsers. Google is committed to reviewing and releasing new versions periodically with updated test cases. According to the latest Sputnik test results released by Google, Opera 10.5 leads the race with only 78 failures, while Microsoft's IE 8 performed the worst (463 failures). After Opera come Apple's Safari (159 failures), Google's Chrome (218 failures), and Mozilla's Firefox (259 failures). Though the tests are about conformance, not performance, it is notable that three of the market-leading browsers are among the last three in conformance, which is something they have to improve.

    Read the article

  • SharePoint 2010 MSDN Labs

    - by Kelly Jones
Eric Ligman, from Microsoft, posted a great blog post this week listing all of the SharePoint 2010 Virtual Labs that are available from Microsoft. His blog entry is here: http://blogs.msdn.com/b/mssmallbiz/archive/2012/03/13/sharepoint-server-2010-msdn-virtual-labs-available-to-you-online-plus-more-sharepoint-2010-resources.aspx He also posted other resources. I've copied his Virtual Lab links here:

SharePoint Server 2010 Virtual Labs
MSDN Virtual Lab: SharePoint Server 2010: Introduction
MSDN Virtual Lab: Getting Started with SharePoint 2010
MSDN Virtual Lab: SharePoint 2010 User Interface Advancements
MSDN Virtual Lab: SharePoint Server 2010 Connectors & Using the Business Data Connectivity (BDC) Service
MSDN Virtual Lab: SharePoint Server 2010: Advanced Search Security
MSDN Virtual Lab: SharePoint Server 2010: Configuring Search UIs
MSDN Virtual Lab: SharePoint Server 2010: Content Processing and Property Extraction
MSDN Virtual Lab: SharePoint Server 2010: Developing a Custom Connector
MSDN Virtual Lab: SharePoint Server 2010: Fast Search Web Crawler
MSDN Virtual Lab: SharePoint Server 2010: Federated Search
MSDN Virtual Lab: SharePoint Server 2010: Linguistics
MSDN Virtual Lab: SharePoint Server 2010: People Search Administration and Management
MSDN Virtual Lab: SharePoint Server 2010: Relevancy and Ranking
MSDN Virtual Lab: Customizing MySites
MSDN Virtual Lab: Designing Lists and Schemas
MSDN Virtual Lab: Developing a BCS External Content Type with Visual Studio 2010
MSDN Virtual Lab: Developing a Sandboxed Solution with Web Parts
MSDN Virtual Lab: Developing a Visual Web Part in Visual Studio 2010
MSDN Virtual Lab: Developing Business Intelligence Applications
MSDN Virtual Lab: Enterprise Content Management
MSDN Virtual Lab: LINQ to SharePoint 2010
MSDN Virtual Lab: Visual Studio SharePoint Tools
MSDN Virtual Lab: Workflow

In addition to the SharePoint Server 2010 Virtual Labs, here are a few other SharePoint 2010 resources that I thought you might also be interested in:

Technical reference for Microsoft SharePoint Server 2010
SharePoint 2010: IT Pro Evaluation Guide
Connecting SharePoint 2010 to Line-of-Business Systems to Deliver Business-Critical Solutions
Configure SharePoint Server 2010 as a Single Server with Microsoft SQL Server: Test Lab Guide
Microsoft SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Technologies 2010
Deploying FAST Search Server 2010 for SharePoint
FAST Search Server 2010 for SharePoint Add or Remove an Index Column
Upgrade worksheet for SharePoint Server 2010
Microsoft SharePoint Server 2010 Technical Library in Compiled Help format
Microsoft SharePoint Foundation 2010 Technical Library in Compiled Help format
Microsoft FAST Search Server 2010 for SharePoint Technical Library in Compiled Help format
Microsoft Reseller partner Learning Path
Microsoft solutions partners and ISVs Learning Path
Microsoft partner Practice Accelerator for SharePoint
Microsoft partner SharePoint 2010 Internal Use Licenses
SharePoint Case Studies
SharePoint MSDN Forums
SharePoint TechNet Forums
Microsoft SharePoint 2010 page on Microsoft Partner Network portal

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
Many algorithms are easily written to work via recursion.  For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively "walking" the tree.  Some algorithms work this way on flat data structures, such as arrays, as well.  This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily.

Recursive algorithms, especially ones based on a form of divide and conquer, are often a very good candidate for parallelization.  This is apparent from a common sense standpoint.  Since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme.  Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

Implementing this type of algorithm is fairly simple.  The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke.  This method works by taking any number of delegates defined as an Action, and operating them all in parallel.  The method returns when every delegate has completed:

Parallel.Invoke(
    () =>
    {
        Console.WriteLine("Action 1 executing in thread {0}",
            Thread.CurrentThread.ManagedThreadId);
    },
    () =>
    {
        Console.WriteLine("Action 2 executing in thread {0}",
            Thread.CurrentThread.ManagedThreadId);
    },
    () =>
    {
        Console.WriteLine("Action 3 executing in thread {0}",
            Thread.CurrentThread.ManagedThreadId);
    });

Running this simple example demonstrates the ease of using this method.  For example, on my system, I get three separate thread IDs when running the above code.  By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer.  We can divide our work in each step, and execute each task in parallel, recursively.

For example, suppose we wanted to implement our own quicksort routine.  The quicksort algorithm can be designed based on divide and conquer.  In each iteration, we pick a pivot point, and use that to partition the total array.  We swap the elements around the pivot, then recursively sort the lists on each side of the pivot.
For example, let's look at this simple, sequential implementation of quicksort:

public static void QuickSort<T>(T[] array) where T : IComparable<T>
{
    QuickSortInternal(array, 0, array.Length - 1);
}

private static void QuickSortInternal<T>(T[] array, int left, int right) where T : IComparable<T>
{
    if (left >= right)
    {
        return;
    }

    SwapElements(array, left, (left + right) / 2);

    int last = left;
    for (int current = left + 1; current <= right; ++current)
    {
        if (array[current].CompareTo(array[left]) < 0)
        {
            ++last;
            SwapElements(array, last, current);
        }
    }

    SwapElements(array, left, last);

    QuickSortInternal(array, left, last - 1);
    QuickSortInternal(array, last + 1, right);
}

static void SwapElements<T>(T[] array, int i, int j)
{
    T temp = array[i];
    array[i] = array[j];
    array[j] = temp;
}

Here, we implement the quicksort algorithm in a very common, divide and conquer approach.  Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster).  On my system, for example, I can use the framework's sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

Looking at this routine, though, there is a clear opportunity to parallelize.  At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen.  This can be rewritten to use Parallel.Invoke by simply changing it to:

    // Code above is unchanged...
    SwapElements(array, left, last);

    Parallel.Invoke(
        () => QuickSortInternal(array, left, last - 1),
        () => QuickSortInternal(array, last + 1, right));
}

This routine will now run in parallel.  When executing, we now see the CPU usage across all cores spike while it executes.  However, there is a significant problem here: by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds!  We're using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

This occurs because parallelization adds overhead.  Each time we split this array, we spawn two new tasks to parallelize this algorithm!  This is far, far too many tasks for our cores to operate upon at a single time.  In effect, we're "over-parallelizing" this routine.  This is a common problem when working with divide and conquer algorithms, and leads to an important observation:

When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

This can be done with a few different approaches, in this case.  Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach.  Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will be running in parallel, and can take full advantage of the machine.  This also dramatically reduces the overhead added by parallelizing, since we're only adding overhead for the first few recursive calls.  There are two basic approaches we can take here.  The first approach would be to look at the total work size, and if it's smaller than a specific threshold, revert to our serial implementation.  In this case, we could just check right - left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke.
The second approach is to track how "deep" in the "tree" we currently are, and if we are below some number of levels, stop parallelizing.  This is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but it may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

This can be written very easily.  If we pass a maxDepth parameter into our internal routine, we can restrict the number of times we parallelize by changing the recursive call to:

    // Code above is unchanged...
    SwapElements(array, left, last);

    if (maxDepth < 1)
    {
        QuickSortInternal(array, left, last - 1, maxDepth);
        QuickSortInternal(array, last + 1, right, maxDepth);
    }
    else
    {
        --maxDepth;
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1, maxDepth),
            () => QuickSortInternal(array, last + 1, right, maxDepth));
    }
}

We no longer allow this to parallelize indefinitely; it parallelizes only to a specific depth, at which time we revert to a serial implementation.  By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core.

With this final change, my timings are much better.  On average, I get the following timings:

Framework via Array.Sort: 7.3 seconds
Serial Quicksort Implementation: 9.3 seconds
Naive Parallel Implementation: 14 seconds
Parallel Implementation Restricting Depth: 4.7 seconds

Finally, we are now faster than the framework's Array.Sort implementation.

    Read the article

  • Service-Oriented Architecture and Web Services

Service-oriented architecture is an architectural model for developing distributed systems across a network or the Internet. The main goal of this model is to create a collection of sub-systems that function as one unified system. This approach allows applications to work within the context of a client-server relationship, much like a web browser interacting with a web server: a client application requests an action to be performed on a server application, and the results are returned to the requesting client. It is important to note that the primary implementation of service-oriented architecture is through the use of web services. Web services are components of a remote application exposed over a network. Typically, web services communicate over the HTTP and HTTPS protocols, which are also the standard protocols for accessing web pages on the Internet. These exposed components are self-contained and self-describing. Because of this independence, web services can be called by any application, as long as they can be accessed via the network. Web services allow for a lot of flexibility when connecting two distinct systems, because the service works independently from the client. For example, a web service built with Java in a UNIX environment will have no problem handling requests from a C# application in a Windows environment, because the systems communicate over an open protocol supported by both environments. Additionally, web services can be discovered by using UDDI.

References:
Colan, M. (2004). Service-Oriented Architecture expands the vision of web services, Part 1. Retrieved on August 21, 2011 from http://www.ibm.com/developerworks/library/ws-soaintro/index.html
W3Schools.com. (2011). Web Services Introduction - What is Web Services. Retrieved on August 21, 2011 from http://www.w3schools.com/webservices/ws_intro.asp
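To make the client-server relationship concrete, here is a minimal sketch of a client calling a web service over HTTP. The endpoint URL and the op parameter are placeholders, not a real service; any HTTP-reachable web service is consumed the same way:

using System;
using System.IO;
using System.Net;

class WebServiceClient
{
    static void Main()
    {
        // Placeholder endpoint; the client only needs network access to the service.
        HttpWebRequest request =
            (HttpWebRequest)WebRequest.Create("http://example.com/service?op=GetQuote");
        request.Method = "GET";

        using (WebResponse response = request.GetResponse())
        using (StreamReader reader = new StreamReader(response.GetResponseStream()))
        {
            // The client depends only on the returned payload (typically XML),
            // not on the language or platform the server is implemented in.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}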

    Read the article

  • SQL SERVER – Information Related to DATETIME and DATETIME2

    - by pinaldave
I recently received an interesting comment on the blog regarding a workaround to overcome the precision issue when dealing with DATETIME and DATETIME2. I have written about this subject earlier over here.

SQL SERVER – Difference Between GETDATE and SYSDATETIME
SQL SERVER – Difference Between DATETIME and DATETIME2 – WITH GETDATE
SQL SERVER – Difference Between DATETIME and DATETIME2

SQL expert Jing Sheng Zhong left the following comment: The issue you found in the SQL Server new datetime type is related to time source function precision. Folks have found the root reason of the problem: when datetime values are converted (implicitly or explicitly) between different data types, some precision is lost, so the results cannot match each other as expected. Here I would like to give a workaround to solve the problem the developers met.

-- Declare and loop
DECLARE @Intveral INT, @CurDate DATETIMEOFFSET;
CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2, GlobalDate DATETIMEOFFSET)
SET @Intveral = 10000
WHILE (@Intveral > 0)
BEGIN
----SET @CurDate = SYSDATETIMEOFFSET(); -- higher precision for future use only
SET @CurDate = TODATETIMEOFFSET(GETDATE(), DATEDIFF(N, GETUTCDATE(), GETDATE())); -- lower precision to match the existing date process
INSERT #TimeTable (FirstDate, LastDate, GlobalDate)
VALUES (@CurDate, @CurDate, @CurDate)
SET @Intveral = @Intveral - 1
END
GO

-- Distinct values
SELECT COUNT(DISTINCT FirstDate) D_DATETIME,
    COUNT(DISTINCT LastDate) D_DATETIME2,
    COUNT(DISTINCT GlobalDate) D_SYSGETDATE
FROM #TimeTable
GO

-- Join
SELECT DISTINCT a.FirstDate, b.LastDate, b.GlobalDate,
    CAST(b.GlobalDate AS DATETIME) GlobalDateASDateTime
FROM #TimeTable a
INNER JOIN #TimeTable b ON a.FirstDate = CAST(b.GlobalDate AS DATETIME)
GO

-- Select
SELECT * FROM #TimeTable
GO

-- Clean up
DROP TABLE #TimeTable
GO

If you read my blog SQL SERVER – Difference Between DATETIME and DATETIME2 you will notice that I have achieved the same using GETDATE(). Are you using DATETIME2 in your production environment? If yes, I am interested to know the use case.

Reference: Pinal Dave (http://www.SQLAuthority.com)

    Read the article

  • Create and Consume WCF service using Visual Studio 2010

    - by sreejukg
In this article I am going to demonstrate how to create a WCF service that can be hosted inside IIS, and a Windows application that consumes the WCF service. To support service-oriented architecture, Microsoft developed the programming model named Windows Communication Foundation (WCF). ASMX, the prior technology from Microsoft, was completely based on XML, and the .NET Framework continues to support ASMX web services in future versions as well. While ASMX web services were the first step towards service-oriented architecture, Microsoft made a big step forward by introducing WCF. An overview of planning for WCF can be found at this link: http://msdn.microsoft.com/en-us/library/ff649584.aspx. The following are the important differences between WCF and ASMX from an ASP.NET developer's point of view:

1. ASMX web services are easy to write, configure and consume.
2. ASMX web services can only be hosted in IIS.
3. ASMX web services can only use HTTP.
4. WCF can be hosted inside IIS, a Windows service, a console application, WAS (Windows Process Activation Service), etc.
5. WCF can be used with HTTP, TCP/IP, MSMQ and other protocols.

The detailed differences between ASMX web services and WCF can be found here: http://msdn.microsoft.com/en-us/library/cc304771.aspx. Though WCF is the bigger step toward the future, Visual Studio makes it simple to create, publish and consume a WCF service. In this demonstration, I am going to create a service named SayHello that accepts two parameters, a name and a language code. The service will return a hello to the user name that corresponds to the language. So the proposed service usage is as follows:

Caller: SayHello("Sreeju", "en") -> return value -> Hello Sreeju
Caller: SayHello("???", "ar") -> return value -> ????? ???
Caller: SayHello("Sreeju", "es") -> return value -> Hola Sreeju

Note: calling an automated translation service is not the intention of this article. If you are interested, you can find the Bing Translator API and use it in your application: http://www.microsofttranslator.com/dev/

So let us start. First I am going to create a service application that offers the SayHello service. Open Visual Studio 2010, go to File -> New Project, select WCF under your preferred language in the templates section, select WCF Service Application as the project type, give the project a name (I named it HelloService), and click OK so that Visual Studio creates the project for you. In this demonstration, I have used C# as the programming language. Visual Studio will create the necessary files for you to start with. By default it will create a service named Service1.svc along with an interface named IService1.cs. The screenshot for the project in Solution Explorer is as follows.

Since I want to demonstrate how to create a new service, I deleted the Service1.svc and IService1.cs files from the project by right-clicking each file and selecting Delete. Now there is no service available in the project, so I am going to create one. From Solution Explorer, right-click the project and select Add -> New Item. The Add New Item dialog will appear. Select WCF Service from the list, give it the name HelloService.svc, and click the Add button. Visual Studio will create two files, named IHelloService.cs and HelloService.svc. These files are basically the service definition (IHelloService.cs) and the service implementation (HelloService.svc). Let us examine the IHelloService interface.
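The generated service definition (shown in the original article as a screenshot) is essentially the following. This is a reconstruction from the article's description, not the article's own listing, but it matches the standard Visual Studio 2010 WCF Service item template:

using System.ServiceModel;

[ServiceContract]
public interface IHelloService
{
    [OperationContract]
    void DoWork();
}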
The code states that IHelloService is the service definition and that it provides an operation/method (similar to a WebMethod in ASMX web services) named DoWork(). Any WCF service will have a definition file as an interface that defines the service. Let us see what is inside HelloService.svc: the code implements the interface IHelloService. The code is self-explanatory; the HelloService class needs to implement all the methods defined in the service definition.

Let me build the service as I require. Open IHelloService.cs in Visual Studio, delete the DoWork() method, and add a definition for SayHello(); do not forget to add the OperationContract attribute to the method. The modified IHelloService.cs will look as follows. Now implement the SayHello method in the HelloService.svc.cs file; here I wrote the code for the SayHello method as follows.

I am done with the service. Now you can build and run the service by pressing F5 (or selecting Start Debugging from the Debug menu). Visual Studio will host the service and give you a client to test it. The screenshot is as follows: the left pane shows the services available on the server, and on the right side you can invoke the service. To test the SayHello service, double-click on it in that window, enter the parameters, and click the Invoke button. See a sample output below.

Now I am done with the service; the next step is to write a service client. Creating a consumer application involves two steps: first, generate the class and configuration file that correspond to the service; second, create a project that uses the generated class and configuration file.

First I am going to generate the class and configuration file. There is a great tool available with Visual Studio named svcutil.exe; this tool will create the necessary class and configuration files for you. Read the documentation for svcutil.exe here: http://msdn.microsoft.com/en-us/library/aa347733.aspx. Open the Visual Studio command prompt; you can find it under Start Menu -> All Programs -> Visual Studio 2010 -> Visual Studio Tools -> Visual Studio Command Prompt. Make sure the service is running in Visual Studio, and note the URL for the service (from the running window, you can right-click and choose Copy Address). Now, from the command prompt, run the svcutil.exe command, passing the service URL and the /d switch for the directory in which to store the output files (in this case, d:\temp). If you are using the Windows drive (in my case it is C:), make sure you open the command prompt with the Run as Administrator option; otherwise you will get a permission error (only in Windows 7 or Windows Vista). The tool creates two files, HelloService.cs and output.config.

Now the next step is to create a new project and use the created files to consume the service. Let us do that now. I am going to add a console application to the current solution: right-click the solution name in Solution Explorer and select Add -> New Project. Under Visual C#, select Console Application and give the project a name (I named it TestService). Now navigate to d:\temp, where the files were generated with svcutil.exe, and rename output.config to app.config. The next step is to add both files (d:\temp\HelloService.cs and app.config) to the project: in Solution Explorer, right-click the project, select Add -> Add Existing Item, browse to the d:\temp folder, select the two files mentioned before, and click the Add button. Now you need to add a reference to System.ServiceModel to the project.
From Solution Explorer, right-click References under the TestService project and select Add Reference. In the Add Reference dialog, select the .NET tab, select System.ServiceModel, and click OK. Now open Program.cs by double-clicking it, and add the code that consumes the web service to the Main method. The modified file looks as follows. Right-click the TestService project and set it as the startup project, then press F5 to run it. See the sample output as follows.

Publishing a WCF service under IIS is similar to publishing an ASP.NET application: publish the application to a folder using the Visual Studio publishing feature, create a virtual directory, and configure it as an application. Don't forget to set the application pool to use ASP.NET version 4. One last thing you need to check is the app.config file you added to the solution. Look at the client element under the serviceModel element: there is an endpoint element with an address attribute that points to the published service URL. If you permanently host the service under IIS, you can simply change the address attribute to the corresponding URL and your application will consume the service.

You have seen how easily you can build and consume a WCF service. If you need the solution in zipped format, please post your email below.
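For reference, the consuming code added to Main (shown only as a screenshot in the original) would look roughly like the sketch below. HelloServiceClient is the proxy class name svcutil.exe generates by default for the IHelloService contract; verify it against the generated HelloService.cs file:

using System;

class Program
{
    static void Main(string[] args)
    {
        // The proxy class and its endpoint configuration come from the
        // generated HelloService.cs and app.config files.
        HelloServiceClient client = new HelloServiceClient();

        Console.WriteLine(client.SayHello("Sreeju", "en")); // Hello Sreeju
        Console.WriteLine(client.SayHello("Sreeju", "es")); // Hola Sreeju

        // Close the proxy to shut down the channel cleanly.
        client.Close();
    }
}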

    Read the article

  • December release of Microsoft All-In-One Code Framework is available now.

    - by Jialiang
The code samples in Microsoft All-In-One Code Framework were updated on 2010-12-13.

Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534

Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.)

If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on YouTube http://www.youtube.com/watch?v=cO5Li3APU58, read the introduction on our homepage http://1code.codeplex.com/, and see this Port25 article http://port25.technet.com/archive/2010/01/18/the-all-in-one-code-framework.aspx.

--------------

New ASP.NET Code Samples

VBASPNETAJAXWebChat and CSASPNETAJAXWebChat
Most of you have some experience chatting with friends on the web, so you may want to know how to make a web chat application; it seems quite complicated, but ASP.NET gives you the power to build a chat room easily. In this code sample, we construct our own web chat room with AJAX. The principle is relatively simple: a basic chat application needs four basic controls: one list control to show the chat room members, one list control to show the message list, one textbox to input messages, and one button to send them. The user first inputs his message in the textbox and then presses the Send button, which sends the message to the server. The message list updates every 2 seconds to get the newest message list in the chat room from the server. Note that it is hard to make an AJAX web chat application behave like a Windows Forms application, because we cannot keep the connection open after a web request has ended, so many events that communicate between the client side and the server side cannot be realized. The common workaround is to make web requests every few seconds to check whether the server side has been updated. Another technique, called COMET, makes this possible, but it is different from AJAX and will not be covered in detail here; for more details about COMET, see the Reference.

CSASPNETCurrentOnlineUserList and VBASPNETCurrentOnlineUserList
This sample demos a system that needs to display a list of current online users' information. As a matter of fact, the Membership.GetNumberOfUsersOnline method can get the number of online users, and there is a convenient way to check whether a user is online by using the Membership.GetUser(string userName).IsOnline property; however, many ASP.NET projects do not use membership, so this sample shows how to display a list of current online users' information without using the membership provider. It is not difficult to check whether a user is online by using session state. Many projects tend to use the "Session_End" event to mark a user as "offline"; however, that may not be a good idea, because it cannot detect the user's status accurately. In addition, the "Session_End" event is only available in the "InProc" session mode; if you are storing session state in the State Server or SQL Server, the "Session_End" event will never fire. To handle this issue, we need to save each user's online status in a global DataTable or a database.
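A minimal sketch of that global-DataTable idea follows. All names (OnlineUserTracker, Touch, RemoveStaleUsers) and the timeout value are illustrative assumptions, not taken from the actual sample:

using System;
using System.Data;

public static class OnlineUserTracker
{
    private static readonly DataTable OnlineUsers = CreateTable();
    private static readonly TimeSpan Timeout = TimeSpan.FromMinutes(1);

    private static DataTable CreateTable()
    {
        DataTable table = new DataTable("OnlineUsers");
        table.Columns.Add("UserName", typeof(string));
        table.Columns.Add("LastActiveTime", typeof(DateTime));
        table.PrimaryKey = new[] { table.Columns["UserName"] };
        return table;
    }

    // Called from the login page, and again by every XmlHttpRequest heartbeat.
    public static void Touch(string userName)
    {
        lock (OnlineUsers)
        {
            DataRow row = OnlineUsers.Rows.Find(userName);
            if (row == null)
            {
                OnlineUsers.Rows.Add(userName, DateTime.UtcNow);
            }
            else
            {
                row["LastActiveTime"] = DateTime.UtcNow;
            }
        }
    }

    // Removes users whose last heartbeat is older than the timeout, which
    // also catches users who closed the page without signing out.
    public static void RemoveStaleUsers()
    {
        lock (OnlineUsers)
        {
            foreach (DataRow row in OnlineUsers.Select())
            {
                if (DateTime.UtcNow - (DateTime)row["LastActiveTime"] > Timeout)
                {
                    row.Delete();
                }
            }
            OnlineUsers.AcceptChanges();
        }
    }
}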
In the sample application, we define a global DataTable to store the information of online users, and use XmlHttpRequest in the pages to update and check each user's last active time at intervals and also retrieve how many users are still online. The sample project can automatically delete offline users' information from the global DataTable by checking users' last active time. A step-by-step guide illustrating how to display a list of current online users' information without using the membership provider:

1. Login page. Let the user sign in, and add the current user's information to the global DataTable while initializing the global DataTable used to store information about current online users.
2. Current online user list page. Use XmlHttpRequest in this page to update and check the user's last active time at intervals, and also retrieve information on how many users are still online.
3. If the user closes the page without clicking the sign-out link button, the sample project automatically marks the user as offline and deletes the offline user's information from the global DataTable by checking users' last active time. Then the current online user list will be like this:

CSASPNETIPtoLocation
This sample demonstrates how to find the geographical location from an IP address. As we know, it is not hard to get the IP address of visitors via the Request.ServerVariables property, but it is really difficult to know where they come from. To achieve this, the sample uses a free third-party web service from http://freegeoip.appspot.com/, which returns information about an IP address we send to the server in the format of XML, JSON or CSV. It makes all things easier.

CSASPNETBackgroundWorker
Sometimes we perform an operation which needs a long time to complete. It blocks the response, and the page stays blank until the operation finishes. In this case, we want the operation to run in the background, and in the page we want to display the progress of the running operation, so the user knows the operation is running and can see its progress.

CSASPNETInheritingFromTreeNode
In the Windows Forms TreeView, each tree node has a property called "Tag" which can be used to store a custom object. Many customers want to implement the same Tag feature in the ASP.NET TreeView. This project creates a custom TreeView control named "CustomTreeView" to achieve this goal.

CSASPNETRemoteUploadAndDownload and VBASPNETRemoteUploadAndDownload
This code sample was created in response to a code sample request from our new code sample request function for customers. The code samples demonstrate uploading files to and downloading files from a remote HTTP or FTP server. In .NET Framework 2.0 and higher versions, there are some lightweight class libraries which support HTTP and FTP protocol transmission. By using these classes, we can achieve this programming requirement.

CSASPNETImageEditUpload and VBASPNETImageEditUpload
This demo shows how to insert, edit and update a common image of type "jpg", "png", "gif" or "bmp". We mainly use two different SqlDataSources with the same database to bind to a GridView and a FormView in order to establish the "cascading" effect. Besides, we apply our own ImageHandler to encode or decode images of different types, and use the context to output the image stream.
We explicitly assign the binary streams of images in the "FormView_ItemInserting" or "FormView_ItemUpdating" event to keep the stream synchronized between what we see on an aspx page and what is really stored in the database.

WebBrowser Control, Network and other Windows General New Code Samples

CSWebBrowserSuppressError and VBWebBrowserSuppressError
The sample demonstrates how to make the WebBrowser control suppress errors, such as script errors, navigation errors and so on.

CSWebBrowserWithProxy and VBWebBrowserWithProxy
The sample demonstrates how to make the WebBrowser control use a proxy server.

CSWebDownloadProgress and VBWebDownloadProgress
The sample demonstrates how to show progress during a download. It also supplies features to Start, Pause, Resume and Cancel a download.

CppSetDesktopWallpaper, CSSetDesktopWallpaper and VBSetDesktopWallpaper
This code sample application allows you to select an image, view a preview (resized smaller to fit if necessary), select a display style among Tile, Center, Stretch, Fit (Windows 7 and later) and Fill (Windows 7 and later), and set the image as the Desktop wallpaper.

CSWindowsServiceRecoveryProperty and VBWindowsServiceRecoveryProperty
The CSWindowsServiceRecoveryProperty example demonstrates how to use ChangeServiceConfig2 to configure the service "Recovery" properties in C#. This example covers all the options you can see on the service "Recovery" tab, including setting the "Enable actions for stops with errors" option in Windows Vista and later operating systems. This example also shows how to grant the shutdown privilege to the process, so that we can configure a special option in the "Recovery" tab: "Restart Computer Options...".

New Office Development Code Samples

CSOneNoteRibbonAddIn and VBOneNoteRibbonAddIn
The code sample demonstrates a OneNote 2010 COM add-in that implements IDTExtensibility2. The add-in also supports customizing the Ribbon by implementing the IRibbonExtensibility interface. It is a skeleton OneNote add-in that developers can extend to implement more functions. The code sample was requested by a customer in our code sample request service. We expect that this could help developers in the community.

New Windows Shell Code Samples

CppShellExtPreviewHandler, CSShellExtPreviewHandler and VBShellExtPreviewHandler
In the past two months, we released the code samples of Windows Context Menu Handler, Infotip Handler, and Thumbnail Handler. This is the fourth part of the shell extension series: Preview Handler. The code samples demo the C++, C# and VB.NET implementation of a preview handler for a new file type registered with the .recipe extension. Preview handlers are called when an item is selected to show a lightweight, rich, read-only preview of the file's contents in the view's reading pane. This is done without launching the file's associated application. Windows Vista and later operating systems support preview handlers. To be a valid preview handler, several interfaces must be implemented, including IPreviewHandler (shobjidl.h); IInitializeWithFile, IInitializeWithStream, or IInitializeWithItem (propsys.h); IObjectWithSite (ocidl.h); and IOleWindow (oleidl.h). There are also optional interfaces, such as IPreviewHandlerVisuals (shobjidl.h), that a preview handler can implement to provide extended support. Windows API Code Pack for Microsoft .NET Framework makes the implementation of these interfaces very easy in .NET. The example preview handler provides previews for .recipe files.
The .recipe file type is simply an XML file registered as a unique file name extension. It includes the title of the recipe, its author, difficulty, preparation time, cook time, nutrition information, comments, an embedded preview image, and so on. The preview handler extracts the title, comments, and the embedded image, and displays them in a preview window.

In response to many customers' requests, we added setup projects to every shell extension sample in this release. Those setup projects allow you to deploy the shell extensions to your end users' machines.

----------

Download address: http://1code.codeplex.com/releases/view/57459#DownloadId=185534

Updated code sample index categorized by technologies: http://1code.codeplex.com/wikipage?title=All-In-One%20Code%20Framework%20Sample%20Catalog (it also allows you to download individual code samples instead of the entire All-In-One Code Framework sample package.)

If you have any feedback for us, please email: [email protected]. We look forward to your comments.

    Read the article
