Search Results

Search found 10594 results on 424 pages for 'vizioz umbraco blog'.


  • SQL SERVER – Installing AdventureWorks for SQL Server 2011

    - by pinaldave
    I just began working with SQL Server 2011 "Denali" CTP1. The very first thing I realized is that there is no AdventureWorks sample database available for Denali. I quickly searched online and reached the Microsoft documentation, which explains how to install (attach) AdventureWorks for SQL Server 2011 Denali. Download AdventureWorks from here, then run the following script (replace the path with the location of your .mdf file):

        CREATE DATABASE AdventureWorks2008R2
        ON (FILENAME = 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_Data.mdf')
        FOR ATTACH_REBUILD_LOG;

    When you run the above script it prints the following messages, and you are DONE!

        File activation failure. The physical file name "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdventureWorks2008R2_Log.ldf" may be incorrect.
        New log file 'C:\SQL 11 CTP1\CTP1\AdventureWorks2008R2_log.ldf' was created.
        Converting database 'AdventureWorks2008R2' from version 679 to the current version 684.
        Database 'AdventureWorks2008R2' running the upgrade step from version 679 to version 680.
        Database 'AdventureWorks2008R2' running the upgrade step from version 680 to version 681.
        Database 'AdventureWorks2008R2' running the upgrade step from version 681 to version 682.
        Database 'AdventureWorks2008R2' running the upgrade step from version 682 to version 683.
        Database 'AdventureWorks2008R2' running the upgrade step from version 683 to version 684.

    I will soon write about my experience with Denali. One early observation: SQL Server Management Studio has started to look a lot like Visual Studio. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
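
    Once the attach completes, a quick way to confirm the database is online is to query sys.databases; a minimal sketch:

        -- verify the attached database and its state
        SELECT name, state_desc, compatibility_level
        FROM sys.databases
        WHERE name = 'AdventureWorks2008R2';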


  • SQLAuthority News – Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 – Microsoft Whitepaper

    - by pinaldave
    I recently presented a session on Statistics and Best Practices at Virtual Tech Days on Nov 22, 2010. The session was very popular, and I received many questions right after it. The number one question was where everybody can get further information. I am very happy that my session created some curiosity about one of the most important features of SQL Server. Statistics are the heart of SQL Server. Microsoft has published a white paper on how statistics are used by the Query Optimizer. Here is the abstract of that white paper from Microsoft. Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 Writers: Eric N. Hanson and Yavor Angelov Microsoft SQL Server 2008 collects statistical information about indexes and column data stored in the database. These statistics are used by the SQL Server query optimizer to choose the most efficient plan for retrieving or updating data. This paper describes what data is collected, where it is stored, and which commands create, update, and delete statistics. By default, SQL Server 2008 also creates and updates statistics automatically, when such an operation is considered to be useful. This paper also outlines how these defaults can be changed on different levels (column, table, and database). In addition, it presents how certain query language features, such as Transact-SQL variables, interact with use of statistics by the optimizer, and it provides guidance for using these features when writing queries so you can obtain good query performance. Link to white paper Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
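
    If you want to explore what the paper describes, a quick way to see the statistics the optimizer has available is to query sys.stats and then inspect a single statistic with DBCC SHOW_STATISTICS. A minimal sketch; the table and statistic names are illustrative:

        -- list the statistics that exist on a table
        SELECT s.name,
               s.auto_created,
               STATS_DATE(s.object_id, s.stats_id) AS last_updated
        FROM sys.stats AS s
        WHERE s.object_id = OBJECT_ID('Sales.SalesOrderDetail');

        -- show the header, density vector, and histogram of one statistic
        DBCC SHOW_STATISTICS ('Sales.SalesOrderDetail', 'IX_SalesOrderDetail_ProductID');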


  • SQLAuthority News – Spot the SQLAuthority Baby Contest – SQL Server Cheat Sheet

    - by pinaldave
    Last year during TechEd India 2009, the SQL Server Cheat Sheets were an instant hit. Yesterday, when I announced that I am going to attend TechEd India 2010 at Bangalore, I received many requests for them. I have only 30 copies available at this moment. I will print more copies after this event. For the moment, I am going to run a quick contest to win a SQL Server Cheat Sheet during this event. The contest is very simple. My 7-month-old daughter will join me on this trip. She will be staying with me in the same hotel where the event is organized. Here are the details of the contest: Contest: If you spot the SQLAuthority Baby, you get one SQL Server Cheat Sheet. Rules: Every hour, the first person to spot the SQLAuthority Baby will get 1 SQL Server Cheat Sheet. If you spot her after the hourly SQL Server Cheat Sheet is given away, you still have a chance to get a copy: drop your business card or email address and we will contact you for your copy. The SQLAuthority Baby is very easy to spot. Shaivi Dave If you are not attending this event and want a copy, you can easily download it from the link below. Download SQL Server Cheat Sheet from here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SQL Cheat Sheet, TechEd, TechEdIn


  • SQL SERVER – Resolving SQL Server Connection Errors – SQL in Sixty Seconds #030 – Video

    - by pinaldave
    One of the most famous errors related to SQL Server is about connecting to SQL Server itself. Here is how it goes: most developers have worked with SQL Server and know pretty much every error they face during development. However, they hardly ever install a fresh SQL Server. As the installation of SQL Server is a rare occasion, unless you are a DBA responsible for such instances, the errors faced during installation are pretty rare as well. I have earlier written an article which describes how to resolve errors related to the SQL Server connection. Even though the step-by-step directions are pretty simple, many first-time IT professionals are not able to figure out how to resolve this error. I have quickly built a video which covers most of the solutions for resolving the connection error. In the Fix SQL Server Connection Error article the following workarounds are described: SQL Server Services, TCP/IP Settings, Firewall Settings, Enable Remote Connection, Browser Services, Firewall exception of sqlbrowser.exe, Recreating Alias. Related Tips in SQL in Sixty Seconds: SQL SERVER – FIX : ERROR : (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: ) SQL SERVER – Could not connect to TCP error code 10061: No connection could be made because the target machine actively refused it SQL SERVER – Connecting to Server Using Windows Authentication by SQLCMD SQL SERVER – Fix : Error: 15372 Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance. The connection will be closed SQL SERVER – Dedicated Access Control for SQL Server Express Edition – An error occurred while obtaining the dedicated administrator connection (DAC) port. SQL SERVER – Fix : Error: 4064 – Cannot open user default database. Login failed. Login failed for user What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
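
    Once you can connect from a client, a quick sanity check is to confirm which protocol and authentication scheme the session actually used; a minimal sketch using a standard DMV:

        -- which protocol (TCP, named pipes, shared memory) did this session use?
        SELECT session_id, net_transport, auth_scheme, client_net_address
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;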


  • SQL SERVER – Find Max Worker Count using DMV – 32 Bit and 64 Bit

    - by pinaldave
    During several recent training courses, I found it very interesting that the Worker Thread is not well known to everyone despite the fact that it is a very important feature. At some point in the discussion, one of the attendees mentioned that we can double the Worker Threads if we double the CPUs (add the same number of CPUs that we have on the current system). That discussion triggered this quick article. Here is the DMV which can be used to find out the Max Worker Count: SELECT max_workers_count FROM sys.dm_os_sys_info Let us run the above query on my system and look at the results. As my system is 64 bit and has two CPUs, the Max Worker Count is displayed as 512. To address the previous discussion, adding more CPUs does not necessarily double the Worker Count. In fact, the logic behind this is as follows: For x86 (32-bit), up to 4 logical processors: max worker threads = 256. For x86 (32-bit), more than 4 logical processors: max worker threads = 256 + ((# Procs – 4) * 8). For x64 (64-bit), up to 4 logical processors: max worker threads = 512. For x64 (64-bit), more than 4 logical processors: max worker threads = 512 + ((# Procs – 4) * 16). In addition to this, you can configure the Max Worker Threads by using SSMS: go to the Server Node >> right-click and select Properties >> select the Processors page and modify the setting under Worker Threads. According to Books Online, the default Worker Threads setting is appropriate for most systems. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology Tagged: SQL DMV
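
    To check the formula against a live instance, you can compare the value reported by the DMV with the computed expectation. A minimal sketch, assuming a 64-bit instance (use a base of 256 and a step of 8 for 32-bit):

        -- compare actual max worker threads with the Books Online formula (x64)
        SELECT cpu_count,
               max_workers_count,
               CASE WHEN cpu_count <= 4 THEN 512
                    ELSE 512 + (cpu_count - 4) * 16
               END AS expected_max_workers
        FROM sys.dm_os_sys_info;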


  • SQL SERVER – Color Coding SQL Server Management Studio Status Bar – SQL in Sixty Seconds #023 – Video

    - by pinaldave
    I often see developers executing unplanned code on the production server when they actually want to execute it on the development server. Developers and DBAs get confused because, when they use SQL Server Management Studio (SSMS), they forget to pay attention to the server they are connecting to. It is very easy to fix this problem: you can select a different color for each server (in SSMS, the color is set in the Connect to Server dialog under Options >> Connection Properties >> Use custom color). Once you have a different color for each server in the status bar, it is easier for developers to notice which server they are about to execute a script against. Personally, when I work on SQL Server development, here is the color code which I follow: green for my development server, blue for my staging server, and red for my production server. Honestly, the specific colors do not signify much; a different color for each server is the key. More Tips on SSMS in SQL in Sixty Seconds: Generate Script for Schema and Data in SQL Server – SQL in Sixty Seconds #021 Remove Debug Button in SQL Server Management Studio – SQL in Sixty Seconds #020 Three Tricks to Comment T-SQL in SQL Server Management Studio – SQL in Sixty Seconds #019 Importing CSV into SQL Server – SQL in Sixty Seconds #018 Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video


  • CodePlex Daily Summary for Sunday, March 14, 2010

    CodePlex Daily Summary for Sunday, March 14, 2010

    New Projects
    - BeerMath.net: BeerMath.net lets brewers calculate expected values for their recipes. Written entirely in C#, it can be used in any .Net language.
    - Bible Study: This project involves the creation of software that provides the user with flexible and powerful tools for reading and studying the...
    - E-Messenger: Detailed description: development of a desktop (thick-client) instant messaging and file sharing application internal to ESPRIT...
    - Facebook Azure Toolkit: The Facebook Azure Toolkit provides a flexible and scalable hosting platform for the smallest and largest of Facebook applications. This toolkit hel...
    - Gherkin editor: A simple text editor to write specifications using Gherkin. The editor supports code completion, syntax highlighting, spell checker and more.
    - Mydra Center: Mydra Center is a media center with the particularity of being very flexible, allowing developers to extend it and add new features. The philosophy be...
    - MyTwits - A rich Twitter client for Windows powered by WPF: MyTwits is a free Twitter client for Windows XP/Vista/7 powered by WPF which gives you the freedom to tweet right from your desktop. You can do almost a...
    - na laborke: aaaaaaaaaaaaaaasssssssssssssssddddddddddddddddddddfffffffffffff
    - NMTools: The "Network Management Tools" (NMTools) complete OpenSLIM CMDB capabilities with Network Discovery, Automation and Configuration Managem...
    - orionSRO: This project aims to make a fully functional server.
    - Project Naduvar: Project Naduvar is a centralized locking service for distributed systems. You can use this service in any of your existing distributed applications. ...
    - Silverlight Input Keyboard: Silverlight Input Keyboard and Behaviors.
    - uh: uh.py is a command line tool that helps developers porting native projects from a case-insensitive filesystem to a case-sensitive filesystem by sea...

    New Releases
    - AmiBroker Plug-ins with C#. A non official AmiBroker Plug-in SDK: AmiBroker Plug-in SDK v0.0.3: Small changes.
    - AmiBroker Plug-ins with C#. A non official AmiBroker Plug-in SDK: AmiBroker Plug-in SDK v0.0.4: Small updates.
    - AStyle AddIn for SharpDevelop (Alex): 2.0 Production: #D 3.* add-in with updated GUI elements.
    - Coding Cockerel code samples: Validation with ASP .NET MVC and jQuery: Code sample related to the following blog post, http://codingcockerel.co.uk/consistent-validation-with-asp-net-mvc-and-jquery/.
    - CoreSystem Library: Release - 1.0.3725.10575: This release contains a new class, Crypto, which makes encryption and decryption of strings easy; it uses TripleDES.
    - Crystal Mapper: Release - 2.0.3725.11614: This is a preview of release 2.0* that I promised; it contains the following new features: tracking dirty entities and a Save function to save all ...
    - Digital Media Processing Project 1: Image Processor: Image Processor Alpha: First release. Features include: Curve Adjustment Tool, Region Growing Segmentation, Threshold Segmentation, Gaussian/Butterworth High/Low pass filter...
    - Exepack.NET: Exepack.NET version 0.03 beta: Exepack.NET is an executable file compressor for the .NET Framework. It allows you to package your .NET application consisting of an executable file and sever...
    - Export code as Code Snippet - Addin for Visual Studio 2008/2010 RC: VS 2010 Release Candidate: This release targets Visual Studio 2010 Release Candidate. It includes full Visual Basic 2010 source code. Fixes already available in previous ve...
    - Facebook Azure Toolkit: 0.9 Beta: This is the initial beta release.
    - Family Tree Analyzer: Version 1.0.5.0: Change the way Census & Individual report columns are sized so that the user can resize them later. Add filter to exclude individuals over...
    - Home Access Plus+: v3.1.2.1: Release change log: added SSL SMTP; added SSL authentication. File changes: ~/bin/CHS Extranet.dll, ~/bin/CHS Extranet.pdb, ~/we...
    - Home Access Plus+: v3.1.3.1: Release change log: fixed Help Desk. File changes: ~/bin/CHS Extranet.dll, ~/bin/CHS Extranet.pdb, ~/helpdesk/*.htm
    - IceChat: IceChat 2009 Alpha 11.6 Full Install: Full installer; installs IceChat 2009 and the emoticons, and will also download .Net Framework 2.0 if needed.
    - IceChat: IceChat 2009 Alpha 11.6 Simple Binaries: This is simply the IceChat2009.exe and the IPluginIceChat.dll needed to run IceChat 2009. It is not an installer and does not include emoticons.
    - IceChat: IceChat 2009 Alpha 11.6 Source Code: IceChat 2009 Alpha 11.6 source code.
    - Lunar Phase Silverlight Gadget: Lunar Phase RC: Stable release. 6 languages, auto refresh. Name/light problem fixed.
    - Miracle OS: Miracle OS Alpha 0.001: Our first release is the Alpha 0.001. Miracle OS doesn't work at all, but we work on it. You too? Please help us.
    - MyTwits - A rich Twitter client for Windows powered by WPF: MyTwits BETA 1: I'm happy to release the first BETA version of MyTwits. Just download the zip file attached, run setup.exe, and you are done! If you've any problem...
    - MyTwits - A rich Twitter client for Windows powered by WPF: MyTwits Source BETA 1: I'm providing you just a project file; I'll upload the complete source code once I have fine-tuned the code.
    - NMock3: NMock3 - Beta 5, .NET 3.5: Highlights of this release: tutorials have been updated and are in a much better place now (they compile); the public API is getting locked down. Void me...
    - Project Naduvar: com.declum.naduvar.locking: First release.
    - QueryToGrid Module for DotNetNuke®: QueryToGrid Module version 01.00.01: This module is a proof of concept for both using AJAX in a DotNetNuke® module and for using SQL in a module. »»» IMPORTANT NOTE ««« Using this mo...
    - SCSI Interface for Multimedia and Block Devices: Release 10 - Almost like a commercial burner!!: I made many changes in the ISOBurn program in this version, making it much more user-friendly than before. You can now add, rename, and delete file...
    - Silverlight Input Keyboard: Initial Release: For more information see http://www.orktane.com/Blog/post/2009/11/09/Virtual-Input-Keyboard-Behaviours-for-Silverlight.aspx
    - The Silverlight Hyper Video Player [http://slhvp.com]: RC: The release candidate is now in place. Unfortunately, because there are aspects of it that I'm not yet ready to discuss, the code for the RC will...
    - twNowplaying: twNowplaying 1.0.0.3: Press the Twitter icon to get started, and don't forget to submit bugs to the issue tracker. What's new: this release has some minor UI fixes.
    - uh: 1.0: This is the first stable release. It isn't super full-featured, but it does the basics.
    - UriTree: UriTree 2.0.0: This release is the WPF version of this application.
    - VCC: Latest build, v2.1.30313.0: Automatic drop of latest build.
    - Vr30 OS: Blackra1n: The software was made by blackra1n for jailbreaking the iPhone and iPod touch. It is not the Vr30 OS Team's project.
    - Vr30 OS: Vr30 Operating System Live Cd 1.0: The Linux operating system made by the team. For more information go to http://vr30os.tuxfamily.org
    - WatchersNET.TagCloud: WatchersNET.TagCloud 01.01.00: What's new: decide between tags generated from the search words or create your own tag list; custom tag list changes; small bugfixes.
    - Zeta Resource Editor: Source code release 2010-03-13: New sources, some small fixes.
    - ZipStorer - A Pure C# Class to Store Files in Zip: ZipStorer 2.35: Improved UTF-8 support; correct writing of modification time for extracted files.

    Most Popular Projects
    - MetaSharp
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - ASP.NET Ajax Library
    - ASP.NET
    - Microsoft SQL Server Community & Samples

    Most Active Projects
    - Rawr
    - N2 CMS
    - BlogEngine.NET
    - patterns & practices – Enterprise Library
    - SharePoint Team-Mailer
    - Fasterflect - A Fast and Simple Reflection API
    - Caliburn: An Application Framework for WPF and Silverlight
    - jQuery Library for SharePoint Web Services
    - Calcium: A modular application toolset leveraging Prism
    - Farseer Physics Engine


  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite Development is the Oracle Application Testing Suite. The Oracle Application Testing Suite (OATS) is built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite is comprised of: 1. Oracle Load Testing, for scalability, performance, and load testing; 2. Oracle Functional Testing, for automated functional and regression testing; 3. Oracle Test Manager, for test process management, test execution, and defect tracking. Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads


  • jQuery Templates and Data Linking (and Microsoft contributing to jQuery)

    - by ScottGu
    The jQuery library has a passionate community of developers, and it is now the most widely used JavaScript library on the web today. Two years ago I announced that Microsoft would begin offering product support for jQuery, and that we'd be including it in new versions of Visual Studio going forward. By default, when you create new ASP.NET Web Forms and ASP.NET MVC projects with VS 2010 you'll find jQuery automatically added to your project. A few weeks ago during my second keynote at the MIX 2010 conference I announced that Microsoft would also begin contributing to the jQuery project. During the talk, John Resig -- the creator of the jQuery library and leader of the jQuery developer team -- talked a little about our participation and discussed an early prototype of a new client templating API for jQuery. In this blog post, I'm going to talk a little about how my team is starting to contribute to the jQuery project, and discuss some of the specific features that we are working on such as client-side templating and data linking (data-binding).

    Contributing to jQuery

    jQuery has a fantastic developer community, and a very open way to propose suggestions and make contributions. Microsoft is following the same process to contribute to jQuery as any other member of the community. As an example, when working with the jQuery community to improve templating support in jQuery, my team followed these steps: We created a proposal for templating and posted the proposal to the jQuery developer forum (http://forum.jquery.com/topic/jquery-templates-proposal and http://forum.jquery.com/topic/templating-syntax). After receiving feedback on the forums, the jQuery team created a prototype for templating and posted the prototype at the Github code repository (http://github.com/jquery/jquery-tmpl). We iterated on the prototype, creating a new fork of the templating prototype on Github to suggest design improvements. Several other members of the community also provided design feedback by forking the templating code. There has been an amazing amount of participation by the jQuery community in response to the original templating proposal (over 100 posts in the jQuery forum), and the design of the templating proposal has evolved significantly based on community feedback. The jQuery team is the ultimate decision-maker on what happens with the templating proposal: they might include it in jQuery core, make it an official plugin, or reject it entirely. My team is excited to be able to participate in the open source process, and to make suggestions and contributions the same way as any other member of the community.

    jQuery Template Support

    Client-side templates enable jQuery developers to easily generate and render HTML UI on the client. Templates support a simple syntax that enables either developers or designers to declaratively specify the HTML they want to generate. Developers can then programmatically invoke the templates on the client, and pass JavaScript objects to them to make the rendered content completely data driven. These JavaScript objects can optionally be based on data retrieved from a server. Because the jQuery templating proposal is still evolving in response to community feedback, the final version might look very different from the version below. This blog post gives you a sense of how you can try out and use templating as it exists today (you can download the prototype by the jQuery core team at http://github.com/jquery/jquery-tmpl or the latest submission from my team at http://github.com/nje/jquery-tmpl).

    jQuery Client Templates

    You create client-side jQuery templates by embedding content within a <script type="text/html"> tag. For example, the HTML in the original post contains a <div> template container, as well as a client-side jQuery "contactTemplate" template (within the <script type="text/html"> element) that can be used to dynamically display a list of contacts. The {{= name }} and {{= phone }} expressions are used within the contact template to display the names and phone numbers of "contact" objects passed to the template. We can use the template to display either an array of JavaScript objects or a single object. The accompanying JavaScript code demonstrates how you can render a JavaScript array of "contact" objects using the template. The render() method renders the data into a string and appends the string to the "contactContainer" DIV element. When the page is loaded, the list of contacts is rendered by the template. All of this template rendering happens on the client side within the browser.

    Templating Commands and Conditional Display Logic

    The current templating proposal supports a small set of template commands, including if, else, and each statements. The number of template commands was deliberately kept small to encourage people to place more complicated logic outside of their templates. Even this small set of template commands is very useful, though. Imagine, for example, that each contact can have zero or more phone numbers, represented by a JavaScript array. A template can use the if and each template commands to conditionally display and loop over the phone numbers for each contact: if a contact has one or more phone numbers, then each of the phone numbers is displayed by iterating through them with the each template command. The jQuery team designed the template commands so that they are extensible. If you have a need for a new template command, then you can easily add new template commands to the default set of commands.

    Support for Client Data-Linking

    The ASP.NET team recently submitted another proposal and prototype to the jQuery forums (http://forum.jquery.com/topic/proposal-for-adding-data-linking-to-jquery). This proposal describes a new feature named data linking. Data linking enables you to link a property of one object to a property of another object, so that when one property changes the other property changes. It enables you to easily keep your UI and data objects synchronized within a page. If you are familiar with the concept of data-binding, then you will be familiar with data linking (in the proposal, we call the feature data linking because jQuery already includes a bind() method that has nothing to do with data-binding). Imagine, for example, that you have a page with two HTML <input> elements. JavaScript code can link the two INPUT elements to the properties of a JavaScript "contact" object that has a "name" and "phone" property: when you execute this code, the value of the first INPUT element (#name) is set to the value of the contact name property, and the value of the second INPUT element (#phone) is set to the value of the contact phone property.

    The properties of the contact object and the properties of the INPUT elements are also linked, so that changes to one are reflected in the other. Because the contact object is linked to the INPUT elements, when you request the page, the values of the contact properties are displayed. More interestingly, the values of the linked INPUT elements will change automatically whenever you update the properties of the contact object they are linked to. For example, we could programmatically modify the properties of the "contact" object using the jQuery attr() method; because our two INPUT elements are linked to the "contact" object, the INPUT element values will be updated automatically (without us having to write any code to modify the UI elements). Note that we updated the contact object using the jQuery attr() method. In order for data linking to work, you must use jQuery methods to modify the property values.

    Two Way Linking

    The linkBoth() method enables two-way data linking. The contact object and INPUT elements are linked in both directions: when you modify the value of an INPUT element, the contact object is also updated automatically. For example, you can add a client-side JavaScript click handler to an HTML button element so that, when you click the button, the property values of the contact object are displayed using an alert() dialog. If you change the value of the Name INPUT element and click the Save button, the name property of the "contact" object that the INPUT element was linked to is updated automatically. The above example is obviously trivially simple. Instead of displaying the new values of the contact object with a JavaScript alert, you can imagine instead calling a web service to save the object to a database. The benefit of data linking is that it enables you to focus on your data and frees you from the mechanics of keeping your UI and data in sync.

    Converters

    The current data linking proposal also supports a feature called converters. A converter enables you to easily convert the value of a property during data linking. For example, imagine that you want to represent phone numbers in a standard way with the "contact" object phone property. In particular, you don't want to include special characters such as ()- in the phone number; instead you only want digits and nothing else. In that case, you can wire up a converter to convert the value of an INPUT element into this format: a converter function is passed to the linkFrom() method used to link the phone property of the "contact" object with the value of the phone INPUT element, and this converter function strips any non-numeric characters from the INPUT element value before updating the phone property. Now, if you enter the phone number (206) 555-9999 into the phone input field, then the value 2065559999 is assigned to the phone property of the contact object. You can also use a converter in the opposite direction. For example, you can apply a standard phone format string when displaying a phone number from a phone property.

    Combining Templating and Data Linking

    Our goal in submitting these two proposals for templating and data linking is to make it easier to work with data when building websites and applications with jQuery. Templating makes it easier to display a list of database records retrieved from a database through an Ajax call.

    Data linking makes it easier to keep the data and user interface in sync for update scenarios. Currently, we are working on an extension of the data linking proposal to support declarative data linking. We want to make it easy to take advantage of data linking when using a template to display data. For example, imagine that you are using a template with {{link name}} and {{link price}} expressions to display an array of product objects. These expressions enable declarative data linking between the SPAN elements and properties of the product objects. The current jQuery templating prototype supports extending its syntax with custom template commands; in this case, we are extending the default templating syntax with a custom template command named "link". The benefit of using data linking with such a template is that the SPAN elements will be automatically updated whenever the underlying "product" data is updated. Declarative data linking also makes it easier to create edit and insert forms; for example, you could create a form for editing a product by using declarative data linking in the same way. Whenever you change the value of the INPUT elements in a template that uses declarative data linking, the underlying JavaScript data object is automatically updated. Instead of needing to write code to scrape the HTML form to get updated values, you can instead work with the underlying data directly, making your client-side code much cleaner and simpler.

    Downloading Working Code Examples of the Above Scenarios

    You can download this .zip file to get working code examples of the above scenarios. The .zip file includes 4 static HTML pages: Listing1_Templating.htm illustrates basic templating; Listing2_TemplatingConditionals.htm illustrates templating with the use of the if and each template commands; Listing3_DataLinking.htm illustrates data linking; Listing4_Converters.htm illustrates using a converter with data linking. You can un-zip the file to the file system and then run each page to see the concepts in action.

    Summary

    We are excited to be able to begin participating within the open-source jQuery project. We've received lots of encouraging feedback in response to our first two proposals, and we will continue to actively contribute going forward. These features will hopefully make it easier for all developers (including ASP.NET developers) to build great Ajax applications. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
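
    The code screenshots from the original post are not reproduced above, so here is a minimal sketch of the basic templating scenario, reconstructed from the {{= }} syntax and the render() method described in the text; since the proposal was still evolving, the exact prototype API may differ:

        <div id="contactContainer"></div>
        <script type="text/html" id="contactTemplate">
          <li>{{= name }}: {{= phone }}</li>
        </script>
        <script type="text/javascript">
          // illustrative data; any array of objects with name/phone works
          var contacts = [
            { name: "Scott", phone: "555-0100" },
            { name: "John", phone: "555-0101" }
          ];
          // render the template once per contact and append the result
          $("#contactTemplate").render(contacts).appendTo("#contactContainer");
        </script>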


  • SQL SERVER – Tricks to Comment T-SQL in SSMS – SQL in Sixty Seconds #019 – Video

    - by pinaldave
    Commenting code is one of the most common tasks developers perform. There are two major reasons why developers comment code: 1) during debugging and 2) documenting the code. While debugging T-SQL code I have often seen developers struggling to comment code. They spend (or waste) more time commenting and uncommenting than doing the actual debugging of the procedure. When I see a developer struggling to comment code I feel a little uncomfortable, as commenting should be a very easy task. Today we will see three quick methods to comment T-SQL code in the Query Editor. There are three different methods to comment and uncomment statements in SQL Server Management Studio: using keyboard shortcuts, using the toolbar, and using the menu bar. Method 1: Using Keyboard Shortcuts. Commenting the statement – CTRL+K, CTRL+C. Uncommenting the statement – CTRL+K, CTRL+U. Method 2: Using the Toolbar. Use the toolbar buttons (see video). Method 3: Using the Menu Bar. Commenting the statement – Menu Bar >> Edit >> Advanced >> Click on Comment Selection. Uncommenting the statement – Menu Bar >> Edit >> Advanced >> Click on Uncomment Selection. More on commenting code: Two Different Ways to Comment Code – Explanation and Example. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share with you educational material. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
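
    For reference, T-SQL itself supports two comment styles, which is what the shortcuts above toggle; a minimal sketch:

        -- a single-line comment: everything after the double hyphen is ignored
        SELECT GETDATE();  -- trailing comments work too

        /* a block comment:
           useful for commenting out multiple statements at once */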


  • SQLAuthority News – Microsoft SQL Server 2008 R2 – PowerPivot for Microsoft Excel 2010

    - by pinaldave
    Microsoft has really and truly created some buzz for PowerPivot. I have been asked to show a demo of PowerPivot recently even when I am doing relational database training. Below are the details of where everyone can download PowerPivot and its samples. Microsoft SQL Server 2008 R2 – PowerPivot for Microsoft Excel 2010 – RTM: Microsoft® PowerPivot for Microsoft® Excel 2010 provides ground-breaking technology, such as fast manipulation of large data sets (often millions of rows), streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft® SharePoint 2010. Microsoft PowerPivot for Excel 2010 Samples: download examples of the types of reports you can create. Microsoft PowerPivot for Excel 2010 Data Analysis Expressions Sample version 1.0: download this PowerPivot workbook to learn more about DAX calculations. Note: The brief description below each download link is taken from the respective download page. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • SQLAuthority News – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning Training

    - by pinaldave
    Only 3 days are left to register for the courses. This is a one-time offer with a big discount, and the deadline for course registration is 5th May, 2010. Two different courses are offered by Solid Quality Mentors: 1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave. Date: May 12-14, 2010. Price: Rs. 14,000/person for 3 days. Discount Code: ‘SQLAuthority.com’. Effective Price: Rs. 11,000/person for 3 days. 2) SharePoint 2010 – Joy Rathnayake. Date: May 10-11, 2010. Price: Rs. 11,000/person for 2 days. Discount Code: ‘SQLAuthority.com’. Effective Price: Rs. 8,000/person for 2 days. Download the complete PDF brochure. To register, either send an email to [email protected] or call +91 95940 43399. Feel free to drop me an email at pinal “at” SQLAuthority.com for any additional information and clarification. Training Venue: Abridge Solutions, #90/B/C/3/1, Ganesh GHR & MSY Plaza, Vittalrao Nagar, Near Image Hospital, Madhapur, Hyderabad – 500 081. Additionally, there is the special SolidQ India Insider program, which is only available to the first few registrants of the courses. Read more details about the course here. Read my TechEd India 2010 experience here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology


  • The Most Common and Least Used 4-Digit PIN Numbers [Security Analysis Report]

    - by Asian Angel
    How ‘secure’ is your 4-digit PIN? Is your PIN far too common, or is it more unique in comparison to others? The folks over at the Data Genetics blog have put together an interesting analysis report that looks at the most common and least used 4-digit PINs chosen by people. Numerically based (0-9) 4-digit PINs only allow for a total of 10,000 possible combinations (10^4), so it stands to reason that some combinations are going to be far more common than others. The question is whether or not your personal PIN choices are among the commonly used ones or ‘stand out’ as being more unique. Note 1: Data Genetics used data condensed from released, exposed, and discovered password tables and security breaches to generate the analysis report. Note 2: The updates section at the bottom has some interesting tidbits concerning people’s use of dates and certain words for PIN generation. The analysis makes for very interesting reading, so browse on over to get an idea of where you stand with regards to your personal PIN choices.


  • IASA Sessions on Social Networking Note Influence of Millennial Generation on Insurance Technology

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, is blogging from the 2010 IASA Annual Conference and Business Show this week. Social networking continues to be a buzzword for many in the industry. Erin Esurance, the Geico Gecko, and even Nationwide's "The World's Greatest Spokesperson in the World" all have a prominent presence in the social media world. Sessions at the 2010 IASA Annual Conference and Business Show this week in Grapevine, Texas, highlighted how the millennial generation's exploding use of social media is spurring more carriers to leverage tools like Facebook, LinkedIn and other social networks to engage prospects and customers. While panelists encouraged carriers to leverage social networking tools for marketing and communications, they expressed the need for caution and corporate governance when it comes to using the tools as a part of claims, underwriting, and human resources recruitment business practices, and interactions with producers. (A previous Oracle Insurance blog entry by my colleague Susan Keuer noted that social networking and its impact on the underwriting process was also a hot topic at the recent AHOU conference.) Speaking of the millennial generation, IASA announced a new scholarship program and awarded three scholarships during the association's conference this week. The IASA Insurance Industry Collegiate Scholarship Program awards $2,000 scholarships to students in their second or third year of college who are studying an insurance-related field at a four-year college or university. The IASA scholarship committee is co-chaired by Wendy Gibson, vice president of business development for Oracle Insurance. Gibson, a long time IASA volunteer, is completing her second term on IASA's volunteer management team as vice president of industry relations. Helen Pitts is senior product marketing manager for Oracle Insurance.


  • SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    - by pinaldave
    This article was written as a response to T-SQL Tuesday #005 – Reporting. The three most important components of any computer or server are the CPU, memory, and hard disk. This post talks about how to get more details about these three components using Management Data Collection, which generates reports for them by default. Configuring Data Collection is a very easy task and can be done very quickly. Please note: there are many different ways to get reports about CPU, memory and IO. You can use DMVs, Extended Events, as well as Perfmon to trace the data. Keeping with the T-SQL Tuesday subject of reporting, this post is a visual tutorial for quickly configuring Data Collection and generating reports. From Books Online: The data collector is a core component of the Data Collection platform for SQL Server 2008 and the tools that are provided by SQL Server. The data collector provides one central point for data collection across your database servers and applications. This collection point can obtain data from a variety of sources and is not limited to performance data, unlike SQL Trace. Let us go over the visual tutorial on how quickly Data Collection can be configured: expand the Management node under the main server node and follow the directions in the pictures. These reports can be exported to PDF as well as Excel by right-clicking on the report. Now let us see additional screenshots of the reports. The reports are very self-explanatory but can be drilled down into for further details. Click on an image to make it larger. Well, as we can see, it is very easy to configure and utilize this tool. Do you use this tool in your organization? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports
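
    Once the data collector is configured, you can confirm from T-SQL which collection sets exist and whether they are running; a minimal sketch against the msdb catalog views:

        -- list collection sets and their current state (0 = not running, 1 = running)
        SELECT name, is_running
        FROM msdb.dbo.syscollector_collection_sets;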


  • SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010 – Next Pune

    - by pinaldave
    My recent SQL Server Performance Tuning Seminar in Colombo was oversubscribed, with a total of 35 attendees. You can read the details here: SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4 – 5, 2010. I recently completed another seminar in Hyderabad, which was again a blazing success. We had 25 attendees and a wonderful time together. There is one thing very different between usual classroom training and this seminar series: in this seminar series we go 100% demo oriented and dive deep into real-world scenarios. We do not do the usual theory talk. The goal of this seminar is to give anybody who attends a jump start and a deep dive on the performance tuning subject. I share many different examples and scenarios from my years of performance tuning experience. The beginning of the second day is always interesting, as I take an attendee's server as the example for the talk, and together we attempt to identify the bottleneck and see if we can resolve it. So far I have received excellent feedback on this unique session, where we pick an attendee's database and address its issues. I plan to do the same again in the next sessions. The next seminar is in Pune, and I am very excited about it. Date and Time: December 4-5, 2010, 10 AM to 6 PM. The Pride Hotel, 05, University Road, Shivaji Nagar, Pune – 411 005. Tel: 020 255 34567. Click here for the agenda of the seminar. Instead of writing more details, I will let the photos do the talking for the latest Hyderabad seminar. (Photos: Hotel Amrutha Castle; King Arthur's Court; Pinal presenting the seminar; seminar attendees; group photo of the Hyderabad seminar attendees; seminar support staff – Nupur and Shaivi.) Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology


  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First was released by the data team at Microsoft. Entity Framework Code First provides a pretty powerful code-centric way to work with databases. When it comes to associations, it brings ultimate flexibility. I'm a big fan of the EF Code First approach and am planning to explain association mapping with Code First in a series of blog posts, and this one is dedicated to Complex Types. If you are new to the Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around relationship mapping.

    What is Mapping?
    Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases.

    What is Relationship Mapping?
    A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects.

    Types of Relationships
    There are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types: One-to-one relationships: a relationship where the maximum of each of its multiplicities is one. One-to-many relationships: also known as many-to-one relationships, these occur when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types: Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object; to put this in EF terminology, when a navigation property exists only on one of the association ends and not on both. Bi-directional relationships: when the objects on both ends of the relationship know of each other (i.e. a navigation property is defined on both ends).

    How Object Relationships Are Implemented in POCO Domain Models
    When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that references the other object (e.g. an Address property on the User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of the other object.

    How Relational Database Relationships Are Implemented
    Relationships in relational databases are maintained through the use of foreign keys. A foreign key is a data attribute (or attributes) that appears in one table and must be the primary key or another candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the "one table" to the "many table". We could also choose to implement a one-to-many relationship via an associative table (aka join table), effectively making it a many-to-many relationship.

    Introducing the Model
    Now, let's review the model that we are going to use in order to implement a Complex Type with Code First. It's a simple object model which consists of two classes: User and Address. Each user can have one billing address. The address information of a User is modeled as a separate class, as you can see in the UML model below. In object-modeling terms, this association is a kind of aggregation—a part-of relationship. Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole.

    Fine-grained Domain Models
    The motivation behind this design was to achieve a fine-grained domain model. In crude terms, fine-grained means "more classes than tables". For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it's much better to model this using an Address class, where User has BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse, and is more understandable.

    Complex Types: Splitting a Table Across Multiple Types
    Back to our model: there is no difference between this composition and other, weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: a composed class is often a candidate Complex Type. C# has no concept of composition—a class or property can't be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on the Address class), which makes sense because when it comes to the database everything is going to be saved into one single table.

    How to Implement a Complex Type with Code First
    Code First has a concept of Complex Type Discovery that works based on a set of conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation:

        public class User
        {
            public int UserId { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Username { get; set; }
            public Address Address { get; set; }
        }

        public class Address
        {
            public string Street { get; set; }
            public string City { get; set; }
            public string PostalCode { get; set; }
        }

        public class EntityMappingContext : DbContext
        {
            public DbSet<User> Users { get; set; }
        }

    With Code First, this is all of the code we need to write to create a complex type; we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API.

    Database Schema
    The mapping result for this object model is as follows:

    Limitations of This Mapping
    There are two important limitations to classes mapped as Complex Types: Shared references are not possible: the Address Complex Type doesn't have its own database identity (primary key) and so can't be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address). No elegant way to represent a null reference: there is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object even if the values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database.

    Summary
    In this post we learned about fine-grained domain models, of which complex type is just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. Complex type is usually the simplest way to represent one-to-one relationships, and because the lifecycle is almost always dependent in such a case, it's either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that do not have the limitations of complex types.

    References: ADO.NET team blog; Mapping Objects to Relational Databases; Java Persistence with Hibernate
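
    As a quick way to see the mapping in action, the sketch below saves a User with the Code First API from the post; connection-string setup is omitted and all values are illustrative:

        using (var ctx = new EntityMappingContext())
        {
            // Address has no key of its own; its columns are saved into the Users table
            ctx.Users.Add(new User
            {
                FirstName = "John",
                LastName = "Doe",
                Username = "jdoe",
                Address = new Address { Street = "1 Main St", City = "Seattle", PostalCode = "98101" }
            });
            ctx.SaveChanges();
        }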


  • SQLAuthority News – Statistics and Best Practices – Virtual Tech Days – Nov 22, 2010

    - by pinaldave
    I am honored to have been invited by Microsoft to speak at Virtual TechDays on Nov 22, 2010. I will be speaking on my favorite subject of Statistics and Best Practices. This exclusive online event will have 80 deep technical sessions across 3 days, and attendance is completely FREE. There are dedicated tracks for Architects, Software Developers/Project Managers, Infrastructure Managers/Professionals and Enterprise Developers. So, REGISTER for this exclusive online event TODAY. Statistics and Best Practices. Timing: 11:45am-12:45pm. Statistics are a key part of getting solid performance. In this session we will go over the basics of statistics and various best practices related to them. We will go over frequently asked questions such as: a) when to update statistics, b) the difference between sync and async updates of statistics, c) the best method to update statistics, and d) the optimal interval for updating statistics. We will also discuss the pros and cons of statistics updates. This session is for all of you – whether you're a DBA or a developer! You can register for this event over here. If you have never attended my session on this subject, I strongly suggest that you attend, as this is going to be a very interesting conversation between us. If you have attended this session earlier, it will contain some new information which will surely be interesting to all. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SQL Statistics, Statistics
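
    As background for item (b) in the session abstract, synchronous versus asynchronous statistics updates are controlled at the database level; a minimal sketch (the database and table names are illustrative):

        -- let queries compile with stale statistics while a refresh runs in the background
        ALTER DATABASE AdventureWorks SET AUTO_UPDATE_STATISTICS_ASYNC ON;

        -- refresh statistics on one table immediately, with a full scan
        UPDATE STATISTICS dbo.SalesOrderDetail WITH FULLSCAN;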


  • PDC and Tech-Ed Europe Slides and Code

    - by Stephen Walther
    I spent close to three weeks on the road giving talks at Tech-Ed Europe (Berlin), PDC (Los Angeles), and the Los Angeles Code Camp. I got to talk about two topics that I am very passionate about: ASP.NET MVC and Ajax. Thanks everyone for coming to all my talks! At PDC, I announced all of the new features of our ASP.NET Ajax Library. In particular, I made five big announcements: ASP.NET Ajax Library Beta Released – you can download the beta from Ajax.CodePlex.com. ASP.NET Ajax Library includes the AJAX Control Toolkit – you can use the Ajax Control Toolkit with ASP.NET MVC. ASP.NET Ajax Library being contributed to the CodePlex Foundation – ASP.NET Ajax is the founding project for the CodePlex Foundation (see CodePlex.org). ASP.NET Ajax Library is receiving full product support – complain to Microsoft Customer Service at midnight on Christmas. ASP.NET Ajax Library supports jQuery integration – use (almost) all of the Ajax Control Toolkit controls in jQuery. For more details on the Ajax announcements, see James Senior's blog entry at: http://jamessenior.com/post/News-on-the-ASPNET-Ajax-Library.aspx In my MVC talks, I discussed the new features being introduced with ASP.NET MVC 2. Here are three of my favorite new features: Client Validation – client validation done the right way: do your validation in your model and let the validation bubble up to JavaScript code automatically. Areas – divide your ASP.NET MVC application into sub-applications; great for managing both medium and large projects. RenderAction() – finally, a way to add content to master pages and multiple pages without doing anything strange or twisted (a short sketch follows below). There are demos of all of these features in the MVC downloads below. Here are the PowerPoint slides and code from all of the talks: PDC – Introducing the New ASP.NET Ajax Library PDC – ASP.NET MVC: The New Stuff Tech-Ed Europe - What's New in Microsoft ASP.NET Model-View-Controller Tech-Ed Europe - Microsoft ASP.NET AJAX: Taking AJAX to the Next Level
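
    A minimal sketch of the RenderAction() idea described above, assuming an ASP.NET MVC 2 project; the controller, action, and view names are illustrative:

        // NavigationController.cs -- a child action that renders a menu
        public class NavigationController : Controller
        {
            [ChildActionOnly] // only reachable via Html.RenderAction/Html.Action
            public ActionResult Menu()
            {
                return PartialView(); // renders Views/Navigation/Menu.ascx
            }
        }

        <%-- Site.Master: inject the menu into the master page --%>
        <% Html.RenderAction("Menu", "Navigation"); %>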

    Read the article

  • SQL SERVER – Delay Command in SQL Server – SQL in Sixty Seconds #055

    - by Pinal Dave
    Have you ever needed a WAIT or DELAY function in SQL Server? Well, I personally have never needed it, but I see lots of people asking for the same. The function seems to be needed when developers are working with asynchronous applications or programs – applications where the user has to wait for a while for another application to complete its processing. If you are an application developer, it is very easy for you to make the application wait for a command; however, in SQL I personally have rarely used this feature. Still, I have seen lots of developers asking for this feature in SQL Server, hence I have decided to build this quick video on the subject. We can use WAITFOR DELAY 'timepart' to make a SQL statement wait. Let us see the same concept in the following SQL in Sixty Seconds video: Related Tips in SQL in Sixty Seconds: Delay Function – WAITFOR clause – Delay Execution of Commands What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity
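    For quick reference, here is a minimal sketch of the WAITFOR syntax shown in the video; the ten-second delay and the clock time are arbitrary example values.

    -- Pause the current batch for 10 seconds, then continue
    WAITFOR DELAY '00:00:10';
    SELECT GETDATE() AS TimeAfterDelay;

    -- Or block until a specific time of day before continuing
    WAITFOR TIME '23:30:00';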

    Read the article

  • Oracle Linux Forum

    - by rickramsey
    This forum includes live chat so you can tell Wim, Lenz, and the gang what you really think. Linux Forum - Tuesday March 27 Since Oracle recently made Release 2 of its Unbreakable Enterprise Kernel available (see Lenz's blog), we're following up with an online forum with Oracle's Linux executives and engineers. Topics will be: 9:30 - 9:45 am PT Oracle's Linux Strategy Edward Screven, Oracle's Chief Corporate Architect, and Wim Coekaerts, Senior VP of Linux and Virtualization Engineering, will explain Oracle's Linux strategy, the benefits of Oracle Linux, Oracle's role in the Linux community, and the Oracle Linux roadmap. 9:45 - 10:00 am PT Why Progressive Insurance Chose Oracle Linux John Dome, Lead Systems Engineer at Progressive Insurance, outlines why they selected Oracle Linux with the Unbreakable Enterprise Kernel to reduce cost and increase the performance of database applications. 10:00 - 11:00 am PT What's New in Oracle Linux Oracle engineers walk you through new features in Oracle Linux, including zero-downtime updates with Ksplice, Btrfs and OCFS2, DTrace for Linux, Linux Containers, vSwitch and T-Mem. 11:00 am - 12:00 pm PT Get More Value from your Linux Vendor Why Oracle Linux delivers more value than Red Hat Enterprise Linux, including better support at lower cost, best practices for deployments, extreme performance for cloud deployments and engineered systems, and more. Date: Tuesday, March 27, 2012 Time: 9:30 AM PT / 12:30 PM ET Duration: 2.5 hours Register here. - Rick

    Read the article

  • Reusable VS clean code - where's the balance?

    - by Radek Šimko
    Let's say I have a data model for blog posts and two use-cases of that model - getting all blog posts and getting only blog posts written by a specific author. There are basically two ways I can realize that. 1st model class Articles { public function getPosts() { return $this->connection->find() ->sort(array('creation_time' => -1)); } public function getPostsByAuthor( $authorUid ) { return $this->connection->find(array('author_uid' => $authorUid)) ->sort(array('creation_time' => -1)); } } 1st usage (presenter/controller) if ( $_GET['author_uid'] ) { $posts = $articles->getPostsByAuthor($_GET['author_uid']); } else { $posts = $articles->getPosts(); } 2nd model class Articles { public function getPosts( $authorUid = NULL ) { $query = array(); if( $authorUid !== NULL ) { $query = array('author_uid' => $authorUid); } return $this->connection->find($query) ->sort(array('creation_time' => -1)); } } 2nd usage (presenter/controller) $posts = $articles->getPosts( $_GET['author_uid'] ); To sum up the (dis)advantages: 1) cleaner code, 2) more reusable code. Which one do you think is better and why? Is there any kind of compromise between those two?

    Read the article

  • Building an HTML5 App with ASP.NET

    - by Stephen Walther
    I’m teaching several JavaScript and ASP.NET workshops over the next couple of months (thanks everyone!) and I thought it would be useful for my students to have a really easy-to-use JavaScript reference. I wanted a simple interactive JavaScript reference and I could not find one, so I decided to put together one of my own. I decided to use the latest features of JavaScript, HTML5 and jQuery such as local storage, offline manifests, and jQuery templates. What could be more appropriate than building a JavaScript Reference with JavaScript? You can try out the application by visiting: http://Superexpert.com/JavaScriptReference Because the app takes advantage of several advanced features of HTML5, it won’t work with Internet Explorer 6 (but really, you should stop using that browser). I have tested it with IE 8, Chrome 8, Firefox 3.6, and Safari 5. You can download the source for the JavaScript Reference application at the end of this article. Superexpert JavaScript Reference Let me provide you with a brief walkthrough of the app. When you first open the application, you see the following lookup screen: As you type the name of something from the JavaScript language, matching results are displayed: You can click the details link for any entry to view its details in a modal dialog: Alternatively, you can click on any of the tabs -- Objects, Functions, Properties, Statements, Operators, Comments, or Directives -- to filter results by type of syntax. For example, you might want to see a list of all JavaScript built-in objects: You can log in to the application to make modifications: After you log in, you can add, update, or delete entries in the reference database: HTML5 Local Storage The application takes advantage of HTML5 local storage to store all of the reference entries on the local browser. IE 8, Chrome 8, Firefox 3.6, and Safari 5 all support local storage. When you open the application for the first time, all of the reference entries are transferred to the browser. The data is stored persistently. Even if you shut down your computer and return to the application many days later, the data does not need to be transferred again. Whenever you open the application, the app checks with the server to see if any of the entries have been updated on the server. If there have been updates, then only the updates are transferred to the browser and the updates are merged with the existing entries in local storage. After the reference database has been transferred to your browser once, only changes are transferred in the future. You get two benefits from using local storage. First, the application loads very fast and works very fast after the data has been loaded once. The application does not query the server whenever you filter or view entries. All of the data is persisted in the browser. Second, you can browse the JavaScript reference even when you are not connected to the Internet (when you are on the proverbial airplane). The JavaScript Reference works as an offline application for browsers that support offline applications (unfortunately, not IE). When using Google Chrome, you can easily view the contents of local storage by selecting Tools, Developer Tools (CTRL-SHIFT I) and selecting Storage, Local Storage: The JavaScript Reference app stores two items in local storage: entriesLastUpdated and entries.
HTML5 Offline App For browsers that support HTML5 offline applications – Chrome 8 and Firefox 3.6 but not Internet Explorer – you do not need to be connected to the Internet to use the JavaScript Reference. The JavaScript Reference can execute entirely on your machine just like any other desktop application. When you first open the application with Firefox, you are presented with the following warning: Notice the notification bar that asks whether you want to accept offline content. If you click the Allow button, then all of the files (generated ASPX, images, CSS, JavaScript) needed for the JavaScript Reference will be stored on your local computer. Automatic Script Minification and Combination All of the custom JavaScript files are combined and minified automatically whenever the application is built with Visual Studio. All of the custom scripts are contained in a folder named App_Scripts: When you perform a build, the combine.js and combine.debug.js files are generated. The Combine.config file contains the list of files that should be combined (importantly, it specifies the order in which the files should be combined). Here’s the contents of the Combine.config file: <?xml version="1.0"?> <combine> <scripts> <file path="compat.js" /> <file path="storage.js" /> <file path="serverData.js" /> <file path="entriesHelper.js" /> <file path="authentication.js" /> <file path="default.js" /> </scripts> </combine> jQuery and jQuery UI The JavaScript Reference application takes heavy advantage of jQuery and jQuery UI. In particular, the application uses jQuery templates to format and display the reference entries. Each of the separate templates is stored in a separate ASP.NET user control in a folder named Templates: The contents of the user controls (and therefore the templates) are combined in the default.aspx page: <!-- Templates --> <user:EntryTemplate runat="server" /> <user:EntryDetailsTemplate runat="server" /> <user:BrowsersTemplate runat="server" /> <user:EditEntryTemplate runat="server" /> <user:EntryDetailsCloudTemplate runat="server" /> When the default.aspx page is requested, all of the templates are retrieved in a single page. WCF Data Services The JavaScript Reference application uses WCF Data Services to retrieve and modify database data. The application exposes a server-side WCF Data Service named EntryService.svc that supports querying, adding, updating, and deleting entries. jQuery Ajax calls are made against the WCF Data Service to perform the database operations from the browser. The OData protocol makes this easy. Authentication is handled on the server with a ChangeInterceptor. Only authenticated users are allowed to update the JavaScript Reference entry database. JavaScript Unit Tests In order to build the JavaScript Reference application, I depended on JavaScript unit tests. I needed the unit tests, in particular, to write the JavaScript merge functions which merge entry change sets from the server with existing entries in browser local storage. In order for unit tests to be useful, they need to run fast. I ran my unit tests after each build. For this reason, I did not want to run the unit tests within the context of a browser. Instead, I ran the unit tests using server-side JavaScript (the Microsoft Script Control). The source code that you can download at the end of this blog entry includes a project named JavaScriptReference.UnitTests that contains all of the JavaScript unit tests.
JavaScript Integration Tests Because not every feature of an application can be tested by unit tests, the JavaScript Reference application also includes integration tests. I wrote the integration tests using Selenium RC in combination with ASP.NET Unit Tests. The Selenium tests run against all of the target browsers for the JavaScript Reference application: IE 8, Chrome 8, Firefox 3.6, and Safari 5. For example, here is the Selenium test that checks whether authenticating with a valid user name and password correctly switches the application to Admin Mode: [TestMethod] [HostType("ASP.NET")] [UrlToTest("http://localhost:26303/JavaScriptReference")] [AspNetDevelopmentServerHost(@"C:\Users\Stephen\Documents\Repos\JavaScriptReference\JavaScriptReference\JavaScriptReference", "/JavaScriptReference")] public void TestValidLogin() { // Run test for each controller foreach (var controller in this.Controllers) { var selenium = controller.Value; var browserName = controller.Key; // Open reference page. selenium.Open("http://localhost:26303/JavaScriptReference/default.aspx"); // Clicking the login button displays the login form selenium.Click("btnLogin"); Assert.IsTrue(selenium.IsVisible("loginForm"), "Login form appears after clicking btnLogin"); // Enter user name and password selenium.Type("userName", "Admin"); selenium.Type("password", "secret"); selenium.Click("btnDoLogin"); // Should set adminMode == true selenium.WaitForCondition("selenium.browserbot.getCurrentWindow().adminMode==true", "30000"); } } The results for running the Selenium tests appear in the Test Results window just like the unit tests: The Selenium tests take much longer to execute than the unit tests. However, they provide test coverage for actual browsers. Furthermore, if you are using Visual Studio ALM, you can run the tests automatically every night as part of your standard nightly build. You can view the Selenium tests by opening the JavaScriptReference.QATests project. Summary I plan to write more detailed blog entries about this application over the next week. I want to discuss each of the features – HTML5 local storage, HTML5 offline apps, jQuery templates, automatic script combining and minification, JavaScript unit tests, Selenium tests – in more detail. You can download the source code for the JavaScript Reference application by clicking the following link: Download You need Visual Studio 2010 and ASP.NET 4 to build the application. Before running the JavaScript unit tests, install the Microsoft Script Control. Before running the Selenium tests, start the Selenium server by running the StartSeleniumServer.bat file located in the JavaScriptReference.QATests project.

    Read the article

  • Oracle Linux Delivers Top CPU Benchmark Results on Sun Blades

    - by sergio.leunissen
    From the Performance and Best Practices blog: Fresh SPEC CPU2006 results for Sun Blade X6275 M2 Server Modules running Oracle Linux 5.5. The highlights: The dual-node Sun Blade X6275 M2 server module, equipped with two Intel Xeon X5670 2.93 GHz processors per node and running the Oracle Enterprise Linux 5.5 operating system, delivered the best SPECint_rate2006 and SPECfp_rate2006 benchmark results among all systems based on the Intel Xeon processor 5000 sequence. With a SPECint_rate2006 benchmark result of 679, the Sun Blade X6275 M2 server module, with two compute nodes per blade, delivers maximum performance for space-constrained environments. Comparing Oracle's dual-node blade to HP's dual-node blade server, based on their single-node performance, the Sun Blade X6275 M2 server module's SPECfp_rate2006 score of 241 outperforms the best published HP ProLiant BL2X220c G5 server score by 3.2x. A single node of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered a 37% improvement in SPECint_rate2006 benchmark results and a 22% improvement in SPECfp_rate2006 benchmark results compared to the previous generation Sun Blade X6275 server module. Both nodes of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered a 59% improvement on the SPECint_rate2006 benchmark and a 40% improvement on the SPECfp_rate2006 benchmark compared to the previous generation Sun Blade X6275 server module.

    Read the article

  • Quartz.Net Writing your first Hello World Job

    - by Tarun Arora
    In this blog post I’ll be covering: 01: A few things to consider before you schedule a Job using Quartz.Net 02: Setting up your solution to use Quartz.Net API 03: Quartz.Net configuration 04: Writing & scheduling a hello world job with Quartz.Net If you are new to Quartz.Net, I would recommend going through: A brief introduction to Quartz.net Walkthrough of Installing & Testing Quartz.Net as a Windows Service A few things to consider before you schedule a Job using Quartz.Net - An instance of the scheduler service - A trigger - And, last but not least, a job For example, if I wanted to schedule a script to run on the server, I should be jotting down answers to the below questions: a. Considering there are multiple machines set up with the Quartz.Net windows service, how can I choose the instance of Quartz.Net on which I want my script to be run? b. What will trigger the execution of the job? c. How often do I want the job to run? d. Do I want the job to run right away, start after a delay, or maybe start at a specific time? e. What will happen to my job if the Quartz.Net windows service is reset? f. Do I want multiple instances of this job to run concurrently? g. Can I pass parameters to the job being executed by the Quartz.Net windows service? Setting up your solution to use Quartz.Net API 1. Create a new C# Console Application project, call it “HelloWorldQuartzDotNet”, and add a reference to Quartz.Net.dll. I use the NuGet Package Manager to add the reference. This can be done by right-clicking References and choosing Manage NuGet Packages; from the NuGet Package Manager, choose Online from the left panel and in the search box on the right search for Quartz.Net. Click Install on the package “Quartz” (screenshot below). 2. Right-click the project and choose Add New Item. Add a new Interface and call it ‘IScheduledJob.cs’. Mark the Interface public and add the signature for Run. Your interface should look like below. namespace HelloWorldQuartzDotNet { public interface IScheduledJob { void Run(); } } 3. Right-click the project and choose Add New Item. Add a class and call it ‘ScheduledJob.cs’. Use this class to implement the interface ‘IScheduledJob.cs’. Look at the pseudo code in the implementation of the Run method. using System; namespace HelloWorldQuartzDotNet { class ScheduledJob : IScheduledJob { public void Run() { // Get an instance of the Quartz.Net scheduler // Define the Job to be scheduled // Associate a trigger with the Job // Assign the Job to the scheduler throw new NotImplementedException(); } } } I’ll get into the implementation in more detail, but first let’s look at a minimal sample configuration file needed for the Quartz.Net service to work.
Quartz.Net configuration In the App.Config file, copy the below configuration: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" /> </configSections> <quartz> <add key="quartz.scheduler.instanceName" value="ServerScheduler" /> <add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" /> <add key="quartz.threadPool.threadCount" value="10" /> <add key="quartz.threadPool.threadPriority" value="2" /> <add key="quartz.jobStore.misfireThreshold" value="60000" /> <add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" /> </quartz> </configuration> As you can see in the configuration above, I have included the instance name of the quartz scheduler and the thread pool type, count, and priority; the job store type has been defined as RAM. You have the option of configuring an ADO.NET job store instead. More details here. Writing & scheduling a hello world job with Quartz.Net Once fully implemented, the ScheduledJob.cs class should look like below. I’ll walk you through the details of the implementation… - GetScheduler() uses the instance name of the Quartz.Net scheduler and connects to localhost on port 555 to reach the Quartz.Net windows service. - In Run(), an attempt is made to start the scheduler in case it is in standby mode. - I have defined a job “WriteHelloToConsole” (that’s the name of the job); this job belongs to the group “IT”. Think of a group as a logical grouping feature. It helps you bucket jobs into groups. Quartz.Net gives you the ability to pause or delete all jobs in a group (We’ll look at that in some of the future posts). I have requested recovery of this job in case the Quartz.Net service fails over to the other node in the cluster. The jobType is “HelloWorldJob”. This is the class that would be called to execute the job. More details on this below… - I have defined a trigger for my job. I have called the trigger “WriteHelloToConsole”. The Trigger works on the cron schedule “0 0/1 * 1/1 * ? *”, which means fire the job once every minute. I would recommend that you look at www.cronmaker.com, a great free website to build and parse cron expressions. The trigger has priority 1, so if two jobs are due at the same time, this trigger has the higher priority and will run first. - Use the job and trigger to schedule the job. The ScheduleJob() method returns a DateTimeOffset; it is possible to see the next fire time for the job from this variable. using System.Collections.Specialized; using System.Configuration; using Quartz; using System; using Quartz.Impl; namespace HelloWorldQuartzDotNet { class ScheduledJob : IScheduledJob { public void Run() { // Get an instance of the Quartz.Net scheduler var schd = GetScheduler(); // Start the scheduler if it's in standby if (!schd.IsStarted) schd.Start(); // Define the Job to be scheduled var job = JobBuilder.Create<HelloWorldJob>() .WithIdentity("WriteHelloToConsole", "IT") .RequestRecovery() .Build(); // Associate a trigger with the Job var trigger = (ICronTrigger)TriggerBuilder.Create() .WithIdentity("WriteHelloToConsole", "IT") .WithCronSchedule("0 0/1 * 1/1 * ? *")
*") // visit http://www.cronmaker.com/ Queues the job every minute .WithPriority(1) .Build(); // Assign the Job to the scheduler var schedule = schd.ScheduleJob(job, trigger); Console.WriteLine("Job '{0}' scheduled for '{1}'", "", schedule.ToString("r")); } // Get an instance of the Quartz.Net scheduler private static IScheduler GetScheduler() { try { var properties = new NameValueCollection(); properties["quartz.scheduler.instanceName"] = "ServerScheduler"; // set remoting expoter properties["quartz.scheduler.proxy"] = "true"; properties["quartz.scheduler.proxy.address"] = string.Format("tcp://{0}:{1}/{2}", "localhost", "555", "QuartzScheduler"); // Get a reference to the scheduler var sf = new StdSchedulerFactory(properties); return sf.GetScheduler(); } catch (Exception ex) { Console.WriteLine("Scheduler not available: '{0}'", ex.Message); throw; } } } }   The above highlighted values have been taken from the Quartz.config file, this file is available in the Quartz.net server installation directory. Implementation of my HelloWorldJob Class below. The HelloWorldJob class gets called to execute the job “WriteHelloToConsole” using the once every minute trigger set up for this job. The HelloWorldJob is a class that implements the interface IJob. I’ll walk you through the details of the implementation… - context is passed to the method execute by the quartz.net scheduler service. This has everything you need to pull out the job, trigger specific information. - for example. I have pulled out the value of the jobKey name, the fire time and next fire time. using Quartz; using System; namespace HelloWorldQuartzDotNet { class HelloWorldJob : IJob { public void Execute(IJobExecutionContext context) { try { Console.WriteLine("Job {0} fired @ {1} next scheduled for {2}", context.JobDetail.Key, context.FireTimeUtc.Value.ToString("r"), context.NextFireTimeUtc.Value.ToString("r")); Console.WriteLine("Hello World!"); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   I’ll add a call to call the scheduler in the Main method in Program.cs using System; using System.Threading; namespace HelloWorldQuartzDotNet { class Program { static void Main(string[] args) { try { var sj = new ScheduledJob(); sj.Run(); Thread.Sleep(10000 * 10000); } catch (Exception ex) { Console.WriteLine("Failed: {0}", ex.Message); } } } }   This was third in the series of posts on enterprise scheduling using Quartz.net, in the next post I’ll be covering how to pass parameters to the scheduled task scheduled on Quartz.net windows service. Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article
