Search Results

Search found 10060 results on 403 pages for 'column'.


  • Redirecting to a dynamic page

    - by binarydev
    I have a page displaying blog posts (latest_posts.php) and another page that displays single blog posts (blog.php). I intend to link the image title in latest_posts.php so that it redirects to blog.php, where it would display the particular post that was clicked.

    latest_posts.php:

        <!-- Header -->
        <h2 class="underline">
          <span>What&#039;s new</span>
          <span></span>
        </h2>
        <!-- /Header -->
        <!-- Posts list -->
        <ul class="post-list post-list-1">
        <?php
        /* Fetches date/time, post content and title */
        include 'dbconnect.php';
        $sql = "SELECT * FROM wp_posts";
        $res = mysql_query($sql);
        while ($row = mysql_fetch_array($res)) {
        ?>
          <!-- Post #1 -->
          <li class="clear-fix">
            <!-- Date -->
            <div class="post-list-date">
              <div class="post-date-box">
                <?php
                // Timestamp broken down to show accordingly
                $timestamp = $row['post_date'];
                $datetime  = new DateTime($timestamp);
                $date      = $datetime->format("d");
                $month     = $datetime->format("M");
                ?>
                <h3><?php echo $date; ?></h3>
                <span><?php echo $month; ?></span>
              </div>
            </div>
            <!-- /Date -->
            <!-- Image + comments count -->
            <div class="post-list-image">
              <!-- Image -->
              <div class="image image-overlay-url image-fancybox-url">
                <a href="post.php" class="preloader-image">
                  <?php echo '<img src="', $row['image'], '" alt="', $row['post_title'], '\'s Blog Image" />'; ?>
                </a>
              </div>
              <!-- /Image -->
            </div>
            <!-- /Image + comments count -->
            <!-- Content -->
            <div class="post-list-content">
              <div>
                <!-- Header -->
                <h4>
                  <a href="post.php?ID=<?php echo $row['ID']; ?>">
                    <?php echo $row['post_title']; ?>
                  </a>
                </h4>
                <!-- /Header -->
                <!-- Excerpt -->
                <p><?php echo $row['post_content']; ?></p>
                <!-- /Excerpt -->
              </div>
            </div>
            <!-- /Content -->
          </li>
          <!-- /Post #1 -->
        <?php } // close while loop ?>
        </ul>
        <!-- /Posts list -->
        <a href="blog.php" class="button-browse">Browse All Posts</a>
        </div>
        <?php require_once('include/twitter_user_timeline.php'); ?>

    blog.php:

        <?php require_once('include/header.php'); ?>
        <body class="blog">
        <?php require_once('include/navigation_bar_blog.php'); ?>
        <div class="blog">
          <div class="main">
            <!-- Header -->
            <h2 class="underline">
              <span>What&#039;s new</span>
              <span></span>
            </h2>
            <!-- /Header -->
            <!-- Layout 66x33 -->
            <div class="layout-p-66x33 clear-fix">
              <!-- Left column -->
              <!-- <div class="column-left"> -->
              <!-- Posts list -->
              <ul class="post-list post-list-2">
              <?php
              /* Fetches date/time, post content and title with pagination */
              include 'dbconnect.php';
              // Sets to default page
              if (empty($_GET['pn'])) {
                  $page = 1;
              } else {
                  $page = $_GET['pn'];
              }
              // Index of the page
              $index = ($page - 1) * 3;
              $sql = "SELECT * FROM `wp_posts` ORDER BY `post_date` DESC LIMIT " . $index . ",3";
              $res = mysql_query($sql);
              // Loops through the values
              while ($row = mysql_fetch_array($res)) {
              ?>
                <!-- Post #1 -->
                <li class="clear-fix">
                  <!-- Date -->
                  <div class="post-list-date">
                    <div class="post-date-box">
                      <?php
                      // Timestamp broken down to show accordingly
                      $timestamp = $row['post_date'];
                      $datetime  = new DateTime($timestamp);
                      $date      = $datetime->format("d");
                      $month     = $datetime->format("M");
                      ?>
                      <h3><?php echo $date; ?></h3>
                      <span><?php echo $month; ?></span>
                    </div>
                  </div>
                  <!-- /Date -->
                  <!-- Image + comments count -->
                  <div class="post-list-image">
                    <!-- Image -->
                    <div class="image image-overlay-url image-fancybox-url">
                      <a href="post.php" class="preloader-image">
                        <?php echo '<img src="', $row['image'], '" alt="', $row['post_title'], '\'s Blog Image" />'; ?>
                      </a>
                    </div>
                    <!-- /Image -->
                  </div>
                  <!-- /Image + comments count -->
                  <!-- Content -->
                  <div class="post-list-content">
                    <div>
                      <?php
                      $id   = $_GET['ID'];
                      $post = lookup_post_somehow($id);
                      if ($post) {
                          // render post
                      } else {
                          echo 'blog post not found..';
                      }
                      ?>
                      <!-- Header -->
                      <h4>
                        <a href="post.php"><?php echo $row['post_title']; ?></a>
                      </h4>
                      <!-- /Header -->
                      <!-- Excerpt -->
                      <p><?php echo $row['post_content']; ?></p>
                      <!-- /Excerpt -->
                    </div>
                  </div>
                  <!-- /Content -->
                </li>
                <!-- /Post #1 -->
              <?php } // close while loop ?>
              </ul>
              <!-- /Posts list -->
              <div><!-- Pagination -->
                <ul class="blog-pagination clear-fix">
                  <?php
                  // Count the number of rows
                  $countRes     = mysql_query("SELECT COUNT(ID) FROM `wp_posts`");
                  $numberofrows = mysql_result($countRes, 0);
                  // Use ceil() to round up according to the number of posts
                  $postsperpage = 3; // must match the LIMIT clause above
                  $numOfPages   = ceil($numberofrows / $postsperpage);
                  for ($i = 1; $i <= $numOfPages; $i++) {
                      // Echoes links for each page
                      $paginationDisplay = '<li><a href="blog.php?pn=' . $i . '">' . $i . '</a></li>';
                      echo $paginationDisplay;
                  }
                  ?>
                  <!-- <li><a href="#" class="selected">1</a></li>
                       <li><a href="#">2</a></li>
                       <li><a href="#">3</a></li>
                       <li><a href="#">4</a></li> -->
                </ul>
              </div><!-- /Pagination -->
              <!-- </div> --> <!-- Left column -->
            </div>
            <!-- /Layout 66x33 -->
          </div>
        </div>
        <?php require_once('include/twitter_user_timeline.php'); ?>
        <?php require_once('include/footer_blog.php'); ?>

    How do I render the particular post that was clicked?


  • Auto-hydrate your objects with ADO.NET

    - by Jake Rutherford
    Recently, while writing the monotonous code for pulling data out of a DataReader to hydrate some objects in an application, I suddenly wondered "is this really necessary?" You've probably asked yourself the same question, and many of you have:

    - used a code generator
    - used an ORM such as Entity Framework
    - written the code anyway because you like busy work

    In most of the cases I've dealt with, when making a call to a stored procedure the column names match up with the properties of the object I am hydrating. Sure, that isn't always the case, but most of the time it's a 1-to-1 mapping. Given that fact, I whipped up the following method of hydrating my objects without having to write all of the code. First I'll show the code, and then explain what it is doing.

        /// <summary>
        /// Abstract base class for all Shared objects.
        /// </summary>
        /// <typeparam name="T"></typeparam>
        [Serializable, DataContract(Name = "{0}SharedBase")]
        public abstract class SharedBase<T> where T : SharedBase<T>
        {
            private static List<PropertyInfo> cachedProperties;

            /// <summary>
            /// Hydrates derived class with values from record.
            /// </summary>
            /// <param name="dataRecord"></param>
            /// <param name="instance"></param>
            public static void Hydrate(IDataRecord dataRecord, T instance)
            {
                var instanceType = instance.GetType();

                // Caching properties to avoid repeated calls to GetProperties.
                // Noticeable performance gains when processing same types repeatedly.
                if (cachedProperties == null)
                {
                    cachedProperties = instanceType.GetProperties().ToList();
                }

                foreach (var property in cachedProperties)
                {
                    if (!dataRecord.ColumnExists(property.Name)) continue;
                    var ordinal = dataRecord.GetOrdinal(property.Name);
                    var isNullable = property.PropertyType.IsGenericType &&
                                     property.PropertyType.GetGenericTypeDefinition() == typeof(Nullable<>);
                    var isNull = dataRecord.IsDBNull(ordinal);
                    var propertyType = property.PropertyType;
                    if (isNullable)
                    {
                        if (!string.IsNullOrEmpty(propertyType.FullName))
                        {
                            var nullableType = Type.GetType(propertyType.FullName);
                            propertyType = nullableType != null ? nullableType.GetGenericArguments()[0] : propertyType;
                        }
                    }
                    switch (Type.GetTypeCode(propertyType))
                    {
                        case TypeCode.Int32:
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (int?)null : dataRecord.GetInt32(ordinal), null);
                            break;
                        case TypeCode.Double:
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (double?)null : dataRecord.GetDouble(ordinal), null);
                            break;
                        case TypeCode.Boolean:
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (bool?)null : dataRecord.GetBoolean(ordinal), null);
                            break;
                        case TypeCode.String:
                            property.SetValue(instance, isNull ? null : dataRecord.GetString(ordinal), null);
                            break;
                        case TypeCode.Int16:
                            // Cast to short? (not int?) so the boxed value matches Int16 properties
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (short?)null : dataRecord.GetInt16(ordinal), null);
                            break;
                        case TypeCode.DateTime:
                            property.SetValue(instance,
                                              (isNullable && isNull) ? (DateTime?)null : dataRecord.GetDateTime(ordinal), null);
                            break;
                    }
                }
            }
        }

    Here is a class which utilizes the above:

        [Serializable]
        [DataContract]
        public class foo : SharedBase<foo>
        {
            [DataMember]
            public int? ID { get; set; }
            [DataMember]
            public string Name { get; set; }
            [DataMember]
            public string Description { get; set; }
            [DataMember]
            public string Subject { get; set; }
            [DataMember]
            public string Body { get; set; }

            public foo(IDataRecord record)
            {
                Hydrate(record, this);
            }

            public foo() {}
        }

    Explanation:

    - Class foo inherits from SharedBase, specifying itself as the type. (Note: SharedBase is abstract here in the event we want to provide additional methods which could be overridden by the instance class.)

          public class foo : SharedBase<foo>

    - One of the foo class constructors accepts a data record, which then calls the Hydrate method on SharedBase, passing in the record and itself.

          public foo(IDataRecord record)
          {
              Hydrate(record, this);
          }

    - The Hydrate method on SharedBase uses reflection on the object passed in to determine its properties. At the same time, it effectively caches these properties to avoid repeated expensive reflection calls.

          public static void Hydrate(IDataRecord dataRecord, T instance)
          {
              var instanceType = instance.GetType();
              // Caching properties to avoid repeated calls to GetProperties.
              if (cachedProperties == null)
              {
                  cachedProperties = instanceType.GetProperties().ToList();
              }
          . . .

    - Hydrate iterates each property on the object and determines whether a column with a matching name exists in the data record.

          foreach (var property in cachedProperties)
          {
              if (!dataRecord.ColumnExists(property.Name)) continue;
              var ordinal = dataRecord.GetOrdinal(property.Name);
          . . .

      Note: ColumnExists is an extension method I put on IDataRecord, which I'll include at the end of this post.

    - Hydrate determines whether the property is nullable and whether the value in the corresponding column of the data record is null.

          var isNullable = property.PropertyType.IsGenericType &&
                           property.PropertyType.GetGenericTypeDefinition() == typeof(Nullable<>);
          var isNull = dataRecord.IsDBNull(ordinal);
          var propertyType = property.PropertyType;
          . . .

    - If Hydrate determines the property is nullable, it determines the underlying type and sets propertyType accordingly.
    - Hydrate sets the value of the property based upon propertyType.

    That's it!!!

    The magic here is in a few places. First, you may have noticed the following:

        public abstract class SharedBase<T> where T : SharedBase<T>

    This says that SharedBase can be created with any type, and that for each type it will have its own instance. This is important because of the static members within SharedBase. We want this behavior because we are caching the properties for each type. If we did not handle things this way, only one type could be cached at a time, or we'd need to create a collection that caches the properties for each type, which is not very elegant.

    Second, in the constructor for foo you may have noticed this (literally):

        public foo(IDataRecord record)
        {
            Hydrate(record, this);
        }

    I wanted the code for auto-hydrating to be as simple as possible. At first I wasn't quite sure how I could call Hydrate on SharedBase within an instance of the class and pass in the instance itself. Fortunately, simply passing in "this" does the trick. I wasn't sure it would work until I tried it out, and fortunately it did.

    So, to actually use this feature when utilizing ADO.NET, you'd do something like the following:

        public List<foo> GetFoo(int? fooId)
        {
            List<foo> fooList;
            const string uspName = "usp_GetFoo";
            using (var conn = new SqlConnection(_dbConnection))
            using (var cmd = new SqlCommand(uspName, conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add(new SqlParameter("@FooID", SqlDbType.Int)
                                       { Direction = ParameterDirection.Input, Value = fooId });
                conn.Open();
                using (var dr = cmd.ExecuteReader())
                {
                    fooList = (from row in dr.Cast<DbDataRecord>()
                               select new foo(row)).ToList();
                }
            }
            return fooList;
        }

    Nice! Instead of line after line manually assigning values from the data record to an object, you simply create a new instance and pass in the data record. Note that there are certainly instances where columns returned from a stored procedure do not match up with property names. In that scenario you can still use the above method and simply do your manual assignments afterward.
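    The excerpt cuts off before the promised ColumnExists listing. A minimal version of such an extension method might look like this (my reconstruction, not necessarily the author's original):

        public static class DataRecordExtensions
        {
            // Returns true if the record contains a column with the given name.
            public static bool ColumnExists(this IDataRecord dataRecord, string columnName)
            {
                for (var i = 0; i < dataRecord.FieldCount; i++)
                {
                    if (string.Equals(dataRecord.GetName(i), columnName, StringComparison.OrdinalIgnoreCase))
                    {
                        return true;
                    }
                }
                return false;
            }
        }

    A linear scan per property is fine for small result sets; for very wide records you could build a HashSet of column names once per record instead.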


  • CodePlex Daily Summary for Friday, August 31, 2012

    CodePlex Daily Summary for Friday, August 31, 2012

    Popular Releases

    - StartComp: Beta Release 1.0.0: Beta Release 1. Featured content: Bing-Search has been removed; window anchor implemented; the listview can now be configured to be shown in details view or tile view through the context menu; the listview now allows sorting through the context menu; the view, sort order and sort column are now saved for each repository; the listview now shows the background image in the lower right; the listview now shows a background image for the user-defined repositories; added a "Tell-A-Friend" bu...
    - SharePoint Column & View Permission: SharePoint Column and View Permission v1.2: Version 1.2 of this project. If you find any bugs please let me know at enti@zoznam.sk or post your findings in the Issue Tracker.
    - DotNetNuke® Form and List: 06.00.04: Don't forget to backup your installation before upgrade. Changes in 06.00.04: fix: SQL scripts for 6.003 missed object qualifiers within stored procedures; fix: added missing resource "cmdCancel.Text" in form.ascx.resx. Changes in 06.00.03: fix: MakeThumbnail was broken if the application pool was configured to .NET 4; change: data is now stored in nvarchar(max) instead of ntext. Changes in 06.00.02: the scripts are now compatible with SQL Azure, tested in a ne...
    - DotNetNuke Translator: 01.00.00 Beta: First release of the project.
    - Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.2: Fixed several issues with streaming mode.
    - UrlPager: UrlPager 1.2: Fixed a bug in which URL parameters were lost after paging.
    - EntLib.com: EntLib.com v3.0: EntLib eCommerce Solution, based on the Microsoft .NET Framework.
    - Coevery - Free CRM: Coevery 1.0.0.24: Adds a sample database and installation instructions.
    - NicAudio: NicAudio 2.0.6: AC3/DTS: solved some initialization issues with non-linear decode.
    - ExpressProfiler: Initial release of ExpressProfiler v1.2: This is the initial release of ExpressProfiler.
    - Nabu Library: 2012-08-29, 14: .NET Framework 4.0 and .NET Framework 4.5 Debug and Release builds.
    - Math.NET Numerics: Math.NET Numerics v2.2.1: Major linear algebra rework since v2.1, now available on CodePlex as well (previous versions were only available via NuGet). Since v2.2.0: Student-T density more robust for very large degrees of freedom; sparse Kronecker product much more efficient (now leverages sparsity); direct access to raw matrix storage implementations for advanced extensibility; now also a separate package for a signed core library with a strong name (we dropped strong names in v2.2.0). Also available as NuGet packages...
    - Microsoft SQL Server Product Samples: Database: AdventureWorks Databases – 2012, 2008R2 and 2008: This release consolidates AdventureWorks databases for SQL Server 2012, 2008R2 and 2008 versions on one page. Each zip file contains an mdf database file and an ldf log file. This should make it easier to find and download AdventureWorks databases, since all OLTP versions are on one page. There are no database schema changes. For each release of the product there is a light-weight and a full version of the AdventureWorks sample database. The light-weight version is denoted by ...
    - DotNetNuke® Blog: 05.00.00: Version 5.0.0 Final. This version of the module requires DotNetNuke Core 6.2 or greater. FYI: developers should be aware that the module uses Visual Studio 2010 only. Release highlights: corrected blog comment sorting problem; 20228: integrated with the core Journal API; 20789, 21988: wired in a fix submitted by J Sheely around blank author names; 20210: updated manifest to 5.0 format (from 3.0); automated packaging and made project structure more in line with other DotNetNuke m...
    - Christoc's DotNetNuke Module Development Template: DotNetNuke Project Templates V1.1 for VS2012: This release is specifically for Visual Studio 2012 support, distributed through the Visual Studio Extensions gallery at http://visualstudiogallery.msdn.microsoft.com/. After you build in Release mode, the installable packages (source/install) can now be found in the INSTALL folder within your module's folder, not the packages folder anymore. Check out the blog post for all of the details about this release: http://www.dotnetnuke.com/Resources/Blogs/EntryId/3471/New-Visual-Studio-2012-Projec...
    - Home Access Plus+: v8.0: v8.0828.1800. RELEASE CHANGED TO BETA. Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/. This is a full release; no upgrade ZIP will be provided, as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder, and run your HAP+ install. Documentation is supplied in the Web Zip. The Quota Services require executing a script to register the service, which can be found in their install di...
    - Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3406 (September 2012): New features: extended ReflectionClass; libxml error handling and constants; DateTime::modify() and DateTime::getOffset(); TreatWarningsAsErrors MSBuild option; OnlyPrecompiledCode configuration option, which allows use of only compiled code. Fixes: ArgsAware exception fix; accessing .NET properties bug fix; ASP.NET session handler fix for OutOfProc mode; DateTime methods (WordPress posting fix). Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012, new debugger engine, PHP-like debugging ...
    - MabiCommerce: MabiCommerce 1.0.1: What's new: setup now creates shortcuts; fixed spelling errors; minor enhancement to the Map window.
    - ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive, but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application.)...
    - BlackJumboDog: Ver5.7.1: Released 2012.08.25.

    New Projects

    - AbcLibrary: A library of methods and class types used for ABC.
    - Aprendendo Windows 8: Nothing has been done yet...
    - Auto fill template generator (word): This program was designed to help automate the generation of files using keywords.
    - ClarkTestCodePlex2: Clark test.
    - Code Razor: This tool translates Razor files to code, which allows the Razor views to be compiled and shared across projects.
    - Contrib.Mod.ChangePassword: An evil module that abuses user rights and lets you change anyone's password.
    - CurrentConsumption: CurrentConsumption.
    - DbSettings - An API to store settings in a database: Stores settings in an OleDb/SQL database using an API similar to ApplicationSettingsBase. Settings vary by app, version, and user.
    - JCI prototipos: Summary.
    - MemberAdminService: This is a test project.
    - Meteor Rendering Engine: The Meteor rendering engine is developed in C# with XNA 4.0, and provides various rendering outputs for 3D scenes.
    - Mod.Colorbox: Orchard module for Mod.Colorbox.
    - MogulTestProject1: papa.
    - MogulTestTRY: papa.
    - osmm: This is a sample test project.
    - Server Survey: Server Survey Script.
    - Shops' Cloud: This project is a cloud platform for mini shops' daily management.
    - Simple Grocery 5: A very simple application to help me (or you) set up a grocery list and use it at the food market on all smartphones or tablets.
    - Tikun Korim: Community site to help people learn to read from a Sefer Torah. This project is going to use ASP.NET MVC 4 and as much open source as we can.
    - TreeCreeper: TreeCreeper programs (Spatial and NonSpatial) support the taxonomic analysis of species assemblages.
    - Visual Studio Icon Patcher: Allows you to update Visual Studio 2012 with the Solution Explorer icons from Visual Studio 2010.
    - WPT Generator: HTML5, Google API 3.0, JavaScript and CSS 3.0 web application for generating a WPT file (OziExplorer format).


  • Reference Data Management

    - by rahulkamath
    Reference Data Management

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data?

    Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets, including master data, and give them contextual value. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards. The following table contains an illustrative list of examples of reference data by type.

    | Types & Codes | Taxonomies | Relationships / Mappings | Standards |
    | Transaction Codes | Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS) | Product / Segment; Product / Geo | Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601) |
    | Lookup Tables (e.g., Gender, Marital Status, etc.) | Product Categories | City → State → Postal Codes | Currency Codes (e.g., ISO) |
    | Status Codes | Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense) | Customer / Market Segment; Business Unit / Channel | Country Codes (e.g., ISO 3166, UN) |
    | Role Codes | Market Segments | Country Codes / Currency Codes / Financial Accounts | Date/Time, Time Zones (e.g., ISO 8601) |
    | Domain Values | Universal Standard Products and Services Classification (UNSPSC), eCl@ss | International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings | Tax Rates |

    Why manage reference data?

    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much in the same way as master data management does. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to "cross-walk" mappings, either forward to ICD-10 or backwards to ICD-9, depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Specialty Finance: Point of Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws, governed at the state and local level, that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical challenge they encounter with reference data is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.
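    To make the cross-walk idea concrete, here is a minimal sketch of how such a code mapping might be modeled in plain SQL; the table, column names, and sample code are illustrative only and not taken from any Oracle DRM schema:

        -- Illustrative ICD-9 to ICD-10 cross-walk table with effective dating,
        -- so reports can map forward or backward depending on the time horizon.
        CREATE TABLE code_crosswalk (
            source_scheme  VARCHAR(10) NOT NULL,   -- e.g., 'ICD-9'
            source_code    VARCHAR(10) NOT NULL,
            target_scheme  VARCHAR(10) NOT NULL,   -- e.g., 'ICD-10'
            target_code    VARCHAR(10) NOT NULL,
            valid_from     DATE        NOT NULL,
            valid_to       DATE,                   -- NULL = still current
            approved_by    VARCHAR(50),            -- mappings need human sign-off
            PRIMARY KEY (source_scheme, source_code, target_scheme, target_code, valid_from)
        );

        -- Resolve a diagnosis code as of a given reporting date
        SELECT target_code
        FROM   code_crosswalk
        WHERE  source_scheme = 'ICD-9'
        AND    source_code   = '250.00'
        AND    valid_from   <= DATE '2012-01-01'
        AND    (valid_to IS NULL OR valid_to > DATE '2012-01-01');

    The effective-date columns are what allow the "forward or backward" cross-walk the healthcare example describes: the same table answers both ICD-9 → ICD-10 and historical ICD-10 → ICD-9 questions.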


  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and, implicitly, the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well, one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings".

    What's an adaptive binding? Well, consider a simple use case. I have several screens within my application which display tabular data, and they are all essentially identical; the only difference is that they happen to be based on different data collections (View Objects, bean collections, whatever) and have a different set of columns. Apart from that, they are identical: same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on, you say, great idea; however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file, and:

    - I want to continue to use the ADF bindings mechanism rather than dropping back to passing the whole collection into the task flow
    - If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle

    Ah, joy! I reply, no need to panic, you can just use adaptive bindings.

    Defining an Adaptive Binding

    It's easiest to explain with a simple before-and-after use case. Here's a basic pageDef definition for our familiar Departments table:

        <executables>
          <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="DepartmentsView1Iterator"/>
        </executables>
        <bindings>
          <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">
            <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">
              <AttrNames>
                <Item Value="DepartmentId"/>
                <Item Value="DepartmentName"/>
                <Item Value="ManagerId"/>
                <Item Value="LocationId"/>
              </AttrNames>
            </nodeDefinition>
          </tree>
        </bindings>

    Here's the adaptive version:

        <executables>
          <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"
                    id="TableSourceIterator"/>
        </executables>
        <bindings>
          <tree IterBinding="TableSourceIterator" id="GenericView">
            <nodeDefinition Name="GenericViewNode"/>
          </tree>
        </bindings>

    You'll notice three changes here:

    1. Most importantly, the hard-coded View Object name that formerly populated the iterator's Binds attribute is gone, replaced by an expression (${pageFlowScope.voName}). This, of course, is key: we can pass a parameter to the task flow telling it exactly what VO to instantiate to populate this table!
    2. I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable.
    3. The tree binding itself has been simplified, and the node definition is now empty. What this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection. Neat!

    (Kudos to Eugene Fedorenko at this point, who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year.)

    Using the adaptive binding in the UI

    Now that we have a parametrized binding, we have to make changes in the UI as well: first of all to reflect the new ID that we've assigned to the binding (of course), but also to change the column list from a fixed, known list to a generic, metadata-driven set:

        <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"
                  fetchSize="#{bindings.GenericView.rangeSize}"
                  emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"
                  var="row" rowBandingInterval="0"
                  selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"
                  selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"
                  rowSelection="single" id="t1">
          <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">
            <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"
                       sortProperty="#{def.name}" id="c1">
              <af:outputText value="#{row[def.name]}" id="ot1"/>
            </af:column>
          </af:forEach>
        </af:table>

    Of course, you are not constrained to a simple read-only table here. It's a normal tree binding and iterator that you are using behind the scenes, so you can do all the usual things, and you can see the value of using ADF BC as the back-end model, as you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...).

    One Final Twist

    To finish on a high note, I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator; see if you can notice the subtle change?

        <iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25"
                  id="TableSourceIterator"/>

    Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. If you have some really common patterns within your app, you can wrap them up and reuse them across multiple developments without having to dictate data control names or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs. If you keep any code in the task flow generic in that way, you can deal with data from multiple types of data controls, not just one flavour. Enjoy!
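    For completeness, the voName and dataControlName values would typically arrive as task flow input parameters stored in pageFlowScope. A rough sketch of the relevant task-flow-definition entries follows; the element names are from memory of the ADF task flow schema and the parameter names are just the ones used in this post, so treat this as an assumption rather than verified syntax:

        <!-- In the task flow definition: declare the parameters that
             the ${pageFlowScope...} expressions in the pageDef consume. -->
        <task-flow-definition id="generic-table-flow">
          <input-parameter-definition>
            <name>voName</name>
            <value>#{pageFlowScope.voName}</value>
            <class>java.lang.String</class>
            <required/>
          </input-parameter-definition>
          <input-parameter-definition>
            <name>dataControlName</name>
            <value>#{pageFlowScope.dataControlName}</value>
            <class>java.lang.String</class>
          </input-parameter-definition>
          <!-- views, control flows, etc. -->
        </task-flow-definition>

    The caller then supplies the VO and data control names when it invokes the flow, which is what lets one flow definition serve every look-alike table screen.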


  • SQL SERVER – History of SQL Server Database Encryption

    - by pinaldave
    I recently met Michael Coles and Rodney Landrum, the authors of the one-of-a-kind book Expert SQL Server 2008 Encryption, at SQLPASS in Seattle. During the conversation we ended up discussing how Microsoft is evolving encryption technology, and the same discussion led to the history of encryption tools in SQL Server. Michael pointed me to page 18 of his book on encryption, and he explicitly gave me permission to reproduce the relevant part of the history from his book.

    Encryption in SQL Server 2000

    Built-in cryptographic encryption functionality was nonexistent in SQL Server 2000 and prior versions. In order to get server-side encryption in SQL Server you had to resort to purchasing or creating your own SQL Server XPs. Creating your own cryptographic XPs could be a daunting task, owing to the fact that XPs had to be compiled as native DLLs (using a language like C or C++) and the XP application programming interface (API) was poorly documented. In addition, there were always concerns around creating well-behaved XPs that "played nicely" with the SQL Server process.

    Encryption in SQL Server 2005

    Prior to the release of SQL Server 2005 there was a flurry of regulatory activity in response to accounting scandals and attacks on repositories of confidential consumer data. Much of this regulation centered on the need for protecting and controlling access to sensitive financial and consumer information. With the release of SQL Server 2005, Microsoft responded to the increasing demand for built-in encryption by providing the necessary tools to encrypt data at the column level. This functionality prominently featured the following:

    - Support for column-level encryption of data using symmetric keys or passphrases.
    - Built-in access to a variety of symmetric and asymmetric encryption algorithms, including AES, DES, Triple DES, RC2, RC4, and RSA.
    - Capability to create and manage symmetric keys.
    - Ability to generate asymmetric keys and self-signed certificates, or to install external asymmetric keys and certificates.
    - Implementation of a hierarchical model for encryption key management, similar to the ANSI X9.17 standard model.
    - SQL functions to generate one-way hash codes and digital signatures, including SHA-1 and MD5 hashes.
    - Additional SQL functions to encrypt and decrypt data.
    - Extensions to the SQL language to support creation, use, and administration of encryption keys and certificates.
    - SQL CLR extensions that provide access to .NET-based encryption functionality.

    Encryption in SQL Server 2008

    Encryption demands have increased over the past few years. For instance, there has been a demand for the ability to store encryption keys "off-the-box," physically separate from the database and the data it contains. Also, there is a recognized requirement for legacy databases and applications to take advantage of encryption without changing the existing code base. To address these needs, SQL Server 2008 adds the following features to its encryption arsenal:

    - Transparent Data Encryption (TDE): allows you to encrypt an entire database, including log files and the tempdb database, in such a way that it is transparent to client applications.
    - Extensible Key Management (EKM): allows you to store and manage your encryption keys on an external device known as a hardware security module (HSM).
    - Cryptographic random number generation functionality.
    - Additional cryptography-related catalog views and dynamic management views.
    - SQL language extensions to support the new encryption functionality.
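    Since the excerpt stays at the feature-list level, here is a minimal T-SQL sketch of the SQL Server 2005-style column-level encryption workflow described above; the key, certificate, table, and column names are invented for the example:

        -- Key hierarchy: database master key -> certificate -> symmetric key
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StR0ng!Passphrase';
        CREATE CERTIFICATE CreditCardCert WITH SUBJECT = 'Credit card encryption';
        CREATE SYMMETRIC KEY CreditCardKey
            WITH ALGORITHM = AES_256
            ENCRYPTION BY CERTIFICATE CreditCardCert;

        -- Encrypt on write...
        OPEN SYMMETRIC KEY CreditCardKey DECRYPTION BY CERTIFICATE CreditCardCert;
        UPDATE dbo.Customers
        SET    CardNumberEnc = EncryptByKey(Key_GUID('CreditCardKey'), CardNumber);

        -- ...and decrypt on read
        SELECT CONVERT(varchar(20), DecryptByKey(CardNumberEnc)) AS CardNumber
        FROM   dbo.Customers;
        CLOSE SYMMETRIC KEY CreditCardKey;

    The OPEN/CLOSE pair is the visible cost of column-level encryption that SQL Server 2008's TDE removes: TDE encrypts the whole database at the page level with no changes to queries like these.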
    The encryption book covers all of these tools, across its various chapters, in one simple story. If you are interested in how encryption evolved and reached the stage where it is today, this book is a must for everyone. You can read my earlier review of the book over here. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology Tagged: Encryption, SQL Server Encryption, SQLPASS


  • How To Disable Control Panel in Windows 7

    - by Mysticgeek
    If you have a shared computer that your family and friends can access, you might not want them to mess around in the Control Panel, and luckily with a simple tweak you can disable it.

    Disable Control Panel with Group Policy

    Note: this process uses the Local Group Policy Editor, which is not available in Home versions of Windows 7. Skip down below for the registry hack version that works on Home editions as well.

    First, type gpedit.msc into the Search box in the Start menu and hit Enter. When the Local Group Policy Editor opens, navigate to User Configuration \ Administrative Templates, then select Control Panel in the left column. In the right column, double-click on Prohibit access to the Control Panel. In the next window, select Enabled, click OK, then close out of the Local Group Policy Editor. After the Control Panel is disabled, you'll notice it's no longer listed in the Start menu. If the user tries to type Control Panel into the Search box in the Start menu, they will get a message indicating it's restricted.

    Disable Control Panel with a Registry Tweak

    You can also tweak the Registry to disable the Control Panel. This will work with all versions of Windows 7, Vista, and XP. Making changes in the Registry is not recommended for beginners, and you should create a Restore Point or back up the Registry before making any changes.

    Type regedit into the Search box in the Start menu and hit Enter. In Registry Editor, navigate to HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer. Then right-click in the right pane and create a new DWORD (32-bit) Value. Name the value NoControlPanel, then right-click on the new value and click Modify. In the Value data field, change the value to 1, then click OK. Close out of Registry Editor and restart the machine to complete the process.

    When you get back from the reboot, you'll notice the Control Panel is no longer listed in the Start menu. If a user tries to access it by typing Control Panel into the Search box in the Start menu, they will get a message indicating it is restricted, just as if you had disabled it via Group Policy. If you want to re-enable the Control Panel, go back into the Registry and change the NoControlPanel value back to 0, then reboot the computer. This comes in handy if you have inexperienced users working on your machine and don't want them messing with Control Panel settings.
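    For the script-minded, the same registry tweak can be applied from an elevated command prompt with the stock reg.exe tool; a small sketch of both directions:

        :: Disable Control Panel for the current user
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" ^
            /v NoControlPanel /t REG_DWORD /d 1 /f

        :: Re-enable it later
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" ^
            /v NoControlPanel /t REG_DWORD /d 0 /f

    As with the manual edit, log off or restart for the change to take full effect.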


  • Visual Studio 2010 Productivity Power Tool Extensions

    - by ScottGu
    Last month I blogged about the Extension Manager that is built into VS 2010, as well as about a cool VS 2010 PowerCommands extension that provides some extra features for Visual Studio. The Visual Studio 2010 Extension Manager provides an easy way for developers to quickly find and install extensions and plugins that enhance the built-in functionality of VS 2010.

    New VS 2010 Productivity Power Tools Release

    Earlier this week Jason Zander announced the availability of a new VS 2010 Productivity Power Tools release that includes a bunch of great new VS 2010 extensions providing cool new functionality for you to take advantage of. You can download and install the release for free here. Some of the code editor improvements it provides include:

    - Entire Line Highlighting: makes it easier to track cursor location within the editor
    - Entire Line Selection: triple-clicking a line in the code editor now selects the entire line (as in MS Word)
    - Code Block Movement: Alt+Up/Down Arrow now moves selected code blocks up/down in the editor
    - Consistent Tabs vs. Spaces: ensure consistent tab vs. space usage across your projects
    - Colorized Parameters: it is now easier to see/identify method parameters
    - Column Guide: you can now add vertical column guidelines to help with text alignment and sizes
    - Align Assignments: makes it easier to line up multiple variable assignments within your code
    - HTML Clipboard Support: copy/paste code from VS into an HTML buffer (useful for blogging!)
    - Ctrl+Click Go to Definition: you can now hold down the Ctrl key and click a type to go to its definition

    It also includes several tab management improvements for managing document tabs within the IDE:

    - Show Close Button in Tab Well: shows a close button in the document well for the active tab (as VS 2008 did)
    - Colored Tabs: you can now select the color of each document tab by project or by regex
    - Pinned Tabs: enables you to pin tabs to keep them always visible and available
    - Vertical Tabs: you can now show document tabs vertically to fit more tabs than normal
    - Remove Tabs by Usage Order: better behavior when adding new tabs and one needs to be hidden for space reasons
    - Sort Tabs by Project: tabs can be sorted by the project they belong to, keeping them grouped together
    - Sort Tabs Alphabetically: tabs can be sorted alphabetically

    And last, but not least, it includes a new and improved "Add Reference" dialog. This new Add Reference dialog caches assembly information, which means it loads within a second or two (note: the very first time it still loads assembly data, but it then caches it and makes it fast afterwards). The new Add Reference dialog also now includes searching support, making it easier to find the assembly you are looking for. You can read more about all of the above improvements in Jason's blog post about the release.

    New Visualization and Modeling Feature Pack Release

    Earlier this week we also shipped a new feature pack that adds additional modeling and code visualization features to VS 2010 Ultimate. You can download it here. The Visualization and Modeling Feature Pack includes a bunch of great new capabilities, including:

    - Web Site Visualization: new support for generating a DGML visualization for ASP.NET projects
    - C/C++ Native Code Visualization: new support for generating DGML diagrams for C/C++ projects
    - Generate Code from UML Class Diagrams: you can now generate code from your UML diagrams
    - Create UML Class Diagrams from Code: create UML diagrams from existing code bases
    - Import UML from XML: import UML class, sequence, and use case elements from XMI 2.1 files
    - Custom Validation Layer Rules: write custom code to create, modify, and validate layer diagrams

    Jason's blog post covers more about these features as well.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • The Beginner’s Guide to Greasemonkey User Scripts in Firefox

    - by Asian Angel
    Everybody knows that Firefox has add-ons for virtually everything, but if you don't want to bloat your installation you've always got the option of Greasemonkey scripts instead. Here's a quick primer on how to use them.

    Getting Started with User Scripts

    Once you have Greasemonkey installed, managing the extension is really easy. Left-click on the status bar icon to turn the extension on/off, and right-click to access the context menu shown here. Whether you use the Options button in the Add-ons Manager window or the context menu shown above, both will bring up the Manage User Scripts dialog. At the moment you have a nice clean slate to work with… time to get some scripts added in.

    The majority of user scripts can be found at two different sites, the first being the appropriately named userscripts.org, where you can either browse by tag or search for a script. As you can see here, your search for a particular type of script can be quickly narrowed down based on category. There is definitely a lot to choose from. For our example we focused on the "textarea" tag. There were 62 scripts available, but we quickly found what we were looking for on the first page.

    Installing, Managing, & Using Your Scripts

    When you find a script that you want to install, visit the script's homepage and click on the "Install" button. Note: the link for this script is provided below. Once you have clicked on the Install button, Greasemonkey will open the following installation window, where you will be able to view:

    - A summary of what the script does
    - A list of websites that the script is supposed to function on (our example is set for all)
    - The script source, if desired
    - A final decision on whether to install the script or cancel the process

    Right-clicking on our status bar icon shows our new script listed and active. Reopening the Manage User Scripts window shows:

    - Our new script listed in the column on the left
    - The websites/pages included
    - An option to disable the script (can also be done in the context menu)
    - The ability to edit the script
    - The ability to uninstall the script

    If you choose to edit the script, you will be asked to browse for and select a default text editor of your choice (first time only). Once you have selected a text editor, you can make any changes desired to the script. We decided to test our new user script on the site. Going to the comment box at the bottom, we could easily resize the window as desired. The comment box definitely got a lot bigger.

    Conclusion

    If you prefer to keep the number of extensions to a minimum in your Firefox installation, then Greasemonkey and the Userscripts website can easily provide that extra functionality without the bloat. For added auto website script detection goodness, see our article on Greasefire. Note: see our article here for specialized How-To Geek user style scripts that can be added to Greasemonkey.
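    If you're curious what a user script actually looks like under the hood, here's a minimal hand-rolled example (not the Textarea & Input Resize script linked below); the ==UserScript== metadata block is what Greasemonkey reads at install time, and the name, namespace, and behavior are made up for illustration:

        // ==UserScript==
        // @name        Highlight Textareas
        // @namespace   http://example.com/userscripts
        // @description Give every textarea an obvious border
        // @include     *
        // ==/UserScript==

        // Runs on page load for every site matched by @include
        var areas = document.getElementsByTagName('textarea');
        for (var i = 0; i < areas.length; i++) {
            areas[i].style.border = '2px solid #c00';
        }

    Save that as something.user.js and Greasemonkey will offer the same installation dialog described above.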
    Links

    - Download the Greasemonkey Extension (Mozilla Add-ons)
    - Install the Textarea & Input Resize User Script
    - Visit the Userscripts.org Website
    - Visit the Userstyles.org Website


  • SQL SERVER – Auto Complete and Format T-SQL Code – Devart SQL Complete

    - by pinaldave
    Some people call it laziness, some will call it efficiency, and some think it is the right thing to do. At any rate, tools are meant to make a job easier, and I like to use various tools. If we consider the history of the world: if we had all wanted to keep traditional practices, we would never have invented the wheel. But as time progressed, people wanted convenience and efficiency. Wanting a more efficient way to do something is not inherently lazy. That's how I see any efficiency tool.

    A few days ago I found Devart SQL Complete. It took less than a minute to install, and after installation it just worked without needing any tweaking. Once I started using it I was impressed with how fast it formats SQL code: you can write code or copy and paste, start typing right away, and it will complete keywords, object names, and code fragments. It completes statement expressions. How many times do we write INSERT, UPDATE, DELETE? Take this example: to alter a stored procedure, you don't remember the code written in it, so you have to write it over again, or go back to SQL Server Management Studio to find the CREATE and ALTER statements, which is very tedious. With SQL Complete, you can write "alter stored procedure," and it will finish it for you, and you can modify it as needed.

    I love to write code, and I love well-written code. When I am working with clients and I find code that has not been written properly, I feel a little uncomfortable. It is difficult to deal with code that is in the wrong case, with no line breaks, no white space, improper indents, and no text wrapping. The worst thing to encounter is code that goes all the way to the right side, so that you have to scroll a million times because there are no breaks or indents. SQL Complete will take care of this for you: if a developer is too lazy for proper formatting, then Devart's SQL formatter tool will make them better, not lazier.

    SQL Server Management Studio gives information about your code when you hover your mouse over it; SQL Complete, however, goes further, giving you more information about parameters and, last but not least, taking you to code navigation help. It will open the object explorer in a document viewer, and you can start going through the various properties of your code, a very important thing to do.

    Here are some interesting IntelliSense examples:

    1. We are often too lazy to expand *; however, when using SQL Complete we can just mouse over the * and it will list all the column names, so we can select the appropriate columns.
    2. We can put the cursor after * and it will give us the option to expand it to all the column names by pressing the Tab key.
    3. Here is one more IntelliSense feature I really liked. I always alias my tables, and I always select the alias with special logic. When I was using SQL Complete I selected just a table name (without the schema name) and, just like in the image below, it autocompleted the schema and alias name (the way I needed it).

    I believe using SQL Complete we can work faster. It supports all versions of SQL Server and handles SQL formatting. Many businesses perform code reviews and have code standards, so why not use an efficiency tool on everyone's computer and make sure the code is written correctly the first time? If you're interested in this tool, there are free editions available. If you like it, you can buy it. I bought it because it works. I love it, and I want to hear all of your opinions on it, too. You can get the product for FREE.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • Text Expansion Awareness for UX Designers: Points to Consider

    - by ultan o'broin
Awareness of translated text expansion dynamics is important for enterprise applications UX designers (I am assuming all source text for translation is in English, though apps development can take place in other natural languages too). This consideration goes beyond the standard 'character multiplication' rule and must take into account the avoidance of other layout tricks that a designer might be tempted to try. Follow these guidelines.

For general text expansion, remember the simple rule that the shorter the word is in English, the longer it will need to be in the translation. See the examples provided by Richard Ishida of the W3C and you'll get the idea. So, forget the 30 percent or one inch minimum expansion rule of the old Forms days. Unfortunately, remembering convoluted text expansion rules, based on a percentage of the US English character count, can be tough going. Try these:

Up to 10 characters: 100 to 200%
11 to 20 characters: 80 to 100%
21 to 30 characters: 60 to 80%
31 to 50 characters: 40 to 60%
51 to 70 characters: 31 to 40%
Over 70 characters: 30%
(Source: IBM)

So it might be easier to remember a rule that if your English text is less than 20 characters then allow it to double in length (200 percent), and after that assume an increase of half the length of the text (50%). (Bear in mind that ADF can apply truncation rules on some components in English too.)

(If your text is stored in a database, developers must make sure the table column widths can accommodate the expansion of your text when translated, based on byte size for the translated characters and not numbers of characters. Use Unicode. One character does not equal one byte in the multilingual enterprise apps world. A sizing sketch follows at the end of this article.)

Rely on a graceful transformation of translated text. Let all pages resize dynamically so the text wraps and flows naturally. ADF pages support this already. Think websites.

Don't hard-code alignments. Use Start and End properties on components and not Left or Right. Don't force alignments of components on the page by using texts of a certain length as spacers. Use proper label positioning and anchoring in ADF components or other technologies.

Remember that an increase in text length means an increase in vertical space too when pages are resized. So don't hard-code vertical heights for any text areas.

Don't be tempted to manually create text or printed reports this way either. They cannot be translated successfully, and are very difficult to maintain in English. Use XML, HTML, RTF and so on. Check out what Oracle BI Publisher offers.

Don't force wrapping by using tricks such as \n or \t characters or HTML BR tags or forced page breaks. Once the text is translated the alignment will be destroyed. The position of the breaking character or tag would need to be moved anyway, or even removed.

When creating tables, use table components. Don't use manually created tables that rely on word length to maintain column and row alignment. For example, don't use codeblock elements in HTML; use the proper table elements instead. Once translated, the alignment of manually formatted tabular data is destroyed.

Finally, if there is a space restriction, then don't use made-up acronyms, abbreviations or some form of daft text speak to save space. Besides being incomprehensible in English, they may need full translations of the shortened words, even if they can be figured out. Use approved or industry standard acronyms according to the UX style rules, not as a space-saving device.

Restricted Real Estate on Mobile Devices

On mobile devices real estate is limited. Using shortened text is fine as long as it is comprehensible. Users in the mobile space prefer brevity too, as they are on the go, performing three-minute tasks, with no time to read lengthy texts. Using fragments, lightening up on unnecessary articles, and getting straight to the point with imperative forms of verbs makes sense on both real estate and user experience grounds.
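On the database column-width point above, a minimal sketch of sizing a translatable column with Unicode (the table is hypothetical; the 40-character width gives a 20-character English source the 200% headroom the rule of thumb suggests):

CREATE TABLE app_labels (
    label_key  VARCHAR(30)  NOT NULL,  -- internal key, never translated
    label_text NVARCHAR(40) NOT NULL   -- Unicode; 20-char English source doubled for expansion
);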

    Read the article

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
One of the most interesting uses for XAI and configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have had to build a Java program to do this with SQL and then use an application service, but it is now possible with just configuration.

The first step in the process is to create the SQL you want to use for the interface (see the sketch at the end of this post). It can be any valid static SQL, or it can use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into an Info Explorer Zone. You can use any of the explorer zone types, but I typically recommend F1-DE-SINGLE as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI for criteria (remember, explorer zones can be used on the user interface as well), but for web services they can be used as normal filters (this means you can use up to 20 filters all up).

Once you are happy with the zone, you now need to define it as a Business Service. We have a generic service called FWLZDEXP which allows an explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self-explanatory in terms of structure. The schema pattern looks like this:

Zone element - maps to the ZONE_CD element, and the default value is the zone name you just created. This links the business service to the zone.
Filter elements - you name the filters as you like, but the mapField is set to Fx_VALUE, where x is the filter number corresponding to the filter element in the zone definition.
Hidden filter elements - you name the filters as you like, but the mapField is set to Hx_VALUE, where x is the filter number corresponding to the hidden filter element in the zone definition.
results group - this holds the elements of the result set. Each element in your result set has a tag name and is linked to the COL_VALUE mapField, and the row element lists the SEQNO of the column. This corresponds to the column number in the result set of the zone.

An example is the F1-USGRACML zone, which returns the access modes for a user group and application service. In that example, the userGroup and applicationService elements are the filters and the rows contain a list of accessModeDescr. This is just a simple example to illustrate the point. There are lots of examples in the product that you can investigate. One recommendation, to save time, is to copy the schema from one of the examples to save you typing it from scratch; you can simply modify the tags and other elements to suit your needs.

Once the Business Service is defined, it can simply be exposed as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon an Info Explorer Zone. This is one of my favorite components, as it allows interfaces to be simplified.

This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year and I look forward to communicating exciting developments we are working on at the moment as they are released.
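As a concrete sketch of the first step, a filtered query of the kind a zone might wrap could look like this (the table, columns and filter binding are hypothetical, purely for illustration; :F1 stands for the value supplied through the zone's first filter):

SELECT prod_cd, descr, status_cd
FROM   ci_product
WHERE  status_cd = :F1      -- host variable bound to zone filter 1
ORDER  BY prod_cd;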

    Read the article

  • More SQL Smells

    - by Nick Harrison
Let's continue exploring some of the SQL Smells from the list Phil has been putting together.

Datatype mis-matches in predicates that rely on implicit conversion. (Plamen Ratchev)

This is a great example poking holes in the whole theory of "if it works, it's not broken". Queries like this will generally work and give the correct response. In fact, without careful analysis, you may be completely oblivious that there is even a problem. This subtle little problem will needlessly complicate queries and slow them down regardless of the indexes applied. Consider this example:

CREATE TABLE [dbo].[Page](
    [PageId] [int] IDENTITY(1,1) NOT NULL,
    [Title] [varchar](75) NOT NULL,
    [Sequence] [int] NOT NULL,
    [ThemeId] [int] NOT NULL,
    [CustomCss] [text] NOT NULL,
    [CustomScript] [text] NOT NULL,
    [PageGroupId] [int] NOT NULL);

CREATE PROCEDURE PageSelectBySequence
(
    @sequenceMin smallint,
    @sequenceMax smallint
)
AS
BEGIN
    SELECT [PageId]
         , [Title]
         , [Sequence]
         , [ThemeId]
         , [CustomCss]
         , [CustomScript]
         , [PageGroupId]
    FROM [CMS].[dbo].[Page]
    WHERE Sequence BETWEEN @sequenceMin AND @sequenceMax
END

Note that the Sequence column is defined as int while the sequence parameters are defined as smallint. The problem is that the database may have to do a lot of type conversions to evaluate the query. In some cases, this may even negate the indexes that you have in place.

Using correlated subqueries instead of a join. (Dave_Levy / Plamen Ratchev)

There are two main problems here. The first is a little subjective: since this is a non-standard way of expressing the query, it is harder to understand. The other problem is much more objective and potentially problematic. You are taking much of the control away from the optimizer. Written properly, such a query may well outperform a corresponding query written with traditional joins. More likely than not, performance will degrade. Whenever you assume that you know better than the optimizer, you will most likely be wrong. This is the fundamental problem with any hint. Consider a query like this:

SELECT Page.Title
     , Page.Sequence
     , Page.ThemeId
     , Page.CustomCss
     , Page.CustomScript
     , PageEffectParam.Name
     , PageEffectParam.Value
     , (SELECT EffectName
        FROM dbo.Effect
        WHERE EffectId = PageEffect.EffectId) AS EffectName
FROM Page
INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId

This can and should be written as:

SELECT Page.Title
     , Page.Sequence
     , Page.ThemeId
     , Page.CustomCss
     , Page.CustomScript
     , PageEffectParam.Name
     , PageEffectParam.Value
     , Effect.EffectName
FROM Page
INNER JOIN PageEffect ON Page.PageId = PageEffect.PageId
INNER JOIN PageEffectParam ON PageEffect.PageEffectId = PageEffectParam.PageEffectId
INNER JOIN dbo.Effect ON Effect.EffectId = PageEffect.EffectId

The correlated query may just as easily show up in the WHERE clause. It's not a good idea in the SELECT clause or the WHERE clause.

Few or no comments.

This one is a bit more complicated and controversial. All comments are not created equal. Some comments are helpful and need to be included. Other comments are not necessary and may indicate a problem. I tend to follow the rule of thumb that comments that explain why are good. Comments that explain how are bad. Many people may be shocked to hear the idea of a bad comment, but hear me out. If a comment is needed to explain what is going on or how it works, the logic is too complex and needs to be simplified. Comments that explain why the SQL is needed are good. Comments that explain where the SQL is used are good. Comments that explain how tables are related should not be needed if the SQL is well written. If they are needed, you need to consider reworking the SQL or simplifying your data model.

Use of functions in a WHERE clause. (Anil Das)

Calling a function in the WHERE clause will often negate the indexing strategy. The function will be called for every record considered. This will often force a full table scan on the tables affected. Calling a function does not guarantee a full table scan, but there is a good chance of one. If you find that you often need to write queries using a particular function, you may need to add a column to the table that has the function already applied.
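A minimal sketch of that last suggestion, using a hypothetical Customer table and the UPPER() function (a persisted computed column plus an index lets the optimizer seek instead of evaluating the function per row):

-- Instead of filtering with a function, e.g. WHERE UPPER(LastName) = 'SMITH',
-- store the function result as a persisted computed column and index it.
ALTER TABLE dbo.Customer
    ADD LastNameUpper AS UPPER(LastName) PERSISTED;

CREATE INDEX IX_Customer_LastNameUpper
    ON dbo.Customer (LastNameUpper);

-- Queries can now seek on the precomputed value:
SELECT CustomerId, LastName
FROM dbo.Customer
WHERE LastNameUpper = 'SMITH';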

    Read the article

  • #MIX Day 2 Keynote: Put the Phone Down and Listen

    - by andrewbrust
MIX day 1's keynote was all about Windows Phone 7 (WP7). MIX day 2's was a reminder that Microsoft has much more going on than a new mobile platform. Steven Sinofsky, Scott Guthrie, Doug Purdy and others showed us lots of other good things coming from Microsoft, mostly in the developer stack, that we certainly shouldn't overlook. These included the forthcoming IE9, its new JavaScript compiling engine, and support for HTML 5 that takes full advantage of the local PC's resources, including the Graphics Processing Unit. The announcements also included important additions to ASP.NET (and one subtraction, in the form of lighter-weight ViewState technology), including almost-obsessive jQuery support. That support is so good that John Resig, creator of the jQuery project, came on stage to tell us so. Then Scott Guthrie told us that Microsoft would be contributing code to the open source jQuery project. This is not your father's Microsoft, it would seem.

But to me, the crown jewel in today's keynote was the string of announcements around the Open Data Protocol (OData). OData is nothing more than the protocol side of "Astoria" (now known as WCF Data Services, and until recently called ADO.NET Data Services) separated out and opened up as a platform-neutral standard. The 2009 Professional Developers Conference (PDC) was Microsoft's vehicle for first announcing OData, as well as project "Dallas," an Azure-based cloud platform for publishing commercial OData feeds. And we had already known about "bridges" for Astoria (and thus OData) for PHP and Java. We also knew that PowerPivot, Microsoft's forthcoming self-service BI plug-in for Excel 2010, will consume OData feeds and then facilitate drill-down analysis of their data. And we recently found out that SQL Reporting Services reports (in the forthcoming SQL Server 2008 R2) and SharePoint 2010 lists will be consumable in OData format as well.

So what was left to announce? How about OData clients for Palm webOS and Apple iPhone/Objective C? How about the release to open source of .NET's OData client? Or the ability to publish any SQL Azure database as an OData service by simply checking a checkbox at deployment? Maybe even a Silverlight tool (code-named "Houston") to create SQL Azure databases (and then publish them as OData) right in the browser? And what if you could get at Netflix's entire catalog in OData format? You can – just go to http://odata.netflix.com/Catalog/ and see for yourself. Douglas Purdy, who made these announcements, said "we want OData to work on as many devices and platforms as possible." After all the cross-platform OData announcements made in about half a year's time, it's hard to dispute this.

When Microsoft plays the data card, and plays it well, watch out, because data programmability is the company's heritage. I'll be discussing OData at length in my April Redmond Review column. I wrote that column two weeks ago, and was convinced then that OData was a big deal. Today upped the ante even more. And following the Windows Phone 7 euphoria of yesterday was, I think, smart timing. The phone, if it's successful, will be successful because it's a good developer platform play. And developer platforms (as well as their creators) are most successful when they have a good data strategy. OData is very Silverlight-friendly, and that means it's WP7-friendly too. Phone plus service-oriented data is a one-two punch. A phone platform without data would have been a phone with no signal.

    Read the article

  • SQL SERVER – How to Compare the Schema of Two Databases with Schema Compare

    - by Pinal Dave
Earlier I wrote about An Efficiency Tool to Compare and Synchronize SQL Server Databases and it was very well received. Since that blog post I have received quite a few questions asking how, just as with data, we can also compare schema and synchronize it. If you think about comparing schemas manually, it is almost impossible to do so. A table's schema is just one part of the picture; if you really want all the schema of the database (triggers, views, stored procedures and everything else), it is an impossible task. If you are a developer or database administrator who works in a production environment, then you know that there are many different occasions when we have to compare the schemas of databases. Before deploying any changes to the production server, I personally like to make a note of every single schema change and document it, so that in case of any issue I can always go back and refer to my documentation. As discussed earlier, it is practically impossible to do this task without the help of third-party tools. I personally use Devart Schema Compare for this task. It is an extremely easy tool. Let us see how it works.

First, I have two different databases: a) AdventureWorks2012 and b) AdventureWorks2012-V1. There are a total of three changes between these databases (scripts that could produce changes like these appear at the end of this post). Here is the list:

One of the tables has an additional column
One of the tables has a new index
One of the stored procedures is changed

Now let us see how dbForge Schema Compare works in this scenario. First open dbForge Schema Compare studio. Click on New Schema Comparison. It will bring you to the following screen, where we configure the databases to compare. I have selected the AdventureWorks2012 and AdventureWorks2012-V1 databases. On the next screen we can verify various options, but for this demonstration we will keep them as they are. We will not change anything on the schema mapping screen, as in our case it is not required; generally, though, if you are comparing across schemas, you may need it. The next screen is the most important one, as on it we select which kinds of objects we want to compare. You can see the options which are available to select; the screen lets you select objects from SQL Server 2000 to SQL Server 2012. Once you click on compare, it will bring you to a screen which essentially displays the differences between the two databases we selected earlier. As mentioned above, there are three changes between the databases, and the same are listed here: two of the changes belong to tables and one belongs to a procedure. Let us click each of them one by one to see the differences. In the very first one we can see that there is an additional column in the second database which did not exist earlier. In the next example we can see that the AdventureWorks2012 database has an additional index. The following example is very interesting, as in this case we have changed the definition of the stored procedure, and the result pane reflects the change. dbForge Schema Compare very effectively identifies the changes in schema and lists them neatly for developers. Here is one more screen: this software not only compares the schema but also provides options to update or drop objects as you choose. I think this is a brilliant option.

Well, I have been using Schema Compare for quite a while and have found it very useful. Here are a few of the things which dbForge Schema Compare can do for developers and DBAs:

Compare and synchronize SQL Server database schemas
Compare schemas of a live database and a SQL Server backup
Generate comparison reports in Excel and HTML formats
Eliminate mistakes in schema change propagation across environments
Track production database changes and customizations
Automate migration of schema changes using the command line interface

I suggest that you try out dbForge Schema Compare and let me know what you think of this product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL
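For reference, differences like the three in this demo could be set up with scripts along these lines (the object names are hypothetical, purely to illustrate each kind of change):

-- 1) Add a column to one copy of a table
ALTER TABLE dbo.Person
    ADD MiddleName NVARCHAR(50) NULL;

-- 2) Add an index to one copy of a table
CREATE INDEX IX_Person_LastName
    ON dbo.Person (LastName);

-- 3) Change a stored procedure definition
ALTER PROCEDURE dbo.uspGetPerson @PersonID INT
AS
BEGIN
    SELECT PersonID, FirstName, LastName
    FROM dbo.Person
    WHERE PersonID = @PersonID;
END;
GO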

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts. There are two common approaches to managing upgrade scripts.

The first is to maintain a set of scripts as you go along. Many SQL developers I've encountered will store these in a folder, with a numeric prefix on each script to ensure they are run in the intended order. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment.

The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA is tasked with working out what changes have been made, and with hand-crafting the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed.

Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All of the above are examples of situations where developer intent is required to supplement the script generation engine.

SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts. In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.
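A minimal sketch of the column-split case above, written as the kind of hand-crafted migration script a diff engine cannot generate (the Contact table is hypothetical; the UPDATE carries the developer intent that no schema comparison can infer):

-- Schema change: split FullName into FirstName / LastName
ALTER TABLE dbo.Contact
    ADD FirstName NVARCHAR(50) NULL,
        LastName  NVARCHAR(50) NULL;
GO

-- Data migration: the algorithm no diff of static schema states can deduce
UPDATE dbo.Contact
SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName + ' ') - 1),
    LastName  = LTRIM(SUBSTRING(FullName, CHARINDEX(' ', FullName + ' '), 8000));
GO

ALTER TABLE dbo.Contact
    DROP COLUMN FullName;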

    Read the article

  • Visual Studio Exceptions dialogs

    - by Daniel Moth
Previously I covered step 1 of live debugging with start and attach. Once the debugger is attached, you want to go to step 2 of live debugging, which is to break. One way to break under the debugger is to do nothing, and just wait for an exception to occur in your code. This is true for all types of code that you debug in Visual Studio, so let's consider the following piece of C# code:

 3: static void Main()
 4: {
 5:     try
 6:     {
 7:         int i = 0;
 8:         int r = 5 / i;
 9:     }
10:     catch (System.DivideByZeroException) {/*gulp. sue me.*/}
11:     System.Console.ReadLine();
12: }

If you run this under the debugger, do you expect an exception on line 8? It is a trick question: you have to know whether I have configured the debugger to break when exceptions are thrown (first-chance exceptions) or only when they are unhandled. The place you do that is the Exceptions dialog, which is accessible from the Debug->Exceptions menu and on my installation looks like this: Note that I have checked all CLR exceptions. I could have expanded (as shown for the C++ case in my screenshot) and selected specific exceptions. To read more about this dialog, please read the corresponding Exception Handling debugging MSDN topic and all its subtopics. So, for the code above, the debugger will break execution due to the thrown exception (exactly as if the try..catch was not there), and I see the following Exception Thrown dialog: Note the following: I can hit continue (or hit break and then later continue) and the program will continue fine since I have a catch handler. If this were an unhandled exception, then that is what the dialog would say (instead of first-chance exception) and continuing would crash the app. That hyperlinked text ("Open Exception Settings") opens the Exceptions dialog I described further up. The coolest thing to note is the checkbox - this is new in this latest release of Visual Studio: it is a shortcut to the checkbox in the Exceptions dialog, so you don't have to open it to change this setting for this specific exception - you can toggle that option right from this dialog. Finally, if you try the code above on your system, you may observe a couple of differences from my screenshots. The first is that you may have an additional column of checkboxes in the Exceptions dialog. The second is that the last dialog I shared may look different to you. It all depends on the Debug->Options settings, and the two relevant settings are in this screenshot: The Exception Assistant is what configures the look of the UI when the debugger wants to indicate an exception to you, and the Just My Code setting controls the extra column in the Exceptions dialog. You can read more about those options on MSDN: How to break on User-Unhandled exceptions (plus Gregg's post) and Exception Assistant. Before I leave you to go play with this stuff a bit more, please note that this level of debugging is now available for JavaScript too, and if you are looking at the Exceptions dialog and wondering what the "GPU Memory Access Exceptions" node is about, stay tuned on the C++ AMP blog ;-) Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
Recently I came across an article on DB2 about using UNION instead of OR. So I thought of carrying out some research on SQL Server to see in which scenarios UNION is optimal and in which scenarios OR would be best. I will analyze this with a few scenarios, using samples taken from the AdventureWorks database Sales.SalesOrderDetail table.

Scenario 1: Selecting all columns

So we are going to select all columns, and you have a non-clustered index on the ProductID column.

--Query 1 : OR
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
   OR ProductID = 875 OR ProductID = 976 OR ProductID = 874

--Query 2 : UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874

So Query 1 is using OR and the latter is using UNION. Let us analyze the execution plans for these queries. As expected, Query 1 will use a Clustered Index Scan, but Query 2 uses all sorts of things. In this case, since it is using multiple CPUs, you might have CXPACKET waits as well. Let's look at the profiler results for these two queries:

        CPU   Reads   Duration   Row Count
OR       78    1252        389        3854
UNION   250    7495        660        3854

You can see from the above table that the UNION query is not performing as well as the OR query, though both are returning the same number of rows (3854). These results indicate that, for the above scenario, OR should be used.

Scenario 2: Non-clustered and clustered index columns only

--Query 1 : OR
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail
WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
   OR ProductID = 875 OR ProductID = 976 OR ProductID = 874
GO

--Query 2 : UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976
UNION
SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874
GO

So this time we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case, we will analyze the execution plans. Again, Query 2 is more complex than Query 1. Let us look at the profiler analysis:

        CPU   Reads   Duration   Row Count
OR        0      24        208        3854
UNION     0      38        193        3854

In this analysis, there is only a slight difference between OR and UNION.

Scenario 3: Selecting all columns for different fields

Up to now, we were using only one column (ProductID) in the WHERE clause. What if we have two columns in the WHERE clause, and let us assume both are covered by non-clustered indexes?

--Query 1 : OR
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

--Query 2 : UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

As we can see, the query plan for the second query has improved. Let us see the profiler results:

        CPU   Reads   Duration   Row Count
OR       47    1278        443        1228
UNION    31    1334        400        1228

So in this case too, there is little difference between OR and UNION.

Scenario 4: Selecting clustered index columns for different fields

Now let us go only with clustered indexes:

--Query 1 : OR
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

--Query 2 : UNION
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
UNION
SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. Let us see the profiler results for these queries again:

        CPU   Reads   Duration   Row Count
OR        0     319        366        1228
UNION     0      50        193        1228

Now see the differences: in this scenario UNION has somewhat of an advantage over OR.

Conclusion

Whether to use UNION or OR depends on the scenario you are faced with, so you need to do your analysis before selecting the appropriate method. Also, the four scenarios above are not an exhaustive list; I selected them for broad illustration purposes only.
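If you want to gather numbers like these without running Profiler, the standard STATISTICS settings report comparable CPU, read and elapsed-time figures (a minimal sketch; run each query variant in turn and compare the messages output):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Run the OR and UNION variants one at a time and compare the output
SELECT * FROM Sales.SalesOrderDetail
WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%';

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;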

    Read the article

  • Windows Phone 8, possible tablets and what the latest update might mean

    - by Roger Hart
Microsoft have just announced an update to Windows Phone 8. As one of the five, maybe six people who actually bought a WP8 handset, I found this interesting. Then I read the blog post about it, and rushed off to write somewhat less than a thousand words about a single picture. The blog post announces an extra column of tiles on the start screen, and support for higher resolutions. If we ignore all the usual flummery about how this will make your life better, that (and the rotation lock) sounds a little like stage setting for tablets. Looking at the preview screenshot, I started to wonder.

What it's called

Phablet_5F00_StartScreenProductivity_5F00_01_5F00_072A1240.jpg. Pretty conclusive. If you can brand something a "phablet" and sleep at night, you're made of sterner stuff than I am, but that's beside the point. It's explicit in the post that Microsoft are expecting a broader range of form factors for WP8, but they stop short of quite calling out tablet size. The extra columns and resolution definitely back that up, so why stop at a 6 inch "phablet"? Sadly, the string of numbers there doesn't really look like a Lumia model number – that would be a bit tendentious even for a speculative blog post about a single screenshot. "Productivity" is interesting too. I get into this a bit more below, but this is a pretty clear pitch for a business device.

What it looks like

Something that would look quite decent on a 7 inch screen, but a bit too vertical to go toe-to-toe with the Surface. Certainly, it would look a lot better on a large-factor phone than any of the current models. Those tiles are going to get cramped and a bit ugly if the handsets aren't getting bigger.

What's on it

You have a bunch of missed calls, you rarely text, use a stocks app, and your budget spreadsheet and meeting notes are a thumb-reach away. Outlook is your main form of email. You care enough about LinkedIn to not only install its app but give it a huge live tile. There's no beating about the bush here: the implicit persona is a corporate exec. With Nokia in the bag and Blackberry pushing daisies, that may not be a stupid play. There's almost certainly a niche there if they can parlay their corporate credentials into filling it before BYOD (which functionally means an iPhone) reaches the late adopters. The really quite slick WP8 Office implementation ought to help here. This is the face they've chosen to present, the cultural milieu they're normalizing for Windows Phone. It's an iPhone for Serious Business Grown-ups. Could work, I guess.

Does it mean anything?

Is the latest WP8 update a sign that we can expect to see tablets running Windows Phone rather than WinRT? Well, WinRT tablets haven't exactly taken off, but I'm not quite going to make a leap like that just from a file name and a column of icons. I feel pretty safe, however, conjecturing that Microsoft would like to squeeze a WP8 "phablet" into the palm of every exec who's ever grumbled about their Blackberry, and this release might get them a bit closer. If it works well incrementing up to larger devices, then that could be a fair hedge against WinRT crashing and burning any harder in the marketplace.

    Read the article

  • Normalisation and 'Anima notitia copia' (Soul of the Database)

    - by Phil Factor
(A Guest Editorial for Simple-Talk) The other day, I was staring at the sys.syslanguages table in SQL Server with slightly raised eyebrows. I'd just been reading Chris Date's interesting book 'SQL and Relational Theory'. He'd made the point that you're not necessarily doing relational database operations by using a SQL database product. The same general point was recently made by Dino Esposito about ASP.NET MVC: the use of ASP.NET MVC doesn't guarantee you a good application design, it merely makes it possible to test it. The way I'd describe the sentiment in both cases is 'you can hit someone over the head with a frying-pan but you can't call it cooking'. SQL enables you to create relational databases. However, even if it smells bad, it is no crime to do hideously un-relational things with a SQL database just so long as it's necessary and you can tell the difference; not only that, but also only if you're aware of the risks and implications. Naturally, I've never knowingly created a database that Codd would have frowned at, but around the edges are interfaces and data feeds I've written that have caused hissy fits amongst the normalisation fundamentalists.

Part of the problem for those who agonise about such things is the misinterpretation of atomicity. An atomic value is one for which, in the strange virtual universe you are creating in your database, you don't have any interest in any of its component parts. If you aren't interested in the electrons, neutrinos, muons, or taus, then an atom is ..er.. atomic. In the same way, if you are passed a JSON string or XML, and required to store it in a database, then all you need to do is to ask yourself, in your role as Anima notitia copia (Soul of the database), 'have I any interest in the contents of this item of information?'. If the answer is 'No!', or 'nequequam!', then it is an atomic value, however complex it may be. After all, you would never have the urge to store the pixels of images individually, under the misguided idea that these are the atomic values, would you? I would, of course, ask the 'Anima notitia copia' rather than the application developers, since there may be more than one application, and the application developers may be designing the application in the absence of full domain knowledge ('or by the seat of the pants', as the technical term used to be). If, on the other hand, the answer is 'sure, and we want to index the XML column', then we may be in for some heavy XML-shredding sessions to get to store the 'atomic' values and ensure future harmony as the application develops.

I went back to looking at the sys.syslanguages table. It has a months column with the months in a delimited list: January,February,March,April,May,June,July,August,September,October,November,December. This is an ordered list. Wicked? I seem to remember that this value, like shortmonths and days, is treated as a 'thing'. It is merely passed off to an external C++ routine in order to format a date in a particular language, and never accessed directly within the database. As far as the database is concerned, it is an atomic value. There is more to normalisation than meets the eye.
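If you want to raise your own eyebrows at the column in question, a quick query will show it (sys.syslanguages and these columns exist in any modern SQL Server; the WHERE clause simply narrows the output to one row):

SELECT name, months, shortmonths, days
FROM sys.syslanguages
WHERE name = 'us_english';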

    Read the article

  • Sweet and Sour Source Control

    - by Tony Davis
Most database developers don't use source control. A recent anonymous poll on SQL Server Central asked its readers "Which Version Control system do you currently use to store your database scripts?" The winner, with almost 30% of the vote, was... none: "We don't use source control for database scripts". In second place, with almost 28% of the vote, was Microsoft's VSS. VSS? Given its reputation for being buggy, unstable and lacking most of the basic features required of a proper source control system, answering VSS is really just another way of saying "I don't use source control".

At first glance, it's a surprising thought. You wonder how database developers can work in a team and find out what changed, when the system worked before but is now broken; work out what happened to their changes that now seem to have vanished; roll back a mistake quickly so that the rest of the team have a functioning build; or find instantly whether a suspect change has been deployed to production. Unfortunately, the survey didn't ask about the scale of the database development, and correlate the two questions. If there is only one database developer within a schema, who has an automated approach to regular generation of build scripts, then the need for a formal source control system is questionable. After all, a database stores far more of its own metadata than a traditional compiled application does. However, what is meat for a small development is poison for a team-based development. Here, we need a form of source control that can reconcile simultaneous changes, store the history of changes, derive versions and builds, and cope with forks and merges.

The problem comes when one borrows a solution that was designed for conventional programming. A database is not thought of as a "file", but as a vast, interdependent and intricate matrix of tables, indexes, constraints, triggers, enumerations, static data and so on, all subtly interconnected. It is an awkward fit. Subversion, with its support for merges and forks, and its tolerance of different work practices, can be made to work well, if used carefully. It has a standards-based architecture that allows it to be used on all platforms such as Windows, Mac, and Linux. In the words of Erland Sommarskog, developers should "just do it". What's in a database is akin to a "binary file", and the developer must work only from the file. You check out the file, edit it, and save it to disk to compile it. Dependencies are validated at this point, and if you've broken anything (e.g. you renamed a column and broke all the objects that reference the column), you'll find out about it right away, and you'll be forced to fix it. Nevertheless, for many this is an alien way of working with SQL Server. Subversion is the powerhouse, not the GUI. It doesn't work seamlessly with your existing IDE, and that usually means SSMS. So the question then becomes more subtle. Would developers be less reluctant to use a fully-featured source (revision) control system for a team database development if they had a turn-key, reliable system that fitted in with their existing work practices? I'd love to hear what you think. Cheers, Tony.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for September 2-8, 2012

    - by Bob Rhubart
The Top 10 items shared on the OTN Facebook Page for the week of September 2-8, 2012.

Adding a runtime LOV for a taskflow parameter in WebCenter | Yannick Ongena
Oracle ACE Yannick Ongena illustrates how to customize the parameters tab for a taskflow in WebCenter.

Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET | Scott Nelson
"It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings," says Scott Nelson. "Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there."

Free Event: Oracle Technology Network Architect Day – Boston, MA – 9/12/2012
Sure, you could ask a voodoo priestess for help in improving your solution architecture skills. But there's the whole snake thing, and the zombie thing, and other complications. So why not keep it simple and register for Oracle Technology Network Architect Day in Boston, MA. There's no magic, just a full day of technical sessions covering Cloud, SOA, Engineered Systems, and more. Registration is free, but seating is limited. You'll curse yourself if you miss this one.

Starting and Stopping Fusion Applications the Right Way | Ronaldo Viscuso
While the fastartstop tool that ships with Oracle Fusion Applications does most of the work to start/stop/bounce the Fusion Apps environment, it does not do it all. Oracle Fusion Applications A-Team blogger Ronaldo Viscuso's post "aims to explain all tasks involved in starting and stopping a Fusion Apps environment completely."

Article Index: Architect Community Column in Oracle Magazine
Did you know that Oracle Magazine features a regular column devoted specifically to the architect community? Every issue includes insight and expertise from architects who regularly work with Oracle technologies. Click here to see a complete list of these articles.

Using FMAP and AnalyticsRes in a Oracle BI High Availability Implementation | Art of Business Intelligence
"The fmap syntax has been used for a long time in Oracle BI / Siebel Analytics when referencing images inherent in the application as well as custom images," says Oracle ACE Christian Screen. "This syntax is used on Analysis requests and dashboards."

Dodeca Customer Feedback - The Rosewood Company | Tim Tow
Oracle ACE Director Tim Tow shares anecdotal comments from one of his clients, a company that is deploying Dodeca to replace an aging VBA/Essbase application.

Configuring UCM cache to check for external Content Server changes | Martin Deh
Oracle WebCenter and ADF A-Team blogger Martin Deh shares the background information and the solution to a recently encountered customer scenario.

Attend OTN Architect Day in Los Angeles – by Architects, for Architects – October 25
The OTN Architect Day roadshow stops in Boston next week, then it's on to Los Angeles for another all-architecture, all-day event on Thursday, October 25, 2012 at the Sofitel Los Angeles, 555 Beverly Boulevard, Los Angeles, CA 90048. Like all Architect Day events, this one is absolutely free, so register now.

The Role of Oracle VM Server for SPARC In a Virtualization Strategy
New OTN article from Matthias Pfutzner.

Thought for the Day
"Practicing architects, through education, experience and examples, accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006)
Source: SoftwareQuotes.com

    Read the article

  • SSIS Technique to Remove/Skip Trailer and/or Bad Data Row in a Flat File

    - by Compudicted
I noticed that the question of how to skip or bypass a trailer record or a badly formatted/empty row in an SSIS package keeps coming back on the MSDN SSIS Forum. I tried to figure out why, and after an extensive search inside the forum and outside it on the entire Web (using several search engines) I found that even though there are a number of posts and articles on the topic, none of them employ the simplest and most efficient technique. When I say efficient, I mean the shortest time to solution for fellow developers.

OK, enough talk. Let's face the problem: typically a flat file (e.g. a comma-delimited/CSV file) needs to be processed (loaded into a database, in most cases). Oftentimes, such an input file is produced by some sort of out-of-control, 3rd-party solution and comes in with garbage characters and/or even malformed/misformatted rows. One such example could be this imaginary file: as you can see, several rows have no data and there is an occasional garbage character (1, in this example, on row #7). Our task is to produce a clean file that captures only the meaningful data rows. As an aside, our output/target may be a database table, but for the purpose of this exercise we will simply re-format the source.

Let's outline our course of action to start off:

We will use SSIS 2005 to create a DFT;
The DFT will use a Flat File Source pointed at our input [bad] flat file;
We will use a Conditional Split to process the bad input file; and finally
Dump the resulting data to a new [clean] file.

Well, only four steps; let's see if it is too much work.

1: Start BIDS and add a DFT to the Control Flow designer (I named it Process Dirty File DFT).

2 and 3: I added the data viewer just to see what I am getting; alas, surprisingly, the data issues were not visible in it. The real key to the approach is to properly configure the Conditional Split Transformation, and specifically its SSIS expression: LEN([After CS Column 0]) > 1. The point is to employ the right Boolean expression (yes, the Conditional Split accepts only Boolean conditions). For the sake of this post I renamed the Output Name to "No Empty Rows", but by default it will be named Case 1 (remember to drag your first column into the expression area)! You can close your Conditional Split now. The next part is crucial – consuming the output of our Conditional Split.

Last step - #4: Add a Flat File Destination or any other destination you need. Click on the Conditional Split and drag the green arrow onto the target. When you do so, make sure you choose the No Empty Rows output and NOT the Conditional Split Default Output. Make the necessary mappings. At this point your package is complete. As the last step, we run our package and examine the produced output file. F5: and… it looks great!
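As an aside, if the dirty file were first bulk-loaded into a single-column staging table, the same length test could be applied in plain T-SQL instead of in the Conditional Split (a sketch with a hypothetical staging table; the predicate mirrors the SSIS expression above):

-- Keep only rows with meaningful content, mirroring LEN([Column 0]) > 1
SELECT RawRow
FROM dbo.StagingDirtyFile
WHERE LEN(RawRow) > 1;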

    Read the article

  • Azure Mobile Services: lessons learned

    - by svdoever
When I first started using Azure Mobile Services I thought of it as a nice way to:

authenticate my users - login using Twitter, Google, Facebook, Windows Live
create tables, and use the client code to create the columns in the table, because that is not possible in the Azure Mobile Services UI
run some JavaScript code on the table CRUD actions (Insert, Update, Delete, Read)
schedule a JavaScript job to run every 15 or more minutes

I had no idea of the magic that was happening inside… Where is the data stored? Is it a kind of big table? Are relationships between tables possible? Those JavaScripts on the table CRUD actions: are they interpreted, and what are they exactly? After working for some time with Azure Mobile Services I became a lot wiser:

Those tables are just normal tables in an Azure SQL Server 2012 database
Creating the table columns through client code sucks, at least from my JavaScript code, because the columns are deduced from the sent JSON data, and a datetime field is sent as a string in JSON, so a string type column is created instead of a datetime column (a work-around sketch follows at the end of this post)
You can connect with SQL Management Studio to the Azure SQL Server, and although you can't manage your columns through the SQL Management Studio UI, it is possible to just run SQL scripts to drop and create tables and indices
When you create a table through SQL script, add the table with the same name in the Azure Mobile Services UI to hook it up and be able to access the table through the provided abstraction layer
You can also go to the SQL database through the Azure Mobile Services UI, and from there get into a web-based SQL management studio where you can create columns and manage your data
The table CRUD scripts and the scheduler scripts are full-blown node.js scripts, introducing a lot of power with great performance
The web-based script editor is really powerful; I do most of my editing currently in the editor, which has syntax highlighting and code completion. While editing the code, JsHint is used for script validation.

The documentation on Azure Mobile Services is… suboptimal. It is such a pity that there is no way to comment on it so the community could fill in the missing holes, like which node modules are already loaded, and which modules are available on Azure Mobile Services. Soon I was hacking away on Azure Mobile Services, creating my own database tables through script, and abusing the read script of an empty table named query to implement my own set of "services". The latest updates to Azure Mobile Services, described in the following posts, added some great new features, like creating web APIs, using shared code from your scripts, command-line tools for managing Azure Mobile Services (uploading and downloading scripts, for example), support for node modules, and git support:

http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx
http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/14/custom-apis-in-azure-mobile-services.aspx
http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/19/custom-api-in-azure-mobile-services-client-sdks.aspx

In the meantime I rewrote all my "service-like" table scripts as API scripts, which works like a breeze. A bad thing in the current state of Azure Mobile Services is that the git support does not work if you are a co-administrator of your Azure subscription rather than an administrator (as in my case). Another bad thing is that Cross-Origin Resource Sharing (CORS) is not supported for the API yet, so no go yet from the browser client for APIs, which is my case. See http://social.msdn.microsoft.com/Forums/windowsazure/en-US/2b79c5ea-d187-4c2b-823a-3f3e0559829d/known-limitations-for-source-control-and-custom-api-features for more on these and other limitations. In his talk at Build 2013, Josh Twist showed that there is a work-around for accessing shared script code from the table scripts as well (another limitation mentioned in the post above). I could not find that code in the Votabl2 code example from the presentation at https://github.com/joshtwist/votabl2, but we can grab it from the presentation when it comes online on Channel9. By the way: you can always express your needs and ideas at http://mobileservices.uservoice.com; that's the place they are listening to (I hope!).
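A minimal sketch of the datetime work-around mentioned above: create the table through SQL yourself so the column gets the proper type, then register the same table name in the Azure Mobile Services UI (the schema, table and column names here are hypothetical; in my services the tables live in a schema named after the service):

-- Created via SQL script instead of letting the JSON type inference run
CREATE TABLE myservice.Appointment (
    id        BIGINT IDENTITY(1,1) PRIMARY KEY,  -- id column assumed per the usual conventions
    subject   NVARCHAR(200) NOT NULL,
    startsAt  DATETIME NOT NULL                  -- a real datetime, not an inferred string column
);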

    Read the article

  • SQL SERVER – Renaming Index – Index Naming Conventions

    - by pinaldave
If you are a regular reader of this blog, you must be aware that there are two kinds of blog posts: 1) those where I share what I have learned recently, and 2) those where I share what I learn and request your participation. Today's blog post is one where I need your opinion to make it a good reference for the future.

Background Story

Recently I came across a system where users had changed the names of a few of the tables to match their new standard naming convention. The name of a table should be self-explanatory; it should explain the table's purpose without anyone having to open it or read the documentation. Well, this is not always possible, but it should be the goal of any database modeler. I in no way encourage table names that are too long, like 'ContainsDetailsofNewInvoices'. Maybe the name of the table should be 'Invoices', and the table should contain a column with a New/Processed bit field to indicate whether the invoice is processed (if necessary). Coming back to the original story, the database had several tables whose names were changed.

Story Continues…

To continue the story, let me take a simple example. There was a table with the name 'ReceivedInvoices'; it was changed to the new name 'TblInvoices'. As per their new naming standard they had to prefix every table with 'Tbl' and every view with 'Vw'. Personally I do not see any need for the prefixes, but again, that issue is not the one to discuss here. Now, after changing the name of the table, they faced a very interesting situation: they had a few indexes on the table whose names included the old table name. Let us take an example.

Old name of table: ReceivedInvoices
Old name of index: Index_ReceivedInvoice1

Here are the new names.

New name of table: TblInvoices
New name of index: ???

Well, their dilemma was what the new naming convention for the indexes should be. Here is a quick proposal for an index naming convention. Do let me know your opinion.

If the index is the primary clustered index: PK_TableName
If the index is a non-clustered index: IX_TableName_ColumnName1_ColumnName2…
If the index is a unique non-clustered index: UX_TableName_ColumnName1_ColumnName2…
If the index is a columnstore non-clustered index: CL_TableName

Here ColumnName is the column on which the index is created. As there can be only one primary key index and one columnstore index per table, they do not require ColumnName in the name of the index. The purpose of this new naming convention is to increase readability: when a user comes across an index, without opening its properties or definition, the user will know the details of the index.

T-SQL script to Rename Indexes

Here is a quick T-SQL script to rename indexes:

EXEC sp_rename N'SchemaName.TableName.IndexName', N'New_IndexName', N'INDEX';
GO

Your Contribution Please

Well, the organization has already defined the above four guidelines, and personally I follow very similar guidelines too. I have seen many variations, like adding the prefix CL for a clustered index and NCL for a non-clustered index. I have often seen many not using the UX prefix for a unique index but rather using the generic IX prefix only. Now, do you think they have missed anything in the coding standard? Are NCI and CI prefixes required to additionally describe the index names? I once received a suggestion to even add the fill factor to the index name – which I do not recommend at all. What do you think the ideal name of an index should be, so that it explains all of its most important properties? Additionally, you are welcome to vote if you believe changing the name of an index is just a waste of time and energy.
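Applying the proposed convention to the table in the story, the rename could look like this (the key column in the new index name is hypothetical, purely for illustration):

-- Old index name carried over from the renamed table
EXEC sp_rename N'dbo.TblInvoices.Index_ReceivedInvoice1',
               N'IX_TblInvoices_InvoiceDate', N'INDEX';
GO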
Note: The purpose of this blog post is to encourage all of you to participate with your ideas. I will write follow-up blog posts in the future compiling all the suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

< Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >