Search Results

Search found 14354 results on 575 pages for 'existing records'.


  • Rendering ASP.NET MVC Razor Views outside of MVC revisited

    - by Rick Strahl
    Last year I posted a detailed article on how to render Razor Views to string both inside of ASP.NET MVC and outside of it. In that article I showed several different approaches to capture the rendering output. The first and easiest is to use an existing MVC Controller Context to render a view by simply passing the controller context which is fairly trivial and I demonstrated a simple ViewRenderer class that simplified the process down to a couple lines of code. However, if no Controller Context is available the process is not quite as straight forward and I referenced an old, much more complex example that uses my RazorHosting library, which is a custom self-contained implementation of the Razor templating engine that can be hosted completely outside of ASP.NET. While it works inside of ASP.NET, it’s an awkward solution when running inside of ASP.NET, because it requires a bit of setup to run efficiently.Well, it turns out that I missed something in the original article, namely that it is possible to create a ControllerContext, if you have a controller instance, even if MVC didn’t create that instance. Creating a Controller Instance outside of MVCThe trick to make this work is to create an MVC Controller instance – any Controller instance – and then configure a ControllerContext through that instance. As long as an HttpContext.Current is available it’s possible to create a fully functional controller context as Razor can get all the necessary context information from the HttpContextWrapper().The key to make this work is the following method:/// <summary> /// Creates an instance of an MVC controller from scratch /// when no existing ControllerContext is present /// </summary> /// <typeparam name="T">Type of the controller to create</typeparam> /// <returns>Controller Context for T</returns> /// <exception cref="InvalidOperationException">thrown if HttpContext not available</exception> public static T CreateController<T>(RouteData routeData = null) where T : Controller, new() { // create a disconnected controller instance T controller = new T(); // get context wrapper from HttpContext if available HttpContextBase wrapper = null; if (HttpContext.Current != null) wrapper = new HttpContextWrapper(System.Web.HttpContext.Current); else throw new InvalidOperationException( "Can't create Controller Context if no active HttpContext instance is available."); if (routeData == null) routeData = new RouteData(); // add the controller routing if not existing if (!routeData.Values.ContainsKey("controller") && !routeData.Values.ContainsKey("Controller")) routeData.Values.Add("controller", controller.GetType().Name .ToLower() .Replace("controller", "")); controller.ControllerContext = new ControllerContext(wrapper, routeData, controller); return controller; }This method creates an instance of a Controller class from an existing HttpContext which means this code should work from anywhere within ASP.NET to create a controller instance that’s ready to be rendered. This means you can use this from within an Application_Error handler as I needed to or even from within a WebAPI controller as long as it’s running inside of ASP.NET (ie. not self-hosted). Nice.So using the ViewRenderer class from the previous article I can now very easily render an MVC view outside of the context of MVC. 
Here's what I ended up with in my application's custom error HttpModule: protected override void OnDisplayError(WebErrorHandler errorHandler, ErrorViewModel model) { var Response = HttpContext.Current.Response; Response.ContentType = "text/html"; Response.StatusCode = errorHandler.OriginalHttpStatusCode; var context = ViewRenderer.CreateController<ErrorController>().ControllerContext; var renderer = new ViewRenderer(context); string html = renderer.RenderView("~/Views/Shared/GenericError.cshtml", model); Response.Write(html); } That's pretty sweet, because it's now possible to use ViewRenderer just about anywhere in any ASP.NET application, not only inside of controller code. This also allows the constructor for the ViewRenderer from the last article to work without a controller context parameter, using a generic controller as a base for the controller context when one is not passed: public ViewRenderer(ControllerContext controllerContext = null) { // Create a known controller from HttpContext if no context is passed if (controllerContext == null) { if (HttpContext.Current != null) controllerContext = CreateController<ErrorController>().ControllerContext; else throw new InvalidOperationException( "ViewRenderer must run in the context of an ASP.NET " + "Application and requires HttpContext.Current to be present."); } Context = controllerContext; } In this case I use the ErrorController class, which is a generic controller that exists in the same assembly as my ViewRenderer class, and that works just fine since 'generically' rendered views tend not to rely on anything from the controller other than the model, which is explicitly passed. While these days most of my apps use MVC, I still have a number of generic pieces in most of these applications where Razor comes in handy. This includes modules like the above, which often need to display error output when they fail. In other cases I need to generate string template output for emailing or logging data to disk. Being able to simply render an arbitrary view and pass in a model makes this super nice and easy, at least within the context of an ASP.NET application! You can check out the updated ViewRenderer class below to render your 'generic views' from anywhere within your ASP.NET applications. Hope some of you find this useful. Resources: ViewRenderer Class in Westwind.Web.Mvc Library (GitHub) | Original ViewRenderer Article | Razor Hosting Library (GitHub) | Original Razor Hosting Article. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in ASP.NET, MVC.
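    As a quick, hedged illustration of the approach in this excerpt: once the parameterless ViewRenderer constructor above is in place, rendering any Razor template to a string works from arbitrary ASP.NET code. The model type and template path below are made-up placeholders; only ViewRenderer, RenderView() and the internally created controller context come from the article.

```csharp
// Sketch only: assumes the ViewRenderer class shown in the excerpt above.
// EmailConfirmationModel and the view path are hypothetical placeholders.
public class EmailConfirmationModel
{
    public string CustomerName { get; set; }
    public decimal OrderTotal { get; set; }
}

public static class OrderMailer
{
    // Works anywhere HttpContext.Current is available (HttpModules, handlers,
    // Web API controllers hosted inside ASP.NET, etc.).
    public static string BuildConfirmationHtml(EmailConfirmationModel model)
    {
        var renderer = new ViewRenderer(); // a ControllerContext is created internally
        return renderer.RenderView("~/Views/Templates/OrderConfirmation.cshtml", model);
    }
}
```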

    Read the article

  • Major Analyst Report Chooses Oracle As An ECM Leader

    - by brian.dirking(at)oracle.com
    Oracle announced that Gartner, Inc. has named Oracle as a Leader in its latest "Magic Quadrant for Enterprise Content Management" in a press release issued this morning. Gartner's Magic Quadrant reports position vendors within a particular quadrant based on their completeness of vision and ability to execute. According to Gartner, "Leaders have the highest combined scores for Ability to Execute and Completeness of Vision. They are doing well and are prepared for the future with a clearly articulated vision. In the context of ECM, they have strong channel partners, presence in multiple regions, consistent financial performance, broad platform support and good customer support. In addition, they dominate in one or more technology or vertical market. Leaders deliver a suite that addresses market demand for direct delivery of the majority of core components, though these are not necessarily owned by them, tightly integrated, unique or best-of-breed in each area. We place more emphasis this year on demonstrated enterprise deployments; integration with other business applications and content repositories; incorporation of Web 2.0 and XML capabilities; and vertical-process and horizontal-solution focus. Leaders should drive market transformation." "To extend content governance and best practices across the enterprise, organizations need an enterprise content management solution that delivers a broad set of functionality and is tightly integrated with business processes," said Andy MacMillan, vice president, Product Management, Oracle. "We believe that Oracle's position as a Leader in this report is recognition of the industry-leading performance, integration and scalability delivered in Oracle Enterprise Content Management Suite 11g." With Oracle Enterprise Content Management Suite 11g, Oracle offers a comprehensive, integrated and high-performance content management solution that helps organizations increase efficiency, reduce costs and improve content security. In the report, Oracle is grouped among the top three vendors for execution, and is the furthest to the right, placing Oracle as the most visionary vendor. This vision stems from Oracle's integration of content management right into key business processes, delivering content in context as people need it. Using a PeopleSoft Accounts Payable user as an example, as an employee processes an invoice, Oracle ECM Suite brings that invoice up on the screen so the processor can verify the content right in the process, improving speed and accuracy. Oracle integrates content into business processes such as Human Resources, Travel and Expense, and others, in the major enterprise applications such as PeopleSoft, JD Edwards, Siebel, and E-Business Suite. As part of Oracle's Enterprise Application Documents strategy, you can see an example of these integrations in this webinar: Managing Customer Documents and Marketing Assets in Siebel. You can also get a white paper of the ROI Embry Riddle achieved using Oracle Content Management integrated with enterprise applications. Embry Riddle moved from a point solution for content management on accounts payable to an infrastructure investment - they are now using Oracle Content Management for accounts payable with Oracle E-Business Suite, and for student on-boarding with PeopleSoft e-Campus. They continue to expand their use of Oracle Content Management to address further use cases from a core infrastructure. Oracle also shows its vision in the ability to deliver content optimized for online channels. 
Marketers can use Oracle ECM Suite to deliver digital assets and offers as part of an integrated campaign that understands website visitors and ensures that they are given the most pertinent information and offers. Oracle also provides full lifecycle management through its built-in records management. Companies are able to manage the lifecycle of content (both records and non-records) through built-in retention management. And with the integration of Oracle ECM Suite and Sun Storage Archive Manager, content can be routed to the appropriate storage media based upon content type, usage data or other business rules. This ensures that the most accessed content is instantly available, and archived content is stored on a more appropriate medium like tape. You can learn more in this webinar - Oracle Content Management and Sun Tiered Storage. If you are interested in reading more about why Oracle was chosen as a Leader, view the Gartner Magic Quadrant for Enterprise Content Management.

    Read the article

  • How do I draw a border around a display object in Corona Lua?

    - by Greg
    What would be the easiest way to draw a thin border around a display object in Corona Lua? You can assume it's a rectangular image display object. EDIT - re "this question shows no research effort. You should tell us what you've tried and how it didn't work": I reviewed the API and could not find a "border" method/property on displayObject. I have tried creating a black box slightly bigger than the object behind it, but could not see how to place an object behind an existing one - hence the related question "How do I move an existing display object behind another in Corona Lua?" Google results for putting a border around a display object in Corona didn't help.

    Read the article

  • SQL SERVER – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    After the excellent response to my quiz – Why SELECT * throws an error but SELECT COUNT(*) does not? – I have decided to ask another puzzling question to all of you. I am running this test on SQL Server 2008 R2. Here is the quick scenario for my setup: 1) create a table; 2) insert 1,000 records; 3) check the statistics; 4) now insert 10 times more records (10,000 records); 5) check the statistics – they will NOT be updated. Note: Auto Update Statistics and Auto Create Statistics for the database are TRUE. Expected result – statistics should be updated (see SQL SERVER – When are Statistics Updated – What triggers Statistics to Update). Now the question is: why are the statistics not updated? The common answer is – we can update the statistics ourselves using UPDATE STATISTICS TableName WITH FULLSCAN, ALL. However, the solution I am looking for is one where the statistics are updated automatically, based on the algorithm mentioned here. Now the solution is to ____________________. Vinod Kumar is not allowed to participate here as he is the one who helped me build this puzzle. I will publish the solution next week. Please leave a comment, and if your comment contains a valid answer, I will publish it with due credit. Here is the script to reproduce the scenario I mentioned. -- Execution Plans Difference -- Create Sample Database CREATE DATABASE SampleDB GO USE SampleDB GO -- Create Table CREATE TABLE ExecTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Insert One Thousand Records -- INSERT 1 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Display statistics of the table - none listed sp_helpstats N'ExecTable', 'ALL' GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York' GO -- Display statistics of the table sp_helpstats N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -------------------------------------------------------------- -- Round 2 -- Insert Ten Thousand Records -- INSERT 2 INSERT INTO ExecTable (ID,FirstName,LastName,City) SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Select Statement SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York' GO -- Display statistics of the table sp_helpstats 
N'ExecTable', 'ALL' GO -- Replace your Statistics over here -- NOTE: Replace your _WA_Sys with stats from above query DBCC SHOW_STATISTICS('ExecTable', _WA_Sys_00000004_7D78A4E7); GO -- You will notice that Statistics are still updated with 1000 rows -- Clean up Database DROP TABLE ExecTable GO USE MASTER GO ALTER DATABASE SampleDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE; GO DROP DATABASE SampleDB GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics

    Read the article

  • Knockout.js - Filtering, Sorting, and Paging

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/knockout.js---filtering-sorting-and-paging.aspxKnockout.js is fantastic! Maybe I missed it but it appears to be missing flexible filtering, sorting, and pagination of its grids. This is a summary of my attempt at creating this functionality which has been working out amazingly well for my purposes. Before you continue, this post is not intended to teach you the basics of Knockout. They have already created a fantastic tutorial for this purpose. You'd be wise to review this before you continue. http://learn.knockoutjs.com/ Please view the full source code and functional example on jsFiddle. Below you will find a brief explanation of some of the components. http://jsfiddle.net/JTimperley/pyCTN/13/ First we need to create a model to represent our records. This model is a simple container with defined and guaranteed members. function CustomerModel(data) { if (!data) { data = {}; } var self = this; self.id = data.id; self.name = data.name; self.status = data.status; } Next we need a model to represent the page as a whole with an array of the previously defined records. I have intentionally overlooked the filtering and sorting options for now. Note how the filtering, sorting, and pagination are chained together to accomplish all three goals. This strategy allows each of these pieces to be used selectively based on the page's needs. If you only need sorting, just sort, etc. function CustomerPageModel(data) { if (!data) { data = {}; } var self = this; self.customers = ExtractModels(self, data.customers, CustomerModel); var filters = […]; var sortOptions = […]; self.filter = new FilterModel(filters, self.customers); self.sorter = new SorterModel(sortOptions, self.filter.filteredRecords); self.pager = new PagerModel(self.sorter.orderedRecords); } The code currently supports text box and drop down filters. Text box filters require defining the current 'Value' and the 'RecordValue' function to retrieve the filterable value from the provided record. Drop downs allow defining all possible values, the current option, and the 'RecordValue' as before. Once defining these filters, they are automatically added to the screen and any changes to their values will automatically update the results, causing their sort and pagination to be re-evaluated. var filters = [ { Type: "text", Name: "Name", Value: ko.observable(""), RecordValue: function(record) { return record.name; } }, { Type: "select", Name: "Status", Options: [ GetOption("All", "All", null), GetOption("New", "New", true), GetOption("Recently Modified", "Recently Modified", false) ], CurrentOption: ko.observable(), RecordValue: function(record) { return record.status; } } ]; Sort options are more simplistic and are also automatically added to the screen. Simply provide each option's name and value for the sort drop down as well as function to allow defining how the records are compared. This mechanism can easily be adapted for using table headers as the sort triggers. That strategy hasn't crossed my functionality needs at this point. var sortOptions = [ { Name: "Name", Value: "Name", Sort: function(left, right) { return CompareCaseInsensitive(left.name, right.name); } } ]; Paging options are completely contained by the pager model. Because we will be chaining arrays between our filtering, sorting, and pagination models, the following utility method is used to prevent errors when handing an observable array to another observable array. 
function GetObservableArray(array) { if (typeof(array) == 'function') { return array; }   return ko.observableArray(array); }

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 22 (sys.dm_db_index_physical_stats)

    - by Tamarick Hill
    The sys.dm_db_index_physical_stats Dynamic Management Function is used to return information about the fragmentation levels, page counts, depth, number of levels, record counts, etc. of the indexes on your database instance. One row is returned for each level in a given index, which we will discuss more later. The function takes a total of 5 input parameters, which are (1) database_id, (2) object_id, (3) index_id, (4) partition_number, and (5) the mode of the scan level that you would like to run. Let's use this function with our AdventureWorks2012 database to better illustrate the information it provides. SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, NULL) As you can see from the result set, there is a lot of beneficial information returned from this DMF. The first couple of columns in the result set (database_id, object_id, index_id, partition_number, index_type_desc, alloc_unit_type_desc) are either self-explanatory or have been explained in our previous blog sessions, so I will not go into detail about these at this time. The next column in the result set is the index_depth, which represents how deep the index goes. For example, if we have a large index that contains 1 root page, 3 intermediate levels, and 1 leaf level, our index depth would be 5. The next column is the index_level, which refers to what level (of the depth) a particular row is referring to. Next is probably one of the most beneficial columns in this result set, which is the avg_fragmentation_in_percent. This column shows you how fragmented a particular level of an index may be. Many people use this column within their index maintenance jobs to dynamically determine whether they should do REORG's or full REBUILD's of a given index. The fragment_count represents the number of fragments in a leaf level, while the avg_fragment_size_in_pages represents the number of pages in a fragment. The page_count column tells you how many pages are in a particular index level. From my result set above, you see the remaining columns all have NULL values. This is because I did not specify a 'mode' in my query, and as a result it used the 'LIMITED' mode by default. The LIMITED mode is meant to be lightweight, so it does not collect information for every column in the result set. I will re-run my query using the 'DETAILED' mode and you will see we now have results for these columns. SELECT * FROM sys.dm_db_index_physical_stats(db_id('AdventureWorks2012'), NULL, NULL, NULL, 'DETAILED') From the remaining columns, you see we get even more detailed information, such as how many records are in a particular index level (record_count). We have a column for ghost_record_count, which represents the number of records that have been marked for deletion but have not physically been removed by the background ghost cleanup process. We later see information on the MIN, MAX, and AVG record size in bytes. The forwarded_record_count column refers to records that have been updated and no longer fit within the row on the page, and thus have to be moved. A forwarded record is left in the original location with a pointer to the new location. The last column in the result set is the compressed_page_count column, which tells you how many pages in your index have been compressed. This is a very powerful DMF that returns good information about the current indexes in your system. However, based on the mode you select, it could be a very resource intensive function, so be careful with how you use it.
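    To make the index-maintenance idea above concrete, here is a small hedged sketch (not from the original post) that runs the DMF in LIMITED mode from C# and suggests an action per index using the commonly cited 5%/30% fragmentation thresholds; the connection string and thresholds are assumptions for illustration only.

```csharp
using System;
using System.Data.SqlClient;

class FragmentationCheck
{
    static void Main()
    {
        // Hypothetical connection string - adjust for your environment.
        const string connStr = "Server=.;Database=AdventureWorks2012;Integrated Security=true";
        const string sql = @"
            SELECT OBJECT_NAME(ips.object_id) AS TableName, i.name AS IndexName,
                   ips.avg_fragmentation_in_percent
            FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
            JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
            WHERE ips.index_id > 0;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    if (reader.IsDBNull(2)) continue;   // some allocation units report no value
                    double frag = reader.GetDouble(2);
                    // Commonly cited guideline: 5-30% -> REORGANIZE, above 30% -> REBUILD.
                    string action = frag > 30 ? "REBUILD" : frag > 5 ? "REORGANIZE" : "leave alone";
                    Console.WriteLine($"{reader.GetString(0)}.{reader.GetString(1)}: {frag:F1}% -> {action}");
                }
            }
        }
    }
}
```
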
For more information on this Dynamic Management Function, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms188917.aspx Follow me on Twitter @PrimeTimeDBA

    Read the article

  • Oracle Utilities Application Framework future feature deprecation

    - by Paula Speranza-Hadley
    From time to time, existing functionality is replaced with alternative features to offer greater flexibility and standardization. In Oracle Utilities Application Framework V4.2.0.0.0 the following features are being announced for deprecation in the next release, or have been previously announced and are not being delivered with this version of the Oracle Utilities Application Framework:
      - No SQL Server Support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with any support for SQL Server.
      - No MPL Support – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with the Multi-Purpose Listener (MPL) component of the XML Application Integration (XAI) component. Customers using the MPL should migrate to Oracle Service Bus.
      - No provided Crystal Reports/Business Objects Interface – Oracle Utilities Application Framework V4.2.0.0.0 or above does not ship with a supported Crystal Reports/Business Objects Interface. This facility is now available as a downloadable customization for existing or new customers. Responsibility for maintenance and new features is now the individual customer's responsibility.
      - XAI Servlet deprecation – The XAI Servlet (xaiserver and classicxai) will be removed in the next release of the Oracle Utilities Application Framework. Customers are encouraged to migrate to the native Web Services Support as outlined in the XAI Best Practices whitepaper available from My Oracle Support (Doc Id: 942074.1).
      - ConfigLab deprecation – The ConfigLab facility will be removed in the next release of Oracle Utilities Application Framework for products it is shipped with. Customers are recommended to migrate to the Configuration Migration Assistant, which provides the same and more functionality.
      - Archiving deprecation – The inbuilt Archiving has been removed from Oracle Utilities Application Framework V4.2.0.0.0 or above, for products it is shipped with. Customers considering an Archiving solution should migrate to the Information Lifecycle Management based solution provided for your product.
      - DISTRIBUTED batch execution mode deprecation – The DISTRIBUTED execution mode used by the batch component of the Oracle Utilities Application Framework will be deprecated in the next release of the Oracle Utilities Application Framework. Customers using DISTRIBUTED mode should migrate to CLUSTERED mode as outlined in the Batch Best Practices For Oracle Utilities Application Framework Based Products whitepaper available from My Oracle Support (Doc Id: 836362.1).
      - XAI Schema Editor deprecation – The XAI Schema Editor, which is a component of the Oracle Utilities Software Development Kit, will be removed in the next release of the Oracle Utilities Application Framework. Customers should migrate their existing schemas to Business Object based schemas and use the browser based Schema Editor instead.

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    In our days data integration often becomes a key aspect of business success. For business analysts it’s very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc. to make correct and successful decisions. There are various data integration solutions on market, and today I will tell about one of them – Skyvia. Skyvia is a cloud data integration service, which allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except for a browser. Skyvia provides powerful etl tools for data import, export, replication, and synchronization for SQL Server and other databases and cloud CRMs. You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM and such databases as MySQL and PostgreSQL. You even can migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers. When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages. In SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records by their primary key values to each other, so it does not require different sources to have the same primary key structure. It still can match the corresponding records without having to add any additional columns or changing data structure. The only requirement for synchronization is that primary keys must be autogenerated. With Skyvia it’s not necessary for data to have the same structure in integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structure. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting – loading data from a single CSV file or source table to multiple related target tables. Or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations. It builds corresponding relations between the target data automatically. When you often work with cloud CRM data, native CRM data reporting and analysis tools may be not enough for you. And there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to an SQL Server database and apply corresponding SQL Server tools to the data. In such case you can use Skyvia data replication tools. It allows you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You need just to specify columns to copy data from. Target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need. Skyvia also provides capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. 
These files can be either downloaded manually or loaded to cloud file storage or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently, registration and use of Skyvia are completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting that. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for: (a) MyNes plays from a ROM, not NSF, so I have to rip out the APU and get it to play NSF files; and (b) the MyNes APU uses SlimDX, so I have to convert that to FMOD.NET. I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions: (1) From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this, that would be great. (2) The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it? (3) Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code? (4) Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically, MyNes says PCM8. Any tips on loading/playing the stream in FMOD are appreciated. As an aside, I don't really understand the loading/playing sections of the spec I linked at all. It seems to apply to 6502 systems/emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.

    Read the article

  • Should OpenID clients accept adding WWW to the domain?

    - by Steve Clay
    For a long time I've used OpenID delegation on my site: http://example.org/ delegated to: http://example.openid-provider.com/, so I logged into OpenID-consuming sites using the former as ID. Recently I added www. to my site's canonical domain so http://example.org/ now redirects to http://www.example.org/. Should I be able to continue logging into existing OpenID accounts using http://example.org/? StackExchange sites say "yes". I can use either URL. At least one other doesn't recognize my existing account. Who's "right" (per spec) and is there anything I can fix on my end?

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In this series of posts, we will discuss how the concurrent collections have been developed to help alleviate these multi-threading concerns.  Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>.  Today's post discusses the ConcurrentDictionary<T> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!).  Finally next week, we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. Recap As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member.  While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections which are fully type-safe, but eschew automatic synchronization.  This cuts both ways in that you have a lot more control as a developer over when and how fine-grained you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections.  A new breed of collections was born called the concurrent collections in the System.Collections.Concurrent namespace.  These amazing collections are fine-tuned to have best overall performance for situations requiring concurrent access.  They are not meant to replace the generic collections, but to simply be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T> which provide classic LIFO and FIFO collections with a concurrent twist.  As we saw, some of the traditional methods that required calls to be made in a certain order (like checking for not IsEmpty before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections!For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentDictionary – the fully thread-safe dictionary The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection.  Obviously, both are designed for quick – O(1) – lookups of data based on a key.  If you think of algorithms where you need lightning fast lookups of data and don’t care whether the data is maintained in any particular ordering or not, the unsorted dictionaries are generally the best way to go. Note: as a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList which are stored as an ordered tree and a ordered list respectively.  While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering -- and still greatly outperform a linear search. 
Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading you do not need any locking.  Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference.  In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates.  This is where the ConcurrentDictionary really shines!  It achieves its thread-safety with no common lock to improve efficiency.  It actually uses a series of locks to provide concurrent updates, and has lockless reads!  This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have. ConcurrentDictionary and Dictionary differences For the most part, the ConcurrentDictionary<TKey,TValue> behaves like it’s Dictionary<TKey,TValue> counterpart with a few differences.  Some notable examples of which are: Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd().  It also means that you can’t use a collection initializer with the concurrent dictionary. TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation to check if the item exists, and if not add it while still under an atomic lock. TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be.  If all these are true, we can update the item under one atomic step. TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first, this checks for existence and removes under an atomic lock. AddOrUpdate() was added to attempt an thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does.  This allows you to make a thread-safe add-or-update. GetOrAdd() was added to attempt an thread-safe query/insert. Sometimes, you want to query for whether an item exists in the cache, and if it doesn’t insert a starting value for it.  This allows you to get the value if it exists and insert if not. Count, Keys, Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance and should be used with caution. ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked, and then copied to an array as a O(n) operation.  GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary.  The only downside is that, depending on timing, you may get dirty reads. Dirty reads during iteration The last point on GetEnumerator() bears some explanation.  Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration the dictionary gets updated.  This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly.  The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.  
Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example, let’s say you load up a concurrent dictionary like this: 1: // load up a dictionary. 2: var dictionary = new ConcurrentDictionary<string, int>(); 3:  4: dictionary["A"] = 1; 5: dictionary["B"] = 2; 6: dictionary["C"] = 3; 7: dictionary["D"] = 4; 8: dictionary["E"] = 5; 9: dictionary["F"] = 6; Then you have one task (using the wonderful TPL!) to iterate using dirty reads: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); And one task to attempt updates in a separate thread (probably): 1: // attempt updates in a separate thread 2: var updateTask = new Task(() => 3: { 4: // iterates, and updates the value by one 5: foreach (var pair in dictionary) 6: { 7: dictionary[pair.Key] = pair.Value + 1; 8: } 9: }); Now that we’ve done this, we can fire up both tasks and wait for them to complete: 1: // start both tasks 2: updateTask.Start(); 3: iterationTask.Start(); 4:  5: // wait for both to complete. 6: Task.WaitAll(updateTask, iterationTask); Now, if I you didn’t know about the dirty reads, you may have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6).  However, because the reads are dirty, we will quite possibly get a combination of some updated, some original.  My own run netted this result: 1: F:6 2: E:6 3: D:5 4: C:4 5: B:3 6: A:2 Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered.  Also note that both E and F show the value 6.  This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we will get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary and no subsequent updates during iteration), we should iterate over a call to ToArray() instead: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary.ToArray()) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); The atomic Try…() methods As you can imagine TryAdd() and TryRemove() have few surprises.  Both first check the existence of the item to determine if it can be added or removed based on whether or not the key currently exists in the dictionary: 1: // try add attempts an add and returns false if it already exists 2: if (dictionary.TryAdd("G", 7)) 3: Console.WriteLine("G did not exist, now inserted with 7"); 4: else 5: Console.WriteLine("G already existed, insert failed."); TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key: 1: // attempt to remove the value, if it exists it is removed and the original is returned 2: int removedValue; 3: if (dictionary.TryRemove("C", out removedValue)) 4: Console.WriteLine("Removed C and its value was " + removedValue); 5: else 6: Console.WriteLine("C did not exist, remove failed."); Now TryUpdate() is an interesting creature.  
You might think from it’s name that TryUpdate() first checks for an item’s existence, and then updates if the item exists, otherwise it returns false.  Well, note quite... It turns out when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update.  If the item exists in the dictionary, and it has the value you expected, it will update it to the new value atomically and return true.  If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned. 1: // attempt to update the value, if it exists and if it has the expected original value 2: if (dictionary.TryUpdate("G", 42, 7)) 3: Console.WriteLine("G existed and was 7, now it's 42."); 4: else 5: Console.WriteLine("G either didn't exist, or wasn't 7."); The composite Add methods The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item is not existing at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does.  For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed: 1: public sealed class SubscriptionManager 2: { 3: private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public void AddSubscription(string tickerKey) 7: { 8: // add a new subscription with count of 1, or update existing count by 1 if exists 9: var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1); 10:  11: // now check the result to see if we just incremented the count, or inserted first count 12: if (resultCount == 1) 13: { 14: // subscribe to symbol... 15: } 16: } 17: } Notice the update value factory Func delegate.  If the key does not exist in the dictionary, the add value is used (in this case 1 representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate which computes the new value to be stored in the dictionary.  The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated). Likewise, the GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value.  This can be handy in cases where perhaps you wish to cache data, and thus you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time: 1: public sealed class PriceCache 2: { 3: private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public double QueryPrice(string tickerKey) 7: { 8: // check for the price in the cache, if it doesn't exist it will call the delegate to create value. 9: return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol)); 10: } 11:  12: private double GetCurrentPrice(string tickerKey) 13: { 14: // do code to calculate actual true price. 
15: } 16: } There are other variations of these two methods which vary whether a value is provided or a factory delegate, but otherwise they work much the same. Oddities with the composite Add methods The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic.  It is important to note that the methods that use delegates execute those delegates outside of the lock.  This was done intentionally so that a user delegate (of which the ConcurrentDictionary has no control of course) does not take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design.  The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned!  Consider this scenario: A calls GetOrAdd and sees that the key does not currently exist, so it calls the delegate.  Now thread B also calls GetOrAdd and also sees that the key does not currently exist, and for whatever reason in this race condition it’s delegate completes first and it adds its new value to the dictionary.  Now A is done and goes to get the lock, and now sees that the item now exists.  In this case even though it called the delegate to create the item, it will pitch it because an item arrived between the time it attempted to create one and it attempted to add it. Let’s illustrate, assume this totally contrived example program which has a dictionary of char to int.  And in this dictionary we want to store a char and it’s ordinal (that is, A = 1, B = 2, etc).  So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked): 1: public static class Program 2: { 3: private static int _nextNumber = 0; 4:  5: // the holder of the char to ordinal 6: private static ConcurrentDictionary<char, int> _dictionary 7: = new ConcurrentDictionary<char, int>(); 8:  9: // get the next id value 10: public static int NextId 11: { 12: get { return Interlocked.Increment(ref _nextNumber); } 13: } Then, we add a method that will perform our insert: 1: public static void Inserter() 2: { 3: for (int i = 0; i < 26; i++) 4: { 5: _dictionary.GetOrAdd((char)('A' + i), key => NextId); 6: } 7: } Finally, we run our test by starting two tasks to do this work and get the results… 1: public static void Main() 2: { 3: // 3 tasks attempting to get/insert 4: var tasks = new List<Task> 5: { 6: new Task(Inserter), 7: new Task(Inserter) 8: }; 9:  10: tasks.ForEach(t => t.Start()); 11: Task.WaitAll(tasks.ToArray()); 12:  13: foreach (var pair in _dictionary.OrderBy(p => p.Key)) 14: { 15: Console.WriteLine(pair.Key + ":" + pair.Value); 16: } 17: } If you run this with only one task, you get the expected A:1, B:2, ..., Z:26.  But running this in parallel you will get something a bit more complex.  My run netted these results: 1: A:1 2: B:3 3: C:4 4: D:5 5: E:6 6: F:7 7: G:8 8: H:9 9: I:10 10: J:11 11: K:12 12: L:13 13: M:14 14: N:15 15: O:16 16: P:17 17: Q:18 18: R:19 19: S:20 20: T:21 21: U:22 22: V:23 23: W:24 24: X:25 25: Y:26 26: Z:27 Notice that B is 3?  This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3.  However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won and the 3 was inserted and the 2 got discarded.  
This is why, with these methods, your factory delegates should not contain any logic that would be unsafe if the value they generate gets pitched in favor of another item generated at roughly the same time. As such, it is probably a good idea to keep those generators as stateless as possible. Summary The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection. It has all the benefits of type-safety that its generic collection counterpart does, and in addition it is extremely efficient, especially when there are more reads than writes concurrently. Technorati Tags: C#, .NET, Concurrent Collections, Collections, Little Wonders, Black Rabbit Coder, James Michael Hare
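    A common way to guard against the "generator called but result pitched" behavior described above - not covered in the original post, so treat it as a hedged sketch - is to store Lazy<T> values so the expensive factory runs at most once even when threads race on GetOrAdd(). Adapting the PriceCache example from earlier in the post:

```csharp
using System;
using System.Collections.Concurrent;

public sealed class PriceCache
{
    // Under a race, GetOrAdd() may create several Lazy wrappers, but only the one
    // that "wins" the insert ever runs its (expensive) value factory.
    private readonly ConcurrentDictionary<string, Lazy<double>> _cache =
        new ConcurrentDictionary<string, Lazy<double>>();

    public double QueryPrice(string tickerKey)
    {
        return _cache.GetOrAdd(tickerKey,
            symbol => new Lazy<double>(() => GetCurrentPrice(symbol))).Value;
    }

    private double GetCurrentPrice(string tickerKey)
    {
        // Placeholder for the real price lookup.
        return 42.0;
    }
}
```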

    Read the article

  • OWB 11gR2: Migration and Upgrade Paths from Previous Versions

    - by antonio romero
    Over the next several months, we expect widespread adoption of OWB 11gR2, both for its new features and because it is the only release of Warehouse Builder certified for use with database 11gR2. Customers seeking to move existing environments to OWB 11gR2 should review the new whitepaper, OWB 11.2: Upgrade and Migration Paths. This whitepaper covers the following topics:
      - The difference between upgrade and migration, and how to choose between them
      - An outline of how to perform each process
      - When and where intermediate upgrade steps are required
      - Tips for upgrading an existing environment to 11gR2 without having to regenerate and redeploy code to your production environment
    Moving up from 10gR2 and 11gR1 is generally straightforward. For customers still using OWB 9 or 10.1, it is generally possible to move an entire environment forward complete with design and runtime audit metadata, but the upgrade process can be complex and may require intermediate processing using OWB 10.2 or OWB 11.1. Moving a design by itself is much simpler, though it requires regeneration and redeployment. Relevant details are provided in the whitepaper, so if you are planning an upgrade at some point soon, definitely start there.

    Read the article

  • ROA on top of SOA

    - by Vaibhav Pujari
    I already have a stable Service Oriented Architecture for my application which exposes services as API calls (the verbs). Now I need to build a Resource Oriented Architecture to expose a RESTful API to interact with the application objects (the nouns). What are the best practices for reusing the existing services:
      - without any persistence inside my new code;
      - without putting unnecessary logic into the REST layer, i.e. it should ideally just leverage the services provided by the SOA API (I want this layer to be as thin as possible);
      - without modifying the existing SOA API;
      - while allowing easy extension of the REST API, i.e. it should be easy to add more resources without changing the (yet to be written) core code (I want to make resource names and their associated actions configurable so more contributors can easily add resources without needing to understand my module)?
    Any advice/suggestions on how to achieve this? Edit: adding more info. My stack: my existing stack is in Java, but since I plan to just use the services, I don't think that should affect the design of the new REST code. I am planning to implement the new REST code in PHP. How well do the services map to resources? Some services map well, i.e. there are services for creating and updating application objects. But for other application objects, there are no direct services available. More importantly, there are actions beyond just create, update, etc. that apply to application objects, and I would like to provide some way for these actions to be exposed through REST. Since these are verbs, how do I deal with them? Where exactly do I need help? I would appreciate any help towards a high-level design to accomplish the task, along with making the framework extensible. For instance, if tomorrow some new services are added to my SOA layer, I want to make sure it is easy for a fresh developer to expose a REST call by simply registering a new resource (in a config file/db) and writing code to connect it with SOA calls. Just like a plugin.
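    Purely as an illustration of the configurable resource registry idea described above (the question targets PHP and names no concrete API, so every type and method below is hypothetical; the same shape translates directly to other languages), a thin dispatch layer might look like this:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: a thin REST layer that only dispatches to existing SOA services.
// Resources and their actions are registered in one place (which could be loaded from a
// config file or database), so adding a resource never touches the core dispatch code.
public delegate object ServiceCall(IDictionary<string, string> requestParams);

public sealed class ResourceRegistry
{
    // (resource, action) -> existing SOA service call
    private readonly Dictionary<(string Resource, string Action), ServiceCall> _routes =
        new Dictionary<(string, string), ServiceCall>();

    public void Register(string resource, string action, ServiceCall call) =>
        _routes[(resource, action)] = call;

    public object Dispatch(string resource, string action, IDictionary<string, string> p) =>
        _routes.TryGetValue((resource, action), out var call)
            ? call(p)
            : throw new KeyNotFoundException($"No route for {resource}/{action}");
}

// Usage: registrations are data-driven; the REST layer itself stays thin and stateless.
// registry.Register("orders", "create",  p => existingSoaClient.CreateOrder(p["payload"]));
// registry.Register("orders", "approve", p => existingSoaClient.ApproveOrder(p["id"]));
```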

    Read the article

  • Create a Social Community of Trust Along With Your Federal Digital Services Governance

    - by TedMcLaughlan
The Digital Services Governance Recommendations were recently released, supporting the US Federal Government's Digital Government Strategy Milestone Action #4.2 to establish agency-wide governance structures for developing and delivering digital services. (Figure 1 - From: "Digital Services Governance Recommendations")

While extremely important from a policy and procedure perspective within an Agency's information management and communications enterprise, these recommendations only very lightly reference perhaps the most important success enabler - the "Trusted Community" required for ultimate usefulness of the services delivered. By "ultimate usefulness", I mean the collection of public, transparent properties around government information and digital services that include social trust and validation, social reach, expert respect, and comparative, standard measures of relative value. In other words, do the digital services meet the expectations of the public, social media ecosystem (people AND machines)?

A rigid governance framework, controlling by rules, policies and roles the creation and dissemination of digital services, may meet the expectations of direct end-users and most stakeholders - including the agency information stewards and security officers. All others who may share comments about the services, write about them, swap or review extracts, repackage, visualize or otherwise repurpose the output for use in entirely unanticipated, social ways - these "stakeholders" will not be governed, but may observe guidance generated by a "Trusted Community". As recognized members of the trusted community, these stakeholders may ultimately define the right scope and detail of governance that all other users might observe, promoting and refining the usefulness of the government product as the social ecosystem expects. So, as part of an agency-centric governance framework, it's advised that a flexible governance model be created for stewarding a "Community of Trust" around the digital services. The first steps follow the approach outlined in the Recommendations:

Step 1: Gather a Core Team
In addition to the roles and responsibilities described, perhaps a set of characteristics and responsibilities can be developed for the "Trusted Community Steward/Advocate" - i.e. a person or team who (a) are entirely cognizant of and respected within the external social media communities, and (b) are trusted both within the agency and outside as practical, responsible, non-partisan communicators of useful information. This may seem like a standard Agency PR/Outreach team role - but often an agency or stakeholder subject matter expert with a public, active social persona works even better.

Step 2: Assess What You Have
In addition to existing agency or stakeholder decision-making bodies and assets, it's important to take a PR/Marketing view of the social ecosystem. How visible are the services across the social channels utilized by current or desired constituents of your agency? What's the online reputation of your agency and perhaps the service(s)? Is Search Engine Optimization (SEO) a facet of external communications/publishing lifecycles? Who are the public champions, instigators, value-adders for the digital services, or perhaps just influential "communicators" (i.e. with no stake in the game)? You're essentially assessing your market and social presence, and identifying the actors (including your own agency employees) in the existing community of trust.

Step 3: Determine What You Want
The evolving Community of Trust will most readily absorb, support and provide feedback regarding "Core Principles" (Element B of the "six essential elements of a digital services governance structure") shared by your Agency, and will obviously play a large, though probably very unstructured, part in Element D, "Stakeholder Input and Participation". Plan for this, and seek input from the social media community with respect to performance metrics - these should be geared around the outcomes and growth of the trusted community's actions. How big and active is this community? What's the influential reach of this community with respect to particular messaging or campaigns generated by the Agency? What's the referral rate TO your digital services, FROM channels owned or operated by members of this community? (This requires governance with respect to content generation, inclusive of "markers" or "tags".)

At this point, while your Agency proceeds with steps 4 ("Build/Validate the Governance Structure") and 5 ("Share, Review, Upgrade"), the Community of Trust might as well just get going, and start adding value and usefulness to the existing conversations and existing data services - loosely though directionally stewarded by your trusted advocate(s).

Why is this an "Enterprise Architecture" topic? Because it's increasingly apparent that a Public Service "Enterprise" is not wholly contained within Agency facilities, firewalls and job titles - it's also manifested in actual, perceived or representative forms outside the walls, on the social Internet. An Agency's EA model and resulting investments both facilitate and are impacted by the "Social Enterprise". At Oracle, we're very active both within our Enterprise and outside, helping foster social architectures that enable truly useful public services, digital or otherwise.

    Read the article

  • How to install new Intel Ethernet driver

    - by Alex Farber
Ubuntu 12.04 x64 doesn't recognize the newest Intel Ethernet adapter on my desktop (Intel Ethernet Connection i217-V). I downloaded the required driver from Intel and compiled it using make. Now I have:

alex@alex64-six:~$ find / -name 'e1000e.ko' 2>/dev/null
/home/alex/Documents/IntelEthernetDriver/e1000e-3.0.4/src/e1000e.ko
/lib/modules/3.2.0-64-generic/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko

The first file is the new driver compiled from the Intel sources. The second is probably the existing driver from the Ubuntu distribution, which doesn't recognize the new Ethernet adapter. How can I make the system use the new driver instead of the existing one? Any other solution is welcome. For now, I cannot upgrade to the latest Ubuntu release, because I depend on some third-party products.
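One common way to switch over - sketched below under the assumption that the paths match the find output above and that the Intel e1000e source tree provides the usual make install target (check its README first) - is to install the newly built module over the in-tree one, refresh the module dependencies and the initramfs, and reload it:

# run from the driver source directory that produced e1000e.ko
cd ~/Documents/IntelEthernetDriver/e1000e-3.0.4/src

# keep a backup of the distribution module, then install the newly built one
sudo cp /lib/modules/$(uname -r)/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko \
        /lib/modules/$(uname -r)/kernel/drivers/net/ethernet/intel/e1000e/e1000e.ko.bak
sudo make install

# rebuild module dependencies and the initramfs so the new module is also used at boot
sudo depmod -a
sudo update-initramfs -u

# swap the module in the running kernel (or simply reboot)
sudo rmmod e1000e
sudo modprobe e1000e

Afterwards, modinfo e1000e should report the version of the driver built from the Intel sources rather than the stock 3.2 kernel module.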

    Read the article

  • Announcement: Employee Info Starter Kit (v5.0) is Released

    - by Mohammad Ashraful Alam
Ever wanted to have a simple jQuery menu bound to an ASP.NET web site map file? Ever wanted to have cool CSS design stuff implemented on your ASP.NET data-bound controls? Ever wanted to let Visual Studio generate logical layers for you, which can be easily tested, customized and bound to ASP.NET data controls? If your answers to the above questions are 'yes', then you will probably be happy to try out the latest release (v5.0) of the Employee Info Starter Kit, which is intended to address different types of real-world challenges faced by web application developers when performing common CRUD operations. Using a single database table 'Employee', the current release illustrates how to utilize Microsoft ASP.NET 4.0 Web Form Data Controls, Entity Framework 4.0 and Visual Studio 2010 effectively in that context.

Employee Info Starter Kit is an open source ASP.NET project template that is highly influenced by the 'Pareto Principle' or 80-20 rule: it is targeted at enabling a web developer to gain 80% of the productivity with 20% of the effort with respect to learning curve and production. The project template, titled "Employee Info Starter Kit", was initially hosted on Microsoft Code Gallery and has been downloaded 150,000+ times since. The latest version of the starter kit is hosted on CodePlex.

Release Highlights

User End Functional Specification: The user-end functionality of this starter kit is pretty simple and straightforward, focused on performing CRUD operations on employee records: creating a new employee record, reading existing employee records, updating an existing employee record, and deleting existing employee records.

Architectural Overview: Simple 3-layer architecture (presentation, business logic and data access layer); ASP.NET Web Forms based user interface; built-in code generators for the logical layers, implemented in Visual Studio's default template engine (T4); built-in Entity Framework entities as business entities (aka data containers); Data Mapper design pattern based Data Access Layer, implemented in C# and Entity Framework; Domain Model design pattern based Business Logic Layer, implemented in C#; object model for cross-cutting concerns (such as validation, logging, exception management).

Minimum System Requirements: Visual Studio 2010 (Web Developer Express Edition) or higher; SQL Server 2005 (Express Edition) or higher.

Technology Utilized: Programming languages/scripts - browser side: JavaScript; web server side: C#; code generation template: T4. Frameworks - .NET Framework 4.0; JavaScript framework: jQuery 1.5.1; CSS framework: 960 Grid System. .NET Framework components - Entity Framework, optional/named parameters (new in .NET 4.0), Tuple (new in .NET 4.0), extension methods, lambda expressions, anonymous types, query expressions, automatically implemented properties, LINQ, partial classes and methods, generic types, nullable types. ASP.NET - meta description and keyword support (new in .NET 4.0), routing (new in .NET 4.0), GridView (CSS support for sorting, new in .NET 4.0), Repeater, FormView, LoginView, SiteMapPath, skins, themes, master pages, ObjectDataSource, role-based security.

Getting Started Guide: Seeing Employee Info Starter Kit in action is pretty easy! Download the latest version, extract the file, open the C# project file (Eisk.Web.csproj) from the extracted folder in Visual Studio 2010, and hit Ctrl+F5!
The current release (v5.0) of Employee Info Starter Kit is properly packaged, fully documented and well tested. If you want to learn more about it in detail, just check the following links: Release Home Page, Installation Walkthrough, Hands-on Coding Walkthrough, Technical Reference. Enjoy!

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting that over. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for: MyNes plays from a ROM, not NSF, so I have to rip out the APU and get it to play NSF files; and the MyNes APU uses SlimDX, so I have to convert that to FMOD.NET. I am really stuck on how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions:

1. From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this, that would be great.
2. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it?
3. Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code?
4. Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically, MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated.

As an aside, I don't really understand the loading / playing sections of the spec I linked at all. It seems to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.
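Regarding question 4: the usercreatedsound pattern is the usual route - create a sound with MODE.OPENUSER plus a CREATESOUNDEXINFO whose pcmreadcallback hands FMOD whatever the emulated APU produces. Below is only a rough sketch against the FMOD Ex .NET wrapper of that era; the exact struct fields and delegate signature should be checked against the wrapper version actually in the project, and FillFromApu is a hypothetical placeholder for the ported APU output, not an existing MyNes call:

using System;
using System.Runtime.InteropServices;

class NsfStreamSketch
{
    static FMOD.System system = null;

    static void Main()
    {
        FMOD.Factory.System_Create(ref system);
        system.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);

        var exinfo = new FMOD.CREATESOUNDEXINFO();
        exinfo.cbsize = Marshal.SizeOf(exinfo);
        exinfo.numchannels = 1;                          // NES APU output is mono
        exinfo.defaultfrequency = 44100;
        exinfo.format = FMOD.SOUND_FORMAT.PCM8;          // matches the PCM8 output MyNes produces
        exinfo.length = 44100;                           // ~1 second of 8-bit mono data per loop
        exinfo.pcmreadcallback = PcmReadCallback;

        FMOD.Sound sound = null;
        system.createSound((string)null,
            FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL,
            ref exinfo, ref sound);

        FMOD.Channel channel = null;
        system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);

        // ... call system.update() regularly from the application's main loop ...
    }

    // FMOD invokes this whenever it needs more PCM data for the user-created stream
    static FMOD.RESULT PcmReadCallback(IntPtr soundraw, IntPtr data, uint datalen)
    {
        byte[] buffer = new byte[datalen];
        FillFromApu(buffer);                             // hypothetical: run the emulated APU and collect samples
        Marshal.Copy(buffer, 0, data, (int)datalen);
        return FMOD.RESULT.OK;
    }

    static void FillFromApu(byte[] buffer) { /* ported APU output goes here */ }
}

In other words, the cycle-count bookkeeping from question 2 does not go away: the APU still has to be clocked, but it is driven from inside the PCM read callback instead of by the rest of the emulator.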

    Read the article

  • Oracle + Sun Product Strategy Webcast Series

    - by Paulo Folgado
The Oracle + Sun Product Strategy Webcast series is composed of informative, on-demand sessions that offer strategies for Sun's major product lines related to the company combination, explain how Oracle will deliver more innovation to our customers, and outline our approach to protecting customers' investments. Ranging from 5 to 27 minutes each, the Webcasts cover the strategies for hardware, systems, software, solutions, and partners. In addition, Judson Althoff, SVP, Worldwide Alliances and Channels, Oracle, followed up the Webcast series with a video FAQ to help answer the following top partner questions about the Oracle + Sun combination and the OPN Specialized program: What is the impact the overall combined company will have on the partners? What are Oracle's plans for selling direct and what is the impact to partners? How will Sun partners integrate into OPN Specialized? As a Sun partner, am I automatically migrated into OPN Specialized? Will Oracle continue to partner with other hardware vendors? How will Oracle map existing Sun investments and certifications into OPN Specialized? As a Sun partner new to Oracle, where should I be placing my focus? What can partners expect to see relative to Exadata V2? How do content delivery platforms (CDPs) fit into the Oracle framework? How do existing Sun partners place orders?

    Read the article

  • Model Driven Architecture Approach in programming / modelling

    - by yak
I know the basics of model driven architecture: it is all about modeling the system I want to create and generating the core code afterwards. I used CORBA a while ago. The first thing I needed to do was to create an abstract interface (some kind of model of the system I wanted to build) and generate the core code later. But I have a different question: is model driven architecture a broader approach than that? I mean, let's say that I have a modelling language in which I want to model an EXISTING system (as opposed to a system I want to CREATE), and then analyze the resulting model and different facts about that modeled abstraction. In this case, can the process I described be considered a model driven architecture approach? I mean, I have the model, but it is a model of an existing system, not of a system to be created.

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
originally published in CoDe Magazine Editorial

Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

IIS Developer Express
IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios.

SQL Server Compact 4.0
Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new - it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
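To make the drop-in claim concrete, here is a minimal sketch of what plain ADO.NET access to a SQL Server Compact 4.0 file looks like; it assumes the System.Data.SqlServerCe 4.0 assembly sits in the application's bin folder (or GAC), and the Northwind40.sdf file in App_Data is just a hypothetical example database:

using System.Data.SqlServerCe;   // ships with SQL Server Compact 4.0

public static class CompactDemo
{
    public static string GetFirstCustomerName()
    {
        // |DataDirectory| resolves to App_Data in an ASP.NET application
        var connectionString = @"Data Source=|DataDirectory|\Northwind40.sdf";

        using (var connection = new SqlCeConnection(connectionString))
        using (var command = new SqlCeCommand("SELECT TOP(1) CompanyName FROM Customers", connection))
        {
            connection.Open();
            return (string)command.ExecuteScalar();
        }
    }
}

No server instance, no attach step - the engine opens the .sdf file directly, which is what makes the xcopy-deployment scenario work.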
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

ASP.NET Razor View Engine
What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like along with some comment annotations:

<!DOCTYPE html>
<html>
    <head>
        <title></title>
    </head>
    <body>
    <h1>Razor Test</h1>

    <!-- simple expressions -->
    @DateTime.Now
    <hr />

    <!-- method expressions -->
    @DateTime.Now.ToString("T")

    <!-- code blocks -->
    @{
        List<string> names = new List<string>();
        names.Add("Rick");
        names.Add("Markus");
        names.Add("Claudio");
        names.Add("Kevin");
    }

    <!-- structured block statements -->
    <ul>
    @foreach(string name in names){
        <li>@name</li>
    }
    </ul>

    <!-- Conditional code -->
    @if(true) {
        <!-- Literal Text embedding in code -->
        <text>
        true
        </text>;
    }
    else
    {
        <!-- Literal Text embedding in code -->
        <text>
        false
        </text>;
    }
    </body>
</html>

Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content.
The syntax is easier to read and grok, and much shorter to type than ASP.NET alligator tags (<% %>), and it is also aesthetically easier to understand what’s happening in the markup code. Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script- or metadata-driven output generators for many types of applications, from code/screen generators to simple form letters to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine. Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor becomes the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

WebMatrix Development Environment
To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
WebMatrix leverages the Web Platform Installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

Simplified Database Access
The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

@{
    var db = Database.OpenFile("FirstApp.sdf");
    string sql = "select * from customers where Id > @0";
}
<ul>
@foreach(var row in db.Query(sql, 1)){
    <li>@row.FirstName @row.LastName</li>
}
</ul>

The Query function takes a SQL statement plus any number of positional (@0, @1 etc.) SQL parameters as simple values. The result is returned as a collection of rows, which in turn are row objects with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non-.NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why something like this was not included in .NET from the beginning, instead of making developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems like Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies, as required.

The good, the bad, the obnoxious - it’s still .NET
The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
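As a quick illustration of that last point about standard ADO.NET: a Razor page can bypass the Database helper entirely and talk to full SQL Server directly. This is only a sketch - the connection string, database and table names are assumptions, not anything WebMatrix provides:

@using System.Collections.Generic
@using System.Data.SqlClient
@{
    // Hypothetical full SQL Server connection string - adjust server/database as needed
    var connStr = @"server=.\SQLEXPRESS;database=Northwind;integrated security=true";
    var customers = new List<string>();

    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("select FirstName, LastName from Customers", conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                customers.Add(reader["FirstName"] + " " + reader["LastName"]);
        }
    }
}
<ul>
@foreach (var name in customers) {
    <li>@name</li>
}
</ul>

The page stays a plain Razor page; the only difference is that the data access goes through SqlConnection/SqlCommand instead of the dynamic Database helper.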
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a large set of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy - there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

Who needs WebMatrix? Uhm… good question
Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other, simpler platforms like PHP or Python which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks - and other hardcore PHP developers - back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect it’s not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers are trying to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn’t made any easier by WebMatrix; it’s just been shifted a bit further along in your development endeavor, to the point where you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting, and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big, life-changing issues of development, rather than the bread and butter scenarios that many developers are interested in to get their work accomplished.
And that in the end may be WebMatrix’s main raison d'être: to bring some focus back at Microsoft on the fact that simpler and more high-level solutions are actually needed to appeal to non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components), which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing: especially debugging and IntelliSense, but also general editor support. It’s not clear whether this is because the product is still in an early alpha release or whether it’s simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what is interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented Web Forms features that don’t work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later, there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back and take another look and realize just how far we’ve come.
WebMatrix is one of those reminders and one that likely will result in some positive changes on the platform as a whole.

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET, IIS7

    Read the article

  • SQL SERVER – Clustered Index and Primary Key – Contest Win Joes 2 Pros Combo (USD 198) – Day 3 of 5

    - by pinaldave
In August 2011 we ran a contest where every day we gave away one book, for an entire month. The contest was extremely successful - lots of people participated and lots of books were given away. I have received many questions asking if we are doing something similar this month. Absolutely. Instead of running a month-long contest, we are doing something more interesting: we are giving away gifts worth USD 198 every day this week - the Joes 2 Pros 5-volume SQL 2008 Development Certification Training Kit - one copy in India and one in the USA, for a total of two giveaways per day (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza

How to Win: Read the question, read the hints, and answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open to USA and India residents only). 2 winners will be randomly selected and announced on August 20th.

Question of the Day: Which of the following datatypes is usually NOT the best choice for a Primary Key and Clustered Index? a) INT b) BIGINT c) GUID d) SMALLINT

Query Hints: BIG HINT POST. The clustered index is the placement order of a table's records in memory pages. When you insert new records, each record will be inserted into the memory page in the order it belongs. In the figure in the original post we see another new record (Major Disarray) being inserted, in sequence, between Jonny and Rick. Since there is no room in this memory page, some records will need to shift around. The page split occurs when Irene's record moves to the second page. Page splits are considered very bad for performance, and there are a number of techniques to reduce, or even eliminate, the risk of page splits. You can create a clustered index on the table on any field you choose. Sometimes SQL Server will create a clustered index for you. Often the field holding the Primary Key makes a great candidate for the clustered index.

Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 3 in the SQL Joes 2 Pros Development Series: All about SQL Statistics; Introduction to Page Split; The Clustered Index – Simple Understanding; Geography Data Type – Calculating Distance Between Two Points on the Earth; Sparse Data and Space Used by Sparse Data; System and Time Data Types; Data Row Space Usage and NULL Storage.

Next Step: Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open for USA and India).

Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked that particular blog post, and I will announce the winner of the same along with the main contest.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
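To make the hint concrete, here is a small T-SQL sketch (table and column names are purely illustrative) showing the two common ways a clustered index comes about - implicitly via the primary key, or explicitly with CREATE CLUSTERED INDEX - and why an ever-increasing INT key keeps appending rows at the end of the index instead of forcing the page splits that a random GUID key tends to cause:

-- Illustrative only: clustered index created implicitly by the primary key
CREATE TABLE dbo.Employee
(
    EmployeeID INT IDENTITY(1,1) NOT NULL,
    FirstName  VARCHAR(50) NOT NULL,
    LastName   VARCHAR(50) NOT NULL,
    CONSTRAINT PK_Employee PRIMARY KEY CLUSTERED (EmployeeID)  -- new rows always land at the end
);

-- Or created explicitly on a column of your choosing
CREATE TABLE dbo.Customer
(
    CustomerGUID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    CustomerName VARCHAR(100) NOT NULL
);
CREATE CLUSTERED INDEX CIX_Customer_Name ON dbo.Customer (CustomerName);

-- Random GUID values (NEWID) fall "in the middle" of the existing clustered key order,
-- which is exactly the insert pattern that makes page splits likely.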

    Read the article

< Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >