Search Results

Search found 36946 results on 1478 pages for 'sample database'.

Page 381 of 1478

  • Update table instantly or “Bulk” Update in database later? And is it advisable?

    - by Mestika
    Hi, I have a question regarding a semi-constant update in a database. In short, it concerns a checkout function on a web page; each time the checkout is invoked, it performs five steps. I want to optimize this function and have my eye on a step where I update a table each time a checkout is performed: I take the information retrieved from the shopping cart and then update the table in question. I do have some indexes on the table; the gain from them is greater than the cost of maintaining them, so that is a cost I'm willing to take. Now, my question is: could it, in terms of performance, be better not to update the table instantly, but instead to collect every checkout item, save it somewhere (maybe in a file), and then at a specific time (or several times) a day take that file and update the table with the new information? That got me thinking about whether there is some sort of bulk update that could take a file, hashmap, array (or something else) and apply it. I'm using IBM DB2 version 9.7. Mestika
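
    For what it's worth, a common way to defer writes like this in DB2 is to stage the checkout rows cheaply (in a staging table, or in a file fed to the IMPORT/LOAD utilities) and fold them into the real table periodically with a single set-based statement. The following is only a rough sketch under assumed table and column names (checkout_staging, order_totals, product_id and so on), not code from the question:

    -- Stage rows as checkouts happen (cheap insert, minimal indexing here)
    INSERT INTO checkout_staging (product_id, qty_sold, sold_at)
    VALUES (?, ?, CURRENT TIMESTAMP);

    -- Periodically fold everything staged into the target table in one statement
    MERGE INTO order_totals t
    USING (SELECT product_id, SUM(qty_sold) AS qty
             FROM checkout_staging
            GROUP BY product_id) s
       ON t.product_id = s.product_id
    WHEN MATCHED THEN
        UPDATE SET t.total_sold = t.total_sold + s.qty
    WHEN NOT MATCHED THEN
        INSERT (product_id, total_sold) VALUES (s.product_id, s.qty);

    -- Empty the staging table once the merge has committed
    DELETE FROM checkout_staging;

    Whether this beats per-checkout updates depends on how current the table needs to be: the batched version trades freshness for far fewer index-maintenance hits.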

    Read the article

  • Is it best to make fewer calls to the database and output the results in an array?

    - by Jonathan
    I'm trying to create a more succinct way to make hundreds of db calls. Instead of writing the whole query out every time I want to output a single field, I tried to port the code into a class that does all the query work. This is the class I have so far:

    class Listing {
        /* Connect to the database */
        private $mysql;

        function __construct() {
            $this->mysql = new mysqli(DB_LOC, DB_USER, DB_PASS, DB) or die('Could not connect');
        }

        function getListingInfo($l_id = "", $category = "", $subcategory = "", $username = "", $status = "active") {
            $condition = "`status` = '$status'";
            if (!empty($l_id))        $condition .= " AND `L_ID` = '$l_id'";
            if (!empty($category))    $condition .= " AND `category` = '$category'";
            if (!empty($subcategory)) $condition .= " AND `subcategory` = '$subcategory'";
            if (!empty($username))    $condition .= " AND `username` = '$username'";
            $result = $this->mysql->query("SELECT * FROM listing WHERE $condition") or die('Error fetching values');
            $info = $result->fetch_object() or die('Could not create object');
            return $info;
        }
    }

    This makes it easy to access any info I want from a single row:

    $listing = new Listing;
    echo $listing->getListingInfo('', 'Books')->title;

    This outputs the title of the first listing in the category "Books". But if I want to output the price of that listing, I have to make another call to getListingInfo(), which runs another query against the db and again returns only the first row. This is much more succinct than writing the entire query each time, but I feel like I may be calling the db too often. Is there a better way to output the data from my class and still be succinct in accessing it (maybe collecting all the rows into an array and returning the array)? If yes, how?
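
    One way to do that, sketched loosely here (the method name getListings and its behaviour are made up for illustration, not part of the original class), is to run the query once and hand back every matching row:

    function getListings($category = "", $status = "active") {
        // Escape the inputs, then build a single query for all matching rows
        $status   = $this->mysql->real_escape_string($status);
        $category = $this->mysql->real_escape_string($category);

        $sql = "SELECT * FROM listing WHERE `status` = '$status'";
        if ($category !== "") {
            $sql .= " AND `category` = '$category'";
        }

        $result = $this->mysql->query($sql) or die('Error fetching values');

        // One round trip: collect every row into an array of objects
        $rows = array();
        while ($row = $result->fetch_object()) {
            $rows[] = $row;
        }
        return $rows;
    }

    // Usage: one query, then read any field of any row without touching the db again
    $listings = $listing->getListings('Books');
    echo $listings[0]->title;
    echo $listings[0]->price;

    The caller pays for one query per listing set instead of one per field, at the cost of holding the whole result set in memory.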

    Read the article

  • Problem displaying images stored in a MySQL database?

    - by user1400
    I have images in a BLOB field in my database, and I use the code below to display them, but no picture shows up. Could someone tell me where I'm going wrong? This is my model:

    public function getListUser()
    {
        $select = $this->select()
                       ->order('lastname DESC');
        $adapter = new Zend_Paginator_Adapter_DbTableSelect($select);
        return $adapter;
    }

    This is my controller:

    $userModel = new Admin_Model_User();
    $adapter = $userModel->getListUser();
    $paginator = new Zend_Paginator($adapter);
    $paginator->setItemCountPerPage(1);
    $page = $this->_request->getParam('page', 1);
    $paginator->setCurrentPageNumber($page);
    $this->view->paginator = $paginator;

    This is my view code:

    <td style="width: 20%;">
        <?php echo $this->lastname ?>
    </td>
    <td style="width: 20%;"><?php header("Content-type: image/gif"); print $this->image; ?></td>
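
    The header() call inside the view is the giveaway: by the time the view runs, HTML has already been sent, so the browser never receives a clean image response. The usual pattern is to serve the BLOB from its own controller action and reference it with an <img> tag. A rough Zend Framework 1 sketch, with the action name, route and row lookup assumed rather than taken from the question:

    // ImageController.php - returns one user's picture as a standalone response
    public function showAction()
    {
        $id = (int) $this->_getParam('id');

        $userModel = new Admin_Model_User();
        $row = $userModel->find($id)->current();   // primary-key lookup (assumed)

        // No layout, no view script - the response body is the raw image bytes
        $this->_helper->layout()->disableLayout();
        $this->_helper->viewRenderer->setNoRender(true);

        $this->getResponse()
             ->setHeader('Content-Type', 'image/gif')
             ->setBody($row->image);
    }

    The paginated view then emits something like <img src="/image/show/id/<?php echo $item->id; ?>" /> instead of printing the binary data inline.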

    Read the article

  • CodePlex Daily Summary for Tuesday, September 18, 2012

    CodePlex Daily Summary for Tuesday, September 18, 2012Popular ReleasesfastJSON: v2.0.5: 2.0.5 - fixed number parsing for invariant format - added a test for German locale number testing (,. problems)????????API for .Net SDK: SDK for .Net ??? Release 4: 2012?9?17??? ?????,???????????????。 ?????Release 3??????,???????,???,??? ??????????????????SDK,????????。 ??,??????? That's all.VidCoder: 1.4.0 Beta: First Beta release! Catches up to HandBrake nightlies with SVN 4937. Added PGS (Blu-ray) subtitle support. Additional framerates available: 30, 50, 59.94, 60 Additional sample rates available: 8, 11.025, 12 and 16 kHz Additional higher bitrates available for audio. Same as Source Constant Framerate available. Added Apple TV 3 preset. Added new Bob deinterlacing option. Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will keep running and continue pro...DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.01: Stabilization release fixed this issues: Links not worked on FF, Chrome and Safari Modified packaging with own manifest file for install and source package. Moved the user Image on the Login to the left side. Moved h2 font-size to 24px. Note : This release Comes w/o source package about we still work an a solution. Who Needs the Visual Studio source files please go to source and download it from there. Known 16 CSS issues that related to the skin.css. All others are DNN default o...Visual Studio Icon Patcher: Version 1.5.1: This fixes a bug in the 1.5 release where it would crash when no language packs were installed for VS2010.sheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.1.0: This release of sheetengine introduces major drawing optimizations. A background canvas is created with the full drawn scenery onto which only the changed parts are redrawn. For example a moving object will cause only its bounding box to be redrawn instead of the full scene. This background canvas is copied to the main canvas in each iteration. For this reason the size of the bounding box of every object needs to be defined and also the width and height of the background canvas. The example...VFPX: Desktop Alerts 1.0.2: This update for the Desktop Alerts contains changes to behavior for setting custom sounds for alerts. I have removed ALERTWAV.TXT from the project, and also removed DA_DEFAULTSOUND from the VFPALERT.H file. The AlertManager class and Alert class both have a "default" cSound of ADDBS(JUSTPATH(_VFP.ServerName))+"alert.wav" --- so, as long as you distribute a sound file with the file name "alert.wav" along with the EXE, that file will be used. You can set your own sound file globally by setti...MCEBuddy 2.x: MCEBuddy 2.2.15: Changelog for 2.2.15 (32bit and 64bit) 1. Added support for %originalfilepath% to get the source file full path. Used for custom commands only. 2. Added support for better parsing of Media Portal XML files to extract ShowName and Episode Name and download additional details from TVDB (like Season No, Episode No etc). 3. Added support for TVDB seriesID in metadata 4. 
Added support for eMail non blocking UI testCrashReporter.NET : Exception reporting library for C# and VB.NET: CrashReporter.NET 1.2: *Added html mail format which shows hierarchical exception report for better understanding.PDF Viewer Web part: PDF Viewer Web Part: PDF Viewer Web PartMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.67: Fix issue #18629 - incorrectly handling null characters in string literals and not throwing an error when outside string literals. update for Issue #18600 - forgot to make the ///#DEBUG= directive also set a known-global for the given debug namespace. removed the kill-switch for disregarding preprocessor define-comments (///#IF and the like) and created a separate CodeSettings.IgnorePreprocessorDefines property for those who really need to turn that off. Some people had been setting -kil...MPC-BE: Media Player Classic BE 1.0.1.0 build 1122: MPC-BE is a free and open source audio and video player for Windows. MPC-BE is based on the original "Media Player Classic" project (Gabest) and "Media Player Classic Home Cinema" project (Casimir666), contains additional features and bug fixes. Supported Operating Systems: Windows XP SP2, Vista, 7 32bit/64bit System Requirements: An SSE capable CPU The latest DirectX 9.0c runtime (June 2010). Install it regardless of the operating system, they all need it. Web installer: http://www.micro...Preactor Object Model: Visual Studio Template .NET 3.5: Visual Studio Template with all the necessary files to get started with POM. You will still need to Get the Preactor.ObjectModel and Preactor.ObjectModleExtensions libraries from Nuget though. You will also need to sign with assembly with a strong name key.Lakana - WPF Framework: Lakana V2: Lakana V2 contains : - Lakana WPF Forms (with sample project) - Lakana WPF Navigation (with sample project)myCollections: Version 2.3.0.0: New in this version : Added TheGamesDB.net API for games and nds Added Fast search options Added order by Artist/Album for music Fixed several provider Performance improvement New Splash Screen BugFixingMicrosoft SQL Server Product Samples: Database: OData QueryFeed workflow activity: The OData QueryFeed sample activity shows how to create a workflow activity that consumes an OData resource, and renders entity properties in a Microsoft Excel 2010 worksheet or Microsoft Word 2010 document. Using the sample QueryFeed activity, you can consume any OData resource. The sample activity uses LINQ to project OData metadata into activity designer expression items. By setting activity expressions, a fully qualified OData query string is constructed consisting of Resource, Filter, Or...F# 3.0 Sample Pack: FSharp 3.0 Sample Pack for Visual Studio 2012 RTM: F# 3.0 Sample Pack for Visual Studio 2012 RTMANPR MX: ANPR_MX Release 1: ANPR MX Release 1 Features: Correctly detects plate area for the average North American plate. (It won't work for the "European" plate size.) Provides potential values for the recognized plate. Allows images 800x600 and below (.jpg / .png). The example requires the VC 10 runtime & .NET 4 Framework to be already installed. The Source code project was made on Visual Studio 2010.Cocktail: Cocktail v1.0.1: PrerequisitesVisual Studio 2010 with SP1 (any edition but Express) Optional: Silverlight 4 or 5 Note: Install Silverlight 4 Tools and then the Silverlight 4 Toolkit. 
Likewise for Silverlight 5 Tools and the Silverlight 5 Toolkit DevForce Express 6.1.8.1 Included in the Cocktail download, DevForce Express requires registration) Important: Install DevForce after all other components. Download contentsDebug and release assemblies API documentation Source code License.txt Re...weber: weber v0.1: first release, creates a basic browser shell and allows user to navigate to web sites.New Projects.NET Code Editor & Compiler Component: .Net compiler component with integrated advanced text box, VisualStudio like highlightning, ability to intercept and show StandardOutput strings.NET Plugin Manager: Provides agnostic functionality for tiered plugin loading, unloading, and plugin collection management.Amazon Control Panel v2: Amazon Control Panel is a application that lets you control you Amazon Seller Central account using the Amazon MWS (Merchant Web Service) API.AutoSPSourceBuilder: AutoSPSourceBuilder: a utility for automatically building a SharePoint 2010 or 2013 install source including service packs, language packs & cumulative updates.CAOS: RBAC acess controllChat Forum: An Internet  forum,  or message  board,  is  an online discussion  site conversations  in  the  form  of  posted  messages.CRM 2011 - Many-To-Many Relationship Entity View: This Silverlight Web Resource for CRM 2011 will allow user to see N:N relationship entity data from single place.dardasim: dardasim gil and lior Tel Cabir DolphinsDBAManage: ???????ERP??,????!DimDate Generator: A SSIS project for generation a data dimansion table and data.DNL: eine grße .net bibliothek für entwicklerDouban FM for Metro: A music radio client for http://douban.fm running on Windows 8 / WinRTExtended WPF Control: Extended WPF Control for research and learning.FizzBuzzDaveC: Implements the classic FizzBuzz programmine exercise.HamStart: Nothing for now...Infopath XSN Modifier: A tool for editing the dataconnections of Infopath.KH Picture Resizer: Picture Resizer ermöglicht es Bilder per Drag and Dop zu verkleinern. Das Program wurde in C# geschrieben und nutzt Windows Forms.Korean String Extension for .NET: ?? ??? ??? ????? ???? string??? ??? ????? Extension library for "string" class that enhances "Hangul Jamo system" features Lucky Loot - Tattoo Shop Management Application: Lucky Loot - Tattoo Shop Management Application Por: Eric Gabriel Rodrigues Castoldi Objetivo: Trabalho de Conclusão de Curso de Sistemas de InformaçãoMagnOS - C# Cosmos Operating System: MagnOS is an Open Source operating system, made to learn how to make operating systems with Cosmos.Móa mày: Project m?iOData Samples: A collection of samples demonstrating solutions and functionality in WCF Data Services, ODataLib and EdmLib.Online Image Editor: Online Photo CanvasOptimuss Administración: La mejor aplicación de Gestión y Control EscolarOptimuss Obelix: La mejor aplicación de Gestión y Control EscolarPersonal Website: My personal websitePlanisoft: Proyecto de Planilla para clinica los fresnosPROYECTODT: ..................................................................................................................PtLibrary: PtLibrary stands for Peter Thönell's Delphi library. PtSettings and PtSettingsGUI make the management and use of settings extremely easy and powerful.RTS WebServer: A small lightweight, modern and fast webserver (template). 
with in the feature the newest technologies like SPDY and websocketsStandards: Standards is an Intranet application (using Windows authentication) designed to document and manage company standards. It is written in C#/MVC 4.Truttle OS: This is an OS I made with CosmosWfp System: zdgdsfgdsfgzpo: projekt na zaliczenie zpo???: ???

    Read the article

  • CodePlex Daily Summary for Wednesday, June 01, 2011

    CodePlex Daily Summary for Wednesday, June 01, 2011Popular ReleasesVidCoder: 0.9.1: Added color coding to the Log window. Errors are highlighted in red, HandBrake logs are in black and VidCoder logs are in dark blue. Moved enqueue button to the right with the other control buttons. Added logic to report failures when errors are logged during the encode or when the encode finishes prematurely. Added Copy button to Log window. Adjusted audio track selection box to always show the full track name. Changed encode job progress bar to also be colored yellow when the enco...Terraria Map Generator: TerrariaMapTool 1.0.0.3 Beta: 1) Catch all exception from the gui app. 2) Fixed the use of the graphics device to really use reach.AutoLoL: AutoLoL v2.0.1: - Fixed a small bug in Auto Login - Fixed the updaterEPPlus-Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.9.0.1: EPPlus-Create advanced Excel 2007 spreadsheets on the server This version has been updated to .Net Framework 3.5 New Features Data Validation. PivotTables (Basic functionalliy...) Support for worksheet data sources. Column, Row, Page and Data fields. Date and Numeric grouping Build in styles. ...and more And some minor new features... Ranges Text-Property|Get the formated value AutofitColumns-method to set the column width from the content of the range LoadFromCollection-metho...jQuery ASP.Net MVC Controls: Version 1.4.0.0: Version 1.4.0.0 contains the following additions: Upgraded to MVC 3.0 Upgraded to jQuery 1.6.1 (Though the project supports all jQuery version from 1.4.x onwards) Upgraded to jqGrid 3.8 Better Razor View-Engine support Better Pager support, includes support for custom pagers Added jqGrid toolbar buttons support Search module refactored, with full suport for multiple filters and ordering And Code cleanup, bug-fixes and better controller configuration support.Restbucks on .Net: RestBucks Alpha 1: This is the first release of the application.Nearforums - ASP.NET MVC forum engine: Nearforums v6.0: Version 6.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: Authentication using Membership Provider for SQL Server and MySql Spam prevention: Flood Control Moderation: Flag messages Content management: Pages: Create pages (about us/contact/texts) through web administration Allow nearforums to run as an IIS subapp Migrated Facebook Connect to OAuth 2.0 Visit the project Roadmap for more details.NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.8b: Changes: - fix critical issue 15922(AccessViolationException) once and for all update is strongly recommended Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:..............................................................COM Proxy Management, Events, etc. - Examples in C# and VB.Net:............................................................Excel, Word, Outlook, PowerPoint, Access - COMAddin Examples in C# and VB....Facebook Graph Toolkit: Facebook Graph Toolkit 1.5.4186: Updates the API in response to Facebook's recent change of policy: All Graph Api accessing feeds or posts must provide a AccessToken.SharePoint Farm Poster: SharePoint Farm Poster: SharePoint Farm Poster is generated by a PowerShell Script. Run this script under the Farm Admin Account. After downloading, unblock the file in the Property Window. Current version is beta : v0.3.4Serviio for Windows Home Server: Beta Release 0.5.2.0: Ready for widespread beta. 
Synchronized build number to Serviio version to avoid confusion.AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta4: ??AcDown?????????????,??????????????,????、????。?????Acfun????? ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??v3.0 Beta4 2011-5-31?? ???Bilibili.us????? ???? ?? ???"????" ???Bilibili.us??? ??????? ?? ??????? ?? ???????? ?? ?? ???Bilibili.us?????(??????????????????) ??????(6.cn)?????(????) ?? ?????Acfun?????????? ?????????????? ???QQ???????? ????????????Discussion...EnhSim: EnhSim 2.4.5 ALPHA: 2.4.5 ALPHAThis release supports WoW patch 4.1 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Added in the T12 s...TerrariViewer: TerrariViewer v2.4.1: Added Piggy Bank editor and fixed some minor bugs.Kooboo CMS: Kooboo CMS 3.02: What is new in kooboo cms 3.02 The most important updates of this version is the Kooboo site builder, an unique and creative web design tool, design an professional website and export to Kooboo CMS. See: http://www.sitekin.com Add Version contorl on View, Layout and other elements. Add user CMS language selection, user can select a language to use on their CMS backend. Add User profile provider, you can use now stop website user information on a SQL database. Previously it stored on XML...mojoPortal: 2.3.6.6: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2366-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Terraria World Creator: Terraria World Creator: Version 1.01 Fixed a bug that would cause the application to crash. Re-named the Application.Microsoft All-In-One Code Framework - a centralized code sample library: All-In-One Code Framework 2011-05-26: Alternatively, you can install Sample Browser or Sample Browser VS extension, and download the code samples from Sample Browser. Improved and Newly Added Examples:For an up-to-date code sample index, please refer to All-In-One Code Framework Sample Catalog. NEW Samples for Dynamics Sample Description Owner CSDynamicsNAVWebServices The code sample shows syntax for calling Dynamics NAV Web Services. Lars Lohndorf-Larsen NEW Samples for WPF Sample Description Owner CSWPFDataGridCustomS...Terraria World Viewer: Version 1.1: Update May 26th Added Chest Filtering, this allows chests only containing certain items to have their symbol drawn. (Its under advanced settings tab) GUI elements (checkboxes/etc) are persistant between uses of the application Beta Worlds (i.e. 
Release #38) will work properly Symbols can be enabled or disabled on a per symbol basis Chest Information tab which is just a dump of the current chest information Meterorite is now visible as a bright magenta pink Application defaults to ...MVC Controls Toolkit: Mvc Controls Toolkit 1.1 RC: *Added: Compatibility with jQuery 1.6.1 Rendering of enumerables with images and/or customizable strings improved the client side tempate engine added new parameters to the template definition binding all new knockout bindings helpers have been fully implemented added a new overload for defining the client-side ViewModel The SetTme method has the option to store the theme in a permanent cookie If no CSS class is provided for the watermark of a TypedTextBox the watermark class of the current t...New ProjectsAlumnus SI Machung: ExSis Alumnus Website makes it easier for Alumni of Ma Chung University, that located at Malang, East Java, Indonesia. You'll no longer have to find your friend that separated in so many years. It's developed in Microsoft Visual Studio 2010. Check our website at kriswanto90.charlezzzz.comb9b18a35-a80a-440c-bb8c-195be0225cfa: b9b18a35-a80a-440c-bb8c-195be0225cfaCustomer Care Portal - SharePoint 2010 for Internet Sites: The Customer Care Portal demonstrates cooperation of Microsoft SharePoint 2010 with several technologies, such as: • Silverlight • Windows Server AppFabric • Business Data Connectivity • Microsoft InfoPath 2010 DBServer Folder Browser: DBServer Folder Browser It's developed in C#Dual Development: A place for colloboration.EVMDOCS ASP.NET: EVMDOCS project for ECM.VSTU.RU FCNS.Money: FCNS.Money?????????????。???,?????????????,??????????????。???????????????? ??,?FCNS.Money??????????????,???????????,????????????????,??????????,???? ???????????。HDBMS: Attempt to implement a hierarchical database that can participate in a distributed 2PCHRO: 3D FPS in a secret underwater soviet bunkerIstream: Istream is a new web browser for windows computers. We have designed the browser so you have everything right at your fingertips. Download it today and discover a new browsing experience, we have a range of features avaialble just now and its only the first release. istream.imLaptop Battery Usage Pattern: Figures out battery discharge and recharge pattern. Suggests estimated discharge time and recharge time based on past history. Also alerts when it detects considerable degradation of battery life. Mediawiki tools for Office: The current version installs an Outlook 2007 plugin. This plugin adds a send to wiki command to the outlook context menu. The functionality is very similar to the behavior of the One Note plugin which is installed in office 2007. PW API Library: The PW API Library provides a .NET interface to the ProgrammableWeb's API library. This is a personal project built using the ProgrammableWeb internal API and is not provided by the ProgrammableWeb team. It's developed in C# using .NET 4.0 framework.Restbucks on .Net: Implementation of the RestBucks example; from the book "Rest on Practice" on the .Net plattaform.SharePoint Farm Poster: View the entire SharePoint Farm configuration as a single HTML Poster.Simple Silverlight Bounce Effect: Simple bounce effect in Silverlight. 
A demo of this project can be seen at http://www.voltar.ch/en/results/technologieSistema Gestor Escolar: Aquesta aplicacio permet gestionar activitats pròpies d'una escolaSmart WCF Client Wrapper: This is a smart WCF client wrapper that keeps your code clean, and hides common beginner mistakes from the end user. This code handles - EventHandler Cleanup - Exception Managment - Reliable and efficient reuse of the proxy - Cleanup of the proxy - Clean "using(...)" methodSSIS Extensions - SFTP Task, PGP Task, Zip Task: A set of custom tasks to extend SSIS. Includes a SFTP task, PGP encryption task and zip/unzip task.TFS On The Road: TFS On The Road is a TFS client for Windows Phone 7. With it you can have a good view from your TFS even if you are "on the road". It allows you to access projects, work items(including attachments), changesets, builds, branches, and work item queries.Todo.txt .NET: .NET version of Todo.txt. The project goals are a Windows application as well as a PowerShell commands.

    Read the article

  • CodePlex Daily Summary for Sunday, September 16, 2012

    CodePlex Daily Summary for Sunday, September 16, 2012Popular ReleasesVisual Studio Icon Patcher: VSIP-1.5.1: This fixes a bug in the 1.5 release where it would crash when no language packs were installed for VS2010.sheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.1.0: This release of sheetengine introduces major drawing optimizations. A background canvas is created with the full drawn scenery onto which only the changed parts are redrawn. For example a moving object will cause only its bounding box to be redrawn instead of the full scene. This background canvas is copied to the main canvas in each iteration. For this reason the size of the bounding box of every object needs to be defined and also the width and height of the background canvas. The example...VFPX: Desktop Alerts 1.0.2: This update for the Desktop Alerts contains changes to behavior for setting custom sounds for alerts. I have removed ALERTWAV.TXT from the project, and also removed DA_DEFAULTSOUND from the VFPALERT.H file. The AlertManager class and Alert class both have a "default" cSound of ADDBS(JUSTPATH(_VFP.ServerName))+"alert.wav" --- so, as long as you distribute a sound file with the file name "alert.wav" along with the EXE, that file will be used. You can set your own sound file globally by setti...MCEBuddy 2.x: MCEBuddy 2.2.15: Changelog for 2.2.15 (32bit and 64bit) 1. Added support for %originalfilepath% to get the source file full path. Used for custom commands only. 2. Added support for better parsing of Media Portal XML files to extract ShowName and Episode Name and download additional details from TVDB (like Season No, Episode No etc). 3. Added support for TVDB seriesID in metadata 4. Added support for eMail non blocking UI testCrashReporter.NET : Exception reporting library for C# and VB.NET: CrashReporter.NET 1.2: *Added html mail format which shows hierarchical exception report for better understanding.DotNetNuke Search Engine Sitemaps Provider: Version 02.00.00: New release of the Search Engine Sitemap Providers New version - not backwards compatible with 1.x versions New sandboxing to prevent exceptions in module providers interfering with main provider Now installable using the Host->Extensions page New sitemaps available for Active Forums and Ventrian Property Agent Now derived from DotNetNuke Provider base for better framework integration DotNetNuke minimum compatibility raised to DNN 5.2, .NET to 3.5PDF Viewer Web part: PDF Viewer Web Part: PDF Viewer Web PartMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.67: Fix issue #18629 - incorrectly handling null characters in string literals and not throwing an error when outside string literals. update for Issue #18600 - forgot to make the ///#DEBUG= directive also set a known-global for the given debug namespace. removed the kill-switch for disregarding preprocessor define-comments (///#IF and the like) and created a separate CodeSettings.IgnorePreprocessorDefines property for those who really need to turn that off. Some people had been setting -kil...MPC-BE: Media Player Classic BE 1.0.1.0 build 1122: MPC-BE is a free and open source audio and video player for Windows. MPC-BE is based on the original "Media Player Classic" project (Gabest) and "Media Player Classic Home Cinema" project (Casimir666), contains additional features and bug fixes. Supported Operating Systems: Windows XP SP2, Vista, 7 32bit/64bit System Requirements: An SSE capable CPU The latest DirectX 9.0c runtime (June 2010). 
Install it regardless of the operating system, they all need it. Web installer: http://www.micro...Preactor Object Model: Visual Studio Template .NET 3.5: Visual Studio Template with all the necessary files to get started with POM. You will still need to Get the Preactor.ObjectModel and Preactor.ObjectModleExtensions libraries from Nuget though. You will also need to sign with assembly with a strong name key.Lakana - WPF Framework: Lakana V2: Lakana V2 contains : - Lakana WPF Forms (with sample project) - Lakana WPF Navigation (with sample project)Microsoft SQL Server Product Samples: Database: OData QueryFeed workflow activity: The OData QueryFeed sample activity shows how to create a workflow activity that consumes an OData resource, and renders entity properties in a Microsoft Excel 2010 worksheet or Microsoft Word 2010 document. Using the sample QueryFeed activity, you can consume any OData resource. The sample activity uses LINQ to project OData metadata into activity designer expression items. By setting activity expressions, a fully qualified OData query string is constructed consisting of Resource, Filter, Or...YTNet: YTNet Version 1.0: YT Net Version 1.0 The first release of the YT Net library. This release supports: - Searching YouTube videos by title - Loading the different versions of a video, ordered by the itag value - Downloading videos The release is well tested witch a bunch of unit tests and two sample applications (A Console and a WPF application).F# 3.0 Sample Pack: FSharp 3.0 Sample Pack for Visual Studio 2012 RTM: F# 3.0 Sample Pack for Visual Studio 2012 RTMANPR MX: ANPR_MX Release 1: ANPR MX Release 1 Features: Correctly detects plate area. Provides potential values for the recognized plate. Allows images 800x600 and below.Cocktail: Cocktail v1.0.1: PrerequisitesVisual Studio 2010 with SP1 (any edition but Express) Optional: Silverlight 4 or 5 Note: Install Silverlight 4 Tools and then the Silverlight 4 Toolkit. Likewise for Silverlight 5 Tools and the Silverlight 5 Toolkit DevForce Express 6.1.8.1 or greater Included in the Cocktail download, DevForce Express requires registration) Important: Install DevForce after all other components. Download contentsDebug and release assemblies API documentation Source code Licens...weber: weber v0.1: first release, creates a basic browser shell and allows user to navigate to web sites.Arduino for Visual Studio: Arduino 1.x for Visual Studio 2012, 2010 and 2008: Register for the forum for more news and updates Version 1209.15 is beta and resolves a number of issues in Visual Studio 2012 and minor debugger fixes for all vs versions. After you have tested a working installation, if you would like to beta the debug tool then email beta at visualmicro.com. Version 1208.19 (click the downloads tab) is considered stable for visual studio 2010 and 2008. Key Features of 1209.10 Support for Visual Studio 2012 (.NET 4.5) Debug tools beta team can re-e...AcDown????? - AcDown Downloader Framework: AcDown????? v4.1: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? 
??...Move Mouse: Move Mouse 2.5.2: FIXED - Minor fixes and improvements.New ProjectsBadoev1Lab: ???? ?? ??????? ?????????Banking.NET: Banking.NET is a .NET 4 based Home - Banking Application.ColdFire Studio Macro Assembler and Simulator: Integrated ColdFire MCUs macro assembler and simulator. Write programs in ColdFire assembly or load binary code and test them in a simulator. Eve Galaxy Map: EGM provides an API for the host application to visualize data across a 3D rendering of the Eve Online universe, similar to the in-game Star Map.Hamaazan (Rated): hamaazan website + robotHBCI.NET: HBCI.NET is a .NET 3.5 Library, which communicates with your bank account via HBCI 2.2, FinTS 3.0 and 4.0Http Explorer: A GUI for crafting and submitting http requests and viewing the resulting response.LINQ for C++: LINQ for C++ is an attempt to bring LINQ-like list manipulation to C++11.mroftalpdxs: I WANT TO TEST THE POSSILITY OF PUBLICATTION OF THE DOCUMMMENET.my-secondlib: This is Win 8 Test Code RepoNET Library to interface with Zephyr Bluetooth Heart Rate Monitor: .NET Library for interfacing to Zephyr Heart Rate MonitorPkuBookStore: PkuBookStoreSmallTune: SmallTune is an audioplayer with a long tradition, being completely rewritten and redesigned by now.Taylor's Professional Services: CIS470 Team B Senior ProjectTFS Test Case Review: Provides a means to review Test Cases created in Microsoft Test Manager in a clean, easy-to-read interface.Visual Studio 2012 all caps menu option: A Visual Studio 2012 add-in that allows the user to turn all caps in menu titles on and off in the Visual Studio options dialog. XNA Map Editor: This is a Map Editor that is still in development. It's a map editor for a 2D Platformer. Grid tile placement,loading tiles and objects Made with XNA and C#.

    Read the article

  • How to solve runtime error 'Failed to create writable database file with message The operation couldn’t be completed. (Cocoa error 260.)'?

    - by user1432045
    I am a beginner at iPhone development. I have created a database, but it gives a runtime error of 'Failed to create writable database file with message The operation couldn't be completed.' My code is:

    - (void)createdatabase
    {
        NSFileManager *fileManager = [NSFileManager defaultManager];
        NSError *error;
        NSString *dbPath = [self getDBPath];
        BOOL success = [fileManager fileExistsAtPath:dbPath];
        if (!success) {
            NSString *defaultDBPath = [[[NSBundle mainBundle] resourcePath]
                                          stringByAppendingPathComponent:@"SQL.sqlite"];
            success = [fileManager copyItemAtPath:defaultDBPath toPath:dbPath error:&error];
            if (!success) {
                NSAssert1(0, @"Failed to create writable database file with message '%@'.",
                          [error localizedDescription]);
            }
        }
    }

    Please give any suggestion or sample code that I can apply here.
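
    Cocoa error 260 is NSFileReadNoSuchFileError: the copy fails because a file cannot be found, which usually means SQL.sqlite is not actually in the app bundle (check the target's Copy Bundle Resources build phase and the exact file name) or getDBPath points somewhere that does not exist. A hedged sketch of a getDBPath that targets the writable Documents directory, assuming the database file is named SQL.sqlite:

    - (NSString *)getDBPath
    {
        // The bundle is read-only on the device; writable copies belong in Documents.
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                             NSUserDomainMask, YES);
        NSString *documentsDir = [paths objectAtIndex:0];
        return [documentsDir stringByAppendingPathComponent:@"SQL.sqlite"];
    }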

    Read the article

  • How to load a binary file(.bin) of size 6 MB in a varbinary(MAX) column of SQL Server 2005 database

    - by Feroz Khan
    How can I load a binary file (.bin) of about 6 MB into a varbinary(MAX) column of a SQL Server 2005 database using ADO in a VC++ application? This is the code I am using to load the file (I previously used it to load a .bmp file):

    BOOL CSaveView::PutECGInDB(CString strFilePath, FieldPtr pFileData)
    {
        // Open the file
        CFile fileImage;
        CFileStatus fileStatus;
        fileImage.Open(strFilePath, CFile::modeRead);
        fileImage.GetStatus(fileStatus);

        // Allocate memory for the data
        ULONG nBytes = (ULONG)fileStatus.m_size;
        HGLOBAL hGlobal = GlobalAlloc(GPTR, nBytes);
        LPVOID lpData = GlobalLock(hGlobal);

        // Read the file into the buffer
        fileImage.Read(lpData, nBytes);

        HRESULT hr;
        _variant_t varChunk;
        long lngOffset = 0;
        UCHAR chData;
        SAFEARRAY FAR *psa = NULL;
        SAFEARRAYBOUND rgsabound[1];

        try
        {
            // Create a safe array to hold the bytes
            rgsabound[0].lLbound = 0;
            rgsabound[0].cElements = nBytes;
            psa = SafeArrayCreate(VT_UI1, 1, rgsabound);

            while (lngOffset < (long)nBytes)
            {
                chData = ((UCHAR*)lpData)[lngOffset];
                hr = SafeArrayPutElement(psa, &lngOffset, &chData);
                if (hr != S_OK)
                {
                    return false;
                }
                lngOffset++;
            }
            lngOffset = 0;

            // Assign the safe array to a variant and append it to the field
            varChunk.vt = VT_ARRAY | VT_UI1;
            varChunk.parray = psa;
            hr = pFileData->AppendChunk(varChunk);
            if (hr != S_OK)
            {
                return false;
            }
        }
        catch (_com_error &e)
        {
            // Get info from _com_error
            _bstr_t bstrSource(e.Source());
            _bstr_t bstrDescription(e.Description());
            _bstr_t bstrErrorMessage(e.ErrorMessage());
            _bstr_t bstrErrorCode(e.Error());
            TRACE("Exception thrown for classes generated by #import");
            TRACE("\tCode = %08lx\n", (LPCSTR)bstrErrorCode);
            TRACE("\tCode Meaning = %s\n", (LPCSTR)bstrErrorMessage);
            TRACE("\tSource = %s\n", (LPCSTR)bstrSource);
            TRACE("\tDescription = %s\n", (LPCSTR)bstrDescription);
        }
        catch (...)
        {
            TRACE("Unhandled exception");
        }

        // Free memory
        GlobalUnlock(lpData);
        return true;
    }

    But when I read the same file back using the GetChunk function, it gives me all zeros, although the size of the data I get back matches the size that was uploaded. Your help will be highly appreciated. Thanks in advance.
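
    On the read-back side, the zeros often come from how the returned variant is unpacked rather than from the stored data. A hedged sketch of reading the column back with GetChunk and dumping it to a file for comparison - the field access pattern and the output file name are assumptions, not the original code:

    // Pull the stored bytes back out of the varbinary(MAX) field via GetChunk
    long cbData = pFileData->ActualSize;              // total bytes stored in the field
    _variant_t varData = pFileData->GetChunk(cbData); // returns a VT_ARRAY | VT_UI1 variant

    if (varData.vt == (VT_ARRAY | VT_UI1))
    {
        UCHAR *pBuf = NULL;
        SafeArrayAccessData(varData.parray, (void HUGEP **)&pBuf);

        CFile fileOut;
        fileOut.Open(_T("restored.bin"), CFile::modeCreate | CFile::modeWrite);
        fileOut.Write(pBuf, cbData);                  // raw bytes straight back to disk
        fileOut.Close();

        SafeArrayUnaccessData(varData.parray);
    }

    Comparing restored.bin with the original file byte for byte shows whether the data was stored correctly in the first place or lost on the way out.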

    Read the article

  • [How-to] Make a working xml file in php (encoding properly) to export data from online database to i

    - by gotye
    Hey guys, I am trying to make an XML file to export my data from my database to my iPhone. Each time I create a new post, I need to execute a PHP file to create an XML file containing the latest posts. Okay so far? :D Here is the current PHP code, but my NSXMLParser gives me an error code (33 - String is not started). I have no idea which PHP functions I have to use...

    <?php
    // Write the beginning of the XML file
    $xml .= '<?xml version=\"1.0\" encoding=\"UTF-8\"?>';
    $sml .= '<channel>';
    $xml .= '<title>Infonul</title>';
    $xml .= '<link>aaa</link>';
    $xml .= '<description>aaa</description>';

    // Connect to the database (update the password)
    $connect = mysql_connect('...-12','...','...');

    /* Select the MySQL database */
    mysql_select_db('...');

    // Select the 20 latest news posts
    $res = mysql_query("SELECT u.display_name as author, p.post_date as date, p.comment_count as commentCount,
                               p.post_content as content, p.post_title as title
                        FROM wp_posts p, wp_users u
                        WHERE p.post_status = 'publish' and p.post_type = 'post' and p.post_author = u.id
                        ORDER BY p.id DESC LIMIT 0,20");

    // Extract the information and append it to the content
    while ($tab = mysql_fetch_array($res)) {
        $title = $tab[title];
        $author = $tab[author];
        $content = $tab[content]; // html stuff
        $commentCount = $tab[commentCount];
        $date = $tab[date];

        $xml .= '<item>';
        $xml .= '<title>'.$title.'</title>';
        $xml .= '<content><![CDATA['.$content.']]></content>';
        $xml .= '<date>'.$date.'</date>';
        $xml .= '<author>'.$author.'</author>';
        $xml .= '<commentCount>'.$commentCount.'</commentCount>';
        $xml .= '</item>';
    }

    // Write the end of the XML file
    $xml .= '</channel>';
    $xml = utf8_encode($xml);
    echo $xml;

    // Write to the file
    if ($fp = fopen("20news.xml",'w')) {
        fputs($fp, $xml);
        fclose($fp);
    }
    //mysql_close();
    ?>

    I noticed a few things when I open 20news.xml in my browser: I get squares instead of single quotes, and I can't see <![CDATA[ but ]] is clearly visible... why?!? Thanks for any input ;) Gotye.
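
    Two things stand out in that script (observations only, not a rewrite of the original): the backslashes inside the single-quoted declaration are emitted literally, so the file actually starts with <?xml version=\"1.0\" ... ?>, which a strict parser rejects, and the <channel> element is appended to $sml rather than $xml, so it never reaches the output. Building the document with DOMDocument sidesteps that whole class of quoting problem; a loose sketch, with the element names taken from the question but the variable wiring assumed:

    <?php
    // Placeholder values - in the real script these come from the mysql loop
    $title    = 'Example post';
    $postHtml = '<p>Body of the post</p>';

    $doc = new DOMDocument('1.0', 'UTF-8');
    $channel = $doc->appendChild($doc->createElement('channel'));
    $channel->appendChild($doc->createElement('title', 'Infonul'));

    // one <item> per database row
    $item = $channel->appendChild($doc->createElement('item'));
    $item->appendChild($doc->createElement('title', $title));
    $content = $item->appendChild($doc->createElement('content'));
    $content->appendChild($doc->createCDATASection($postHtml));   // raw HTML stays intact

    $doc->save('20news.xml');   // well-formed, correctly declared UTF-8 output
    echo $doc->saveXML();
    ?>

    DOMDocument writes the declaration, handles the encoding, and escapes or CDATA-wraps the content for you, so the parser on the phone sees a well-formed document.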

    Read the article

  • How to store a date into a MySQL database with the Play framework in Scala?

    - by Rahul Kulhari
    I am working with the Play framework in Scala. What I have: a login page to log into the web app and a sign-up page to register; after login I want to show the user all the database values. What I want to do: when a user registers for the web app, I want to store the user's values in the database together with the current date and time, but my form is giving an error.

    error: List(FormError(dates,error.required,List())),None)

    controllers/Application.scala

    object Application extends Controller {
      val ta: Form[Keyword] = Form(
        mapping(
          "id"        -> ignored(NotAssigned: Pk[Long]),
          "word"      -> nonEmptyText,
          "blog"      -> nonEmptyText,
          "cat"       -> nonEmptyText,
          "score"     -> of[Long],
          "summaryId" -> nonEmptyText,
          "dates"     -> date("yyyy-MM-dd HH:mm:ss")
        )(Keyword.apply)(Keyword.unapply)
      )

      def index = Action {
        Ok(html.index(ta))
      }

      def newTask = Action { implicit request =>
        ta.bindFromRequest.fold(
          errors => {
            println(errors)
            BadRequest(html.index(errors))
          },
          keywo => {
            Keyword.create(keywo)
            Ok(views.html.data(Keyword.all()))
          }
        )
      }
    }

    models/keyword.scala

    case class Keyword(id: Pk[Long], word: String, blog: String, cat: String,
                       score: Long, summaryId: String, dates: Date)

    object Keyword {
      val keyw = {
        get[Pk[Long]]("keyword.id") ~
        get[String]("keyword.word") ~
        get[String]("keyword.blog") ~
        get[String]("keyword.cat") ~
        get[Long]("keyword.score") ~
        get[String]("keyword.summaryId") ~
        get[Date]("keyword.dates") map {
          case id~blog~cat~word~score~summaryId~dates =>
            Keyword(id, word, blog, cat, score, summaryId, dates)
        }
      }

      def all(): List[Keyword] = DB.withConnection { implicit c =>
        SQL("select * from keyword").as(Keyword.keyw *)
      }

      def create(key: Keyword) {
        DB.withConnection { implicit c =>
          SQL("insert into keyword values({word},{blog}, {cat}, {score},{summaryId},{dates})")
            .on('word -> key.word, 'blog -> key.blog, 'cat -> key.cat,
                'score -> key.score, 'summaryId -> key.summaryId,
                'dates -> new Date()).executeUpdate
        }
      }
    }

    views/index.scala.html

    @(taskForm: Form[Keyword])
    @import helper._

    @main("Todo list") {
      @form(routes.Application.newTask) {
        @inputText(taskForm("word"))
        @inputText(taskForm("blog"))
        @inputText(taskForm("cat"))
        @inputText(taskForm("score"))
        @inputText(taskForm("summaryId"))
        <input type="submit">
        <a href="">Go Back</a>
      }
    }

    Please give me some idea how to store the date into the MySQL database when the date is not a field of the form.
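
    Since the browser never posts a dates value, the error.required failure comes straight from the form mapping. One hedged way around it (Play 2.x form API, sketched from the code above rather than a verified fix) is to treat dates the same way id is treated and let the server supply it, which the create method already does by passing new Date() to the insert:

    val ta: Form[Keyword] = Form(
      mapping(
        "id"        -> ignored(NotAssigned: Pk[Long]),
        "word"      -> nonEmptyText,
        "blog"      -> nonEmptyText,
        "cat"       -> nonEmptyText,
        "score"     -> of[Long],
        "summaryId" -> nonEmptyText,
        // not sent by the form, so don't require it from the request:
        "dates"     -> ignored(new Date())
      )(Keyword.apply)(Keyword.unapply)
    )

    With the field ignored, bindFromRequest no longer demands a dates parameter, and the timestamp that ends up in MySQL is the one generated on the server at insert time.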

    Read the article

  • How to insert into a database using a stored procedure in log4net?

    - by Samreen
    I have to log thread context properties like this:

    string logFilePath = AppDomain.CurrentDomain.BaseDirectory + "log4netconfig.xml";
    FileInfo finfo = new FileInfo(logFilePath);
    log4net.Config.XmlConfigurator.ConfigureAndWatch(finfo);
    ILog logger = LogManager.GetLogger("Exception.Logging");
    log4net.ThreadContext.Properties["MESSAGE"] = exception.Message;
    log4net.ThreadContext.Properties["MODULE"] = "module1";
    log4net.ThreadContext.Properties["COMPONENT"] = "component1";
    logger.Debug("test");

    and the configuration file is:

    <configuration>
      <configSections>
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,Log4net"/>
      </configSections>
      <log4net>
        <logger name="Exception.Logging" level="Debug">
          <appender-ref ref="AdoNetAppender"/>
        </logger>
        <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
          <connectionString value="Data Source=xe;User ID=test;Password=test;" />
          <connectionType value="System.Data.OracleClient.OracleConnection, System.Data.OracleClient, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          <bufferSize value="10000"/>
          <commandText value="Log_Exception_Pkg.Insert_Log" />
          <commandType value="StoredProcedure" />
          <parameter>
            <parameterName value="@p_Error_Message" />
            <dbType value="String" />
            <size value="255" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%property{MESSAGE}"/>
            </layout>
          </parameter>
          <parameter>
            <parameterName value="@p_Module" />
            <dbType value="String" />
            <size value="225" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%property{MODULE}"/>
            </layout>
          </parameter>
          <parameter>
            <parameterName value="@p_Component" />
            <dbType value="String" />
            <size value="225" />
            <layout type="log4net.Layout.PatternLayout">
              <conversionPattern value="%property{COMPONENT}"/>
            </layout>
          </parameter>
        </appender>
      </log4net>
    </configuration>

    But it's not inserting them into the database. How can I get that to work?
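
    One common reason nothing ever reaches the table is worth checking first: AdoNetAppender is a buffering appender, and with bufferSize set to 10000 it holds events in memory until ten thousand of them have accumulated before calling the stored procedure at all. A hedged tweak, keeping everything else in the configuration above unchanged:

    <!-- flush each event to Log_Exception_Pkg.Insert_Log immediately
         instead of waiting for 10,000 buffered events -->
    <bufferSize value="1" />

    For an Oracle stored procedure it is also worth confirming that the parameter names match what the provider expects (Oracle parameters are not normally prefixed with @), but that part is provider-specific and best verified against the package definition.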

    Read the article

  • How to upload a file into a database using a Servlet?

    - by user1765496
    Hi all, I am working on servlets and I need to upload a file using a servlet. My code follows.

    package com.limrasoft.image.servlets;

    import java.io.*;
    import javax.servlet.*;
    import javax.servlet.http.*;
    import javax.servlet.annotation.*;
    import java.sql.*;

    @WebServlet(name="serv1",value="/s1")
    public class Account extends HttpServlet {
        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            PrintWriter pw = res.getWriter();
            try {
                Class.forName("oracle.jdbc.driver.OracleDriver");
                Connection con = null;
                try {
                    con = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:xe", "system", "sajid");
                    res.setContentType("text/html");
                    String s1 = req.getParameter("un");
                    String s2 = req.getParameter("pwd");
                    String s3 = req.getParameter("g");
                    String s4 = req.getParameter("uf");
                    PreparedStatement ps = con.prepareStatement("insert into account values(?,?,?,?)");
                    ps.setString(1, s1);
                    ps.setString(2, s2);
                    ps.setString(3, s3);
                    File file = new File(s4);
                    FileInputStream fis = new FileInputStream(file);
                    int len = (int) file.length();
                    ps.setBinaryStream(4, fis, len);
                    int c = ps.executeUpdate();
                    if (c == 0) { pw.println("<h1>Registration fail"); }
                    else        { pw.println("<h1>Registration success"); }
                }
                finally { if (con != null) con.close(); }
            }
            catch (ClassNotFoundException ce) { pw.println("<h1>Registration Fail"); }
            catch (SQLException se)           { pw.println("<h1>Registration Fail"); }
            pw.flush();
            pw.close();
        }
    }

    I wrote the above code to upload a file into the database, but it gives the error "HTTP Status 500 - Servlet3.java (The system cannot find the file specified)". Could you please help me with this code? Thanks in advance.
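
    The "system cannot find the file specified" error happens because req.getParameter("uf") only delivers the path as typed in the browser; that path exists on the client machine, not on the server, so opening it as a File points at nothing. The file's bytes have to be sent in the request body (enctype="multipart/form-data" on the form) and read from there. A hedged Servlet 3.0 sketch along those lines - the field names and table layout are carried over from the question, the rest is illustrative:

    import java.sql.*;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.*;
    import javax.servlet.http.*;

    @MultipartConfig
    @WebServlet(name = "serv1", value = "/s1")
    public class Account extends HttpServlet {
        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, java.io.IOException {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@localhost:1521:xe", "system", "sajid")) {

                // The uploaded bytes arrive inside the multipart request body;
                // read them from the Part instead of opening a path on the server.
                Part filePart = req.getPart("uf");

                PreparedStatement ps = con.prepareStatement(
                    "insert into account values(?,?,?,?)");
                ps.setString(1, req.getParameter("un"));
                ps.setString(2, req.getParameter("pwd"));
                ps.setString(3, req.getParameter("g"));
                ps.setBinaryStream(4, filePart.getInputStream(), (int) filePart.getSize());
                ps.executeUpdate();
            } catch (SQLException e) {
                throw new ServletException(e);
            }
        }
    }

    With @MultipartConfig in place, the regular text fields (un, pwd, g) still come through getParameter(), while the file field is read through getPart().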

    Read the article

  • What is the best way to connect to an Oracle database to build my own IDE? [closed]

    - by rima
    Before answering, please think about the future of this kind of program. I want to get some data from an Oracle server, namely: 1) list all the functions, packages, procedures and so on, so I can show them or drop them; 2) compile my *.sql files and get the result, including any errors. Because I was a beginner with Oracle, I first tried to solve the second problem by running sqlplus as a child process and tracing its output: I redirect the shell's output stream, watch what comes back, and hand the relevant messages to the user (I launch the process from C# and pass the login arguments to sqlplus). That part works, though I still have some trouble collecting the full result because the output is asynchronous. After that I tried to fetch all the function, package and procedure names the same way, but it was too slow, so I tried Oracle.DataAccess.dll to connect to the database directly. Now I am confused about which is the correct way to build a program that works like Oracle Developer, since I have no experience with how such programs are put together.

    If your answer is the second way (Oracle.DataAccess.dll), please cover this part: I looked briefly at Golden and PLedit (Benthic Software), and I am not sure how to build the connection string. How do I find the host name or port number Oracle is running on? Do I need to read the TNSNames.ora file?

    If your answer is the first way (driving sqlplus), please cover this part: do you have any ideas on how to parse the output? The result of a table query, for example, is confusing to handle, and every kind of statement has a different output style. I can program it, but I really need someone's experience, because the important thing for me is to learn how such software manages to work so smoothly and respond so quickly.

    If you are not sure, can you recommend a book that would help me become an expert in this area? The C# books only explain how to connect to a database, and the database books only explain how to use the database itself; I am looking for a book that gives some idea of how to build an interface between the two - not simply sending and receiving data, but, for example, how to write a compiler for them. The language of the book does not matter; I know C#, Java, VB, SQL and Oracle. Thanks.
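
    On the connection-string point, ODP.NET (Oracle.DataAccess.dll) accepts either a tnsnames.ora alias or an EZConnect-style host:port/service string, so reading TNSNames.ora is optional when the host, port (1521 by default) and service name are known. A hedged C# sketch that connects and lists the stored program units; the credentials and service name are placeholders:

    using System;
    using Oracle.DataAccess.Client;   // ODP.NET

    class SchemaLister
    {
        static void Main()
        {
            // EZConnect: host:port/service_name, no tnsnames.ora required
            string connStr = "Data Source=localhost:1521/XE;User Id=scott;Password=tiger;";

            using (var con = new OracleConnection(connStr))
            using (var cmd = new OracleCommand(
                "SELECT object_type, object_name FROM user_objects " +
                "WHERE object_type IN ('FUNCTION','PROCEDURE','PACKAGE') " +
                "ORDER BY object_type, object_name", con))
            {
                con.Open();
                using (OracleDataReader rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                        Console.WriteLine("{0,-10} {1}", rdr.GetString(0), rdr.GetString(1));
                }
            }
        }
    }

    Querying the data dictionary (user_objects, all_objects, and user_errors after an ALTER ... COMPILE) is also how GUI tools typically populate their object trees and report compilation errors, which avoids scraping sqlplus output altogether.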

    Read the article

  • Leveraging Logical Standby Databases in Oracle 11g Data Guard

    Oracle Data Guard still offers support for the venerable logical standby database in Oracle Database 11g. This article investigates how data warehouse and data mart environments can effectively leverage logical standby database features while also providing a final destination when failover from a primary database is mandated during disaster recovery.

    Read the article

  • MySQL – Scalability on Amazon RDS: Scale out to multiple RDS instances

    - by Pinal Dave
    Today, I’d like to discuss getting better MySQL scalability on Amazon RDS. The question of the day: “What can you do when a MySQL database needs to scale write-intensive workloads beyond the capabilities of the largest available machine on Amazon RDS?” Let’s take a look. In a typical EC2/RDS set-up, users connect to app servers from their mobile devices and tablets, computers, browsers, etc.  Then app servers connect to an RDS instance (web/cloud services) and in some cases they might leverage some read-only replicas.   Figure 1. A typical RDS instance is a single-instance database, with read replicas.  This is not very good at handling high write-based throughput. As your application becomes more popular you can expect an increasing number of users, more transactions, and more accumulated data.  User interactions can become more challenging as the application adds more sophisticated capabilities. The result of all this positive activity: your MySQL database will inevitably begin to experience scalability pressures. What can you do? Broadly speaking, there are four options available to improve MySQL scalability on RDS. 1. Larger RDS Instances – If you’re not already using the maximum available RDS instance, you can always scale up – to larger hardware.  Bigger CPUs, more compute power, more memory et cetera. But the largest available RDS instance is still limited.  And they get expensive. “High-Memory Quadruple Extra Large DB Instance”: 68 GB of memory 26 ECUs (8 virtual cores with 3.25 ECUs each) 64-bit platform High I/O Capacity Provisioned IOPS Optimized: 1000Mbps 2. Provisioned IOPs – You can get provisioned IOPs and higher throughput on the I/O level. However, there is a hard limit with a maximum instance size and maximum number of provisioned IOPs you can buy from Amazon and you simply cannot scale beyond these hardware specifications. 3. Leverage Read Replicas – If your application permits, you can leverage read replicas to offload some reads from the master databases. But there are a limited number of replicas you can utilize and Amazon generally requires some modifications to your existing application. And read-replicas don’t help with write-intensive applications. 4. Multiple Database Instances – Amazon offers a fourth option: “You can implement partitioning,thereby spreading your data across multiple database Instances” (Link) However, Amazon does not offer any guidance or facilities to help you with this. “Multiple database instances” is not an RDS feature.  And Amazon doesn’t explain how to implement this idea. In fact, when asked, this is the response on an Amazon forum: Q: Is there any documents that describe the partition DB across multiple RDS? I need to use DB with more 1TB but exist a limitation during the create process, but I read in the any FAQ that you need to partition database, but I don’t find any documents that describe it. A: “DB partitioning/sharding is not an official feature of Amazon RDS or MySQL, but a technique to scale out database by using multiple database instances. The appropriate way to split data depends on the characteristics of the application or data set. Therefore, there is no concrete and specific guidance.” So now what? The answer is to scale out with ScaleBase. Amazon RDS with ScaleBase: What you get – MySQL Scalability! ScaleBase is specifically designed to scale out a single MySQL RDS instance into multiple MySQL instances. Critically, this is accomplished with no changes to your application code.  
    Your application continues to “see” one database.   ScaleBase does all the work of managing and enforcing an optimized data distribution policy to create multiple MySQL instances. With ScaleBase, data distribution, transactions, concurrency control, and two-phase commit are all 100% transparent and 100% ACID-compliant, so applications, services and tooling continue to interact with your distributed RDS as if it were a single MySQL instance. The result: now you can cost-effectively leverage multiple MySQL RDS instances to scale out write-intensive workloads to an unlimited number of users, transactions, and data. Amazon RDS with ScaleBase: What you keep – Everything! And how does this change your Amazon environment? 1. Keep your application, unchanged – There is no change to your application development life-cycle at all.  You still use your existing development tools, frameworks and libraries.  Application quality assurance and testing cycles stay the same. And, critically, you stay with an ACID-compliant MySQL environment. 2. Keep your RDS value-added services – The value-added services that you rely on are all still available. Amazon will continue to handle database maintenance and updates for you. You can still leverage High Availability via Multi A-Z.  And, if it benefits your application throughput, you can still use read replicas. 3. Keep your RDS administration – Finally, the RDS monitoring and provisioning tools you rely on still work as they did before. With your one large MySQL instance now split into multiple instances, you can actually use less expensive, smaller available RDS hardware and continue to see better database performance. Conclusion Amazon RDS is a tremendous service, but it doesn’t offer solutions to scale beyond a single MySQL instance. Larger RDS instances get more expensive.  And when you max-out on the available hardware, you’re stuck.  Amazon recommends scaling out your single instance into multiple instances for transaction-intensive apps, but offers no services or guidance to help you. This is where ScaleBase comes in to save the day. It gives you a simple and effective way to create multiple MySQL RDS instances, while removing all the complexities typically caused by “DIY” sharding, with no changes to your applications. With ScaleBase you continue to leverage the AWS/RDS ecosystem: commodity hardware and value added services like read replicas, multi A-Z, maintenance/updates and administration with monitoring tools and provisioning. SCALEBASE ON AMAZON If you’re curious to try ScaleBase on Amazon, it can be found here – Download NOW. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #051

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Explanation and Understanding NOT NULL Constraint NOT NULL is integrity CONSTRAINT. It does not allow creating of the row where column contains NULL value. Most discussed questions about NULL is what is NULL? I will not go in depth analysis it. Simply put NULL is unknown or missing data. When NULL is present in database columns, it can affect the integrity of the database. I really do not prefer NULL in the database unless they are absolutely necessary. Three T-SQL Script to Create Primary Keys on Table I have always enjoyed writing about three topics Constraint and Keys, Backup and Restore and Datetime Functions. Primary Keys constraints prevent duplicate values for columns and provides a unique identifier to each column, as well it creates clustered index on the columns. 2008 Get Numeric Value From Alpha Numeric String – UDF for Get Numeric Numbers Only SQL is great with String operations. Many times, I use T-SQL to do my string operation. Let us see User Defined Function, which I wrote a few days ago, which will return only Numeric values from Alpha Numeric values. Introduction and Example of UNION and UNION ALL It is very much interesting when I get requests from blog reader to re-write my previous articles. I have received few requests to rewrite my article SQL SERVER – Union vs. Union All – Which is better for performance? with examples. I request you to read my previous article first to understand what is the concept and read this article to understand the same concept with an example. Downgrade Database for Previous Version The main questions is how they can downgrade the from SQL Server 2005 to SQL Server 2000? The answer is : Not Possible. Get Common Records From Two Tables Without Using Join Following is my scenario, Suppose Table 1 and Table 2 has same column e.g. Column1 Following is the query, 1. Select column1,column2 From Table1 2. Select column1 From Table2 I want to find common records from these tables, but I don’t want to use the Join clause because for that I need to specify the column name for Join condition. Will you help me to get common records without using Join condition? I am using SQL Server 2005. Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2 A year ago I wrote a post about SQL SERVER – Retrieve – Select Only Date Part From DateTime – Best Practice where I have discussed two different methods of getting the date part from datetime. Introduction to CLR – Simple Example of CLR Stored Procedure CLR is an abbreviation of Common Language Runtime. In SQL Server 2005 and later version of it database objects can be created which are created in CLR. Stored Procedures, Functions, Triggers can be coded in CLR. CLR is faster than T-SQL in many cases. CLR is mainly used to accomplish tasks which are not possible by T-SQL or can use lots of resources. The CLR can be usually implemented where there is an intense string operation, thread management or iteration methods which can be complicated for T-SQL. Implementing CLR provides more security to the Extended Stored Procedure. 
2009 Comic Slow Query – SQL Joke Before Presentation After Presentation Enable Automatic Statistic Update on Database In one of my recent projects, I found out that despite putting good indexes and optimizing the query, I could not achieve optimized performance and I still received an unoptimized response from SQL Server. On examination, I figured out that the culprit was statistics. The database that I was trying to optimize had auto update of statistics disabled. Recently Executed T-SQL Query Please refer to the blog post for a query to list recently executed T-SQL queries on the database. Change Collation of Database Column – T-SQL Script – Consolidating Collations – Extension Script At some time in your DBA career, you may find yourself in a position where you sit back and realize that your database collations have somehow run amuck, or are faced with the ever-annoying CANNOT RESOLVE COLLATION message when trying to join data of varying collation settings. 2010 Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology Everyone always dreams of visiting their school and college, where they once studied. It is a great feeling to see the college once again – where you have spent the wonderful golden years of your time. College time is filled with studies, education, emotions and several plans to build a future. I consider myself fortunate as I got the opportunity to study at some of the best places in the world. Change Column DataTypes There are times when I feel like writing that I am a day older in SQL Server. In fact, there are many who are looking for a solution that is simple enough. Have you ever searched online for something very simple? I often do, and enjoy doing things which are straightforward and easy to change. 2011 Three DMVs – sys.dm_server_memory_dumps – sys.dm_server_services – sys.dm_server_registry In this blog post we will see three new DMVs which were introduced in Denali. The DMVs are very simple and there is not much to describe about them. So here is the simple game: I will be asking a question back to you after seeing the result of each of the DMVs, and you help me to complete this blog post. A Simple Quiz – T-SQL Brain Trick If you have some time, I strongly suggest you try this quiz out, as it for sure twists your brain. 2012 List All The Column With Specific Data Types in Database 5 years ago I wrote the script SQL SERVER – 2005 – List All The Column With Specific Data Types; when I read it again, it is still very much relevant and I liked it. This is one of the scripts which every developer would like to keep handy. I have upgraded the script a bit more. I have included some additional information which I believe I should have added from the beginning. It is difficult to visualize the final script when we are writing it for the first time. Find First Non-Numeric Character from String The function PATINDEX has existed for quite a long time in SQL Server, but I hardly see it being used. Well, at least I use it and I am comfortable using it. Here is a simple script which I use when I have to identify the first non-numeric character. Finding Different ColumnName From Almost Identical Tables Well, here is an interesting example of how we can use the sys.columns catalog views to get the details of a newly added column. I have previously written about EXCEPT over here, which is very similar to Oracle's MINUS. 
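For the PATINDEX item just above, here is a minimal sketch using a made-up sample string (not one from the original post), together with the statistics setting discussed under the 2009 entry; the database name is a placeholder:
-- Position of the first character that is not a digit; 0 means the string contains only digits.
SELECT PATINDEX('%[^0-9]%', '12345ABC678') AS FirstNonNumericPosition;   -- returns 6
-- Re-enable automatic statistics updates on a database where they were found disabled.
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS ON;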
Storing Data and Files in Cloud – Dropbox – Personal Technology Tip I thought long and hard about doing a Personal Technology Tips series for this blog.  I have so many tips I’d like to share.  I am on my computer almost all day, every day, so I have a treasure trove of interesting tidbits I like to share if given the chance.  The only thing holding me back – which tip to share first?  The first tip obviously has the weight of seeming like the most important.  But this would mean choosing amongst my favorite tricks and shortcuts.  This is a hard task. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; TEXT Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.  
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • September Independent Oracle User Group (IOUG) Regional Events:

    - by Mandy Ho
September 5, 2012 – Denver, CO Oracle 11g Database Upgrade Seminar Join Roy Swonger, Senior Director of software development at Oracle, to learn about upgrading to Oracle Database 11g. Topics include: All the required preparatory steps Database upgrade strategies Post-upgrade performance analysis Helpful tips and common pitfalls to watch out for http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=152242&src=7598177&src=7598177&Act=4 September 6, 2012 – Salt Lake City, UT Fall Symposium 2012 Plan to join us for our annual fall event on Sept 6. The day will be filled with learning and networking with tracks focused on Applications, APEX, BI, Development and DBA Topics. This event is free for UTOUG members to attend, but please register. http://www.utoug.org/apex/f?p=972:2:6686308836668467::::P2_EVENT_ID:121 September 6, 2012 – Portland, OR Oracle’s Hands-on Workshop Series focused on providing Defense-in-Depth Solutions to secure data at the source, reduce risk and simplify compliance. The Oracle Database Security Workshop is a one-day hands-on session for IT Managers, IT Security Architects and Oracle DBAs who are looking for solutions to address their information protection, privacy, and accountability challenges within their Oracle database environment. Most security programs offered today fail to adequately address database security. Customers continue to be challenged to secure information against loss and protect the integrity of sensitive information like critical financial data, personally identifiable information (PII) and credit card data for PCI compliance. http://nwoug.org/content.aspx?page_id=87&club_id=165905&item_id=241082 September 11, 2012 – Montreal, QC APEXposed! For APEX aficionados – join ODTUG in Montreal, September 11-12 for APEXposed! Topics will include Dynamic Actions, Plug-ins, Tuning, and Building Mobile Apps. The cost is $399 US and early registration ends August 15th. For more information: http://www.odtugapextraining.com September 11, 2012 – Philadelphia, PA Big Data & What are we still doing wrong with Tom Kyte Tom Kyte is a Senior Technical Architect in Oracle's Server Technology Division. Tom is the Tom behind the AskTom column in Oracle Magazine and is also the author of Expert Oracle Database Architecture (Apress, 2005/2009), among other books. Abstract: Big Data The term "big data" draws a lot of attention, but behind the hype there's a simple story. For decades, companies have been making business decisions based on transactional data stored in relational databases. However, beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. This presentation will take a look at what Big Data is and means – and Oracle's strategy for handling it. Abstract: What are we still doing wrong? I've given many best practices presentations in the last 10 years. I've given many worst practices presentations in the last 10 years. 
I've seen some things change over the last ten years and many other things stay exactly the same. In this talk we'll be taking a look at the good and the bad – what we do right and what we continue to do wrong over and over again. We'll look at why "Why" is probably the right initial answer to most any question. We'll look at how we get to "Know what we Know", and why that can be both a help and a hindrance. We'll peek at "Best Practices" and tie them into what I term "Worst Practices". In short, a talk on the good and the bad. http://ioug.itconvergence.com/pls/apex/f?p=207:27:3669516430980563::NO September 12, 2012 – New York, NY NYOUG Fall General Meeting “Trends in Database Administration and Why the Future of Database Administration is the Vdba” http://www.nyoug.org/upcoming_events.htm#General_Meeting1 September 21, 2012 – Cleveland, OH Oracle Database 11g for Developers: What You Need to Know or Oracle Database 11g New Features for Developers Attendees are introduced to the new and improved features of Oracle 11g (both Oracle 11g R1 and Oracle 11g R2) that directly impact application development. Special emphasis is placed on features that reduce development time, make development simpler, improve performance, or speed deployment. Specific topics include: new SQL functions, virtual columns, result caching, XML improvements, pivot statements, JDBC improvements, and PL/SQL enhancements such as compound triggers. http://www.neooug.org/ September 24, 2012 – Ottawa, ON Introduction to Oracle Spatial The free Oracle Locator functionality, and the Oracle Spatial option which dramatically extends Locator, are very useful, but poorly understood capabilities of the database. In the afternoon we will extend into additional areas selected from: storage and performance; answering business problems with spatial queries; using Oracle Maps in OBIEE; an overview and capabilities of Oracle Topology; under the covers with GeoCoding. http://www.oug-ottawa.org/pls/htmldb/f?p=327:27:4209274028390246::NO

    Read the article

  • The APEX of Business Value...or...the Business Value of APEX? Oracle Cloud Takes Oracle APEX to New Heights!

    - by Gene Eun
    The attraction of Oracle Application Express (APEX) has increased tremendously with the recent launch of the Oracle Cloud. APEX already supported departmental development and deployment of business applications with minimal involvement from the IT department. Positioned as the ideal replacement for MS Access, APEX probably has managed better to capture the eye of developers and was used for enterprise application development at least as much as for the kind of tactical applications that Oracle strategically positioned it for. With APEX as PaaS from the Oracle Cloud, a leap is made to a much higher level of business value. Now the IT department is not even needed to make infrastructure available with a database running  on it. All the business needs is a credit card. And the business application that is developed, managed and used from the cloud through a standard browser can now just as easily be accessed by users from around the world as by users from the business department itself. As a bonus – the development of the APEX application is also done in the cloud – with no special demands on the location or the enterprise access privileges of the developers. To sum it up: APEX from Oracle Cloud Database Service get the development environment up and running in minutes no involvement from the internal IT department required (not for infrastructure, platform, or development) superior availability and scalability is offered by Oracle users from anywhere in the world can be invited to access the application developers from anywhere in the world can participate in creating and maintaining the application In addition: because the Oracle Cloud platform is the same as the on-premise platform, you can still decide to move the APEX application between the cloud and the local environment – and back again. The REST-ful services that are available through APEX allow programmatic interaction with the database under the APEX application. That means that this database can be synchronized with on premise databases or data stores in (other) clouds. Through the Oracle Cloud Messaging Service, the APEX application can easily enter into asynchronous conversations with other APEX applications, Fusion Middleware applications (ADF, SOA, BPM) and any other type of REST-enabled application. In my opinion, now, for the first time perhaps, APEX offers the attraction to the business that has been suggested before: because of the cloud, all the business needs is  a credit card (a budget of $175 per month), an internet-connection and a browser. Not like before, with a PC hidden under a desk or a database running somewhere in the data center. No matter how unattended: equipment is needed, power is consumed, the database needs to be kept running and if Oracle Database XE does not suffice, software licenses are required as well. And this set up always has a security challenge associated with it. The cloud fee for the Oracle Cloud Database Service includes infrastructure, power, licenses, availability, platform upgrades, a collection of reusable application components and the development and runtime environments containing the APEX platform. Of course this not only means that business departments can move quickly without having to convince their IT colleagues to move along – it also means that small organizations that do not even have IT colleagues can do the same. 
Getting tailored applications or applications up and running to get in touch with users and customers all over the world is now within easy reach for small outfits – without any investment. My misunderstanding For a long time, I was under the impression that the essence of APEX was that the business could create applications themselves – meaning that business ‘people’ would actually go into APEX to create the application. To me APEX was too much of a developers’ tool to see that happen – apart from the odd business analyst who missed his or her calling as an IT developer. Having looked at various other cloud based development offerings – including Force.com, Mendix, WaveMaker, WorkXpress, OrangeScape, Caspio and Cordys – I have come to realize my mistake. All these platforms are positioned for 'the business' but require a fair amount of coding and technical expertise. However, they make the business happy nevertheless, because they allow the business to completely circumvent the IT department. That is the essence. Not having to go through the red tape, not having to wait for IT staff who (justifiably) need weeks or months to provide an environment, not having to deal with administrators (again, justifiably) refusing to take on that 'strange environment'. Being able to think of an initiative and turn it into action right away. The business does not have to build the application – it can easily hire some external developers or even that nerdy boy next door. They can get started, get an application up and running and invite users in – especially external users such as customers. They will worry later about upgrades and life cycle management and integration. To get applications up and running quickly and start turning ideas into action and results right away – that is the key selling point for all these cloud offerings, including APEX from the Cloud. And it is a compelling story. For APEX probably even more so than for the others. While I consider APEX a somewhat proprietary framework compared with ‘regular’ Java/JEE web development (or even .NET and PHP development), it is still far more open than most cloud environments. APEX is SQL and PL/SQL based – nothing special about those languages – and can run just as easily on site as in the cloud. It has been around since 2004 (that is not including several predecessors that fed straight into APEX) so it can be considered pretty mature. Oracle as a company seems pretty stable – so investments in its technology are bound to last for some time to come. By the way: neither APEX nor the other Cloud DevaaS offerings are targeted at creating applications with enormous lifetimes. They fit into a trend of agile development and rapid life cycle management, with fairly lightweight user interfaces that quickly adapt to taste, technology trends and functional requirements and that are easily replaced. APEX and ADF – a match made in heaven?! (or at least in the sky) Note that using APEX only for a cloud based database with REST-ful Services is also a perfectly viable scenario: any UI – mobile or browser based – capable of consuming REST-ful services can be created against such a business tier. Creating an ADF Mobile application, for example, that runs against REST-ful services is a best practice for mobile development. Such REST-ful services can be consumed from any service provider – including the Cloud based APEX powered REST-ful services running against the Oracle Cloud Database Service! 
The ADF Mobile architecture overview can easily be morphed to fit the APEX services in – allowing for a cloud based mobile app: Want to learn more about Oracle Database Cloud Service or Oracle Cloud, just visit cloud.oracle.com  or oracle.com/cloud. Repost of a blog entry by Rick Greenwald, Director of Product Management, Oracle Database Cloud Service.

    Read the article

  • Backup Azure Tables, schedule Azure scripts&hellip; and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it’s just the beginning?   Enzo Cloud Backup 2.0 (beta) is now officially out!!! This tool will let you do the following: * Backup SQL Database (and SQL Server to a limited extend) * Backup Azure Tables * Restore SQL Backups into another SQL environment * Restore Azure Tables in Azure Storage, or SQL Environment * Manage and schedule database maintenance scripts * Drop database schema containers (with preview) for SaaS environments * Receive alerts (SMTP) when operations complete or fail That’s it at a high level… but you need to see the flexibility around these features. For example you can select a specific backup strategy for Azure Tables allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account. Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time. So you can execute a backup request to both a local file, and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database). You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, backup specific schemas, and even drop all objects in a given schema. For example the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted. As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and give you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

    Read the article

  • SQL SERVER – Transaction Log Full – Transaction Log Larger than Data File – Notes from Fields #001

    - by Pinal Dave
I am very excited to announce a new series on this blog – Notes from Fields. I have been blogging for almost 7 years on this blog and it has been a wonderful experience. Though I have extensive experience with SQL and Databases, it is always a good idea to consult experts for their advice and opinion. Following the same thought process, I have started this new series of Notes from Fields. In this series we will have notes from various experts in the database world. My friends at Linchpin People have graciously decided to support me in my new initiative.  Linchpin People are database coaches and wellness experts for a data driven world. In this very first episode of the Notes from Fields series, database expert Tim Radney (partner at Linchpin People) explains a very common issue DBAs and Developers face in their careers: when database logs fill up your hard drive, or your database log is larger than your data file. Read the experience of Tim in his own words. As a consultant, I encounter a number of common issues with clients.  One of the more common things I encounter is finding a user database in the FULL recovery model that does not make regular transaction log backups or has never had a transaction log backup. When I find this, usually the transaction log is several times larger than the data file. Finding this issue is very significant to me in that it allows me to discuss service level agreements with the client. I get to ask questions such as: are nightly full backups sufficient, or do they need point-in-time recovery?  This conversation with the customer gets them thinking about their disaster recovery and high availability solutions. This issue is also very prominent on SQL Server forums and usually has the title of “Help, my transaction log has filled up my disk” or “Help, my transaction log is many times the size of my database”. In cases where the client only needs the previous night's full backup, I am able to change the recovery model to SIMPLE and shrink the transaction log using DBCC SHRINKFILE (2,1) or by specifying the transaction log file name by using DBCC SHRINKFILE (file_name, target_size). When the client needs point-in-time recovery, then in most cases I will still end up switching the client to the SIMPLE recovery model to truncate the transaction log, followed by a full backup. I will then schedule a SQL Agent job to make regular transaction log backups with an interval determined by the client to meet their service level agreements (a minimal sketch of this sequence appears below). It should also be noted that typically when I find an overgrown transaction log, the virtual log file count is also out of control. My clean-up will always take that into account as well.  That is a subject for a future blog post. If your SQL Server is facing any issue, we can Fix Your SQL Server. Additional reading: Monitoring SQL Server Database Transaction Log Space Growth – DBCC SQLPERF(logspace)  SQL SERVER – How to Stop Growing Log File Too Big Shrinking Truncate Log File – Log Full Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
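A minimal sketch of the log-maintenance sequence Tim describes, assuming a placeholder database named YourDatabase with a log file named YourDatabase_log and made-up backup paths – adjust and test before using it anywhere important:
-- Truncate the log by switching to SIMPLE, then shrink the log file.
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;
USE YourDatabase;
DBCC SHRINKFILE (N'YourDatabase_log', 1);   -- or DBCC SHRINKFILE (2, 1) by file id
-- If point-in-time recovery is required, return to FULL and take a new base backup.
ALTER DATABASE YourDatabase SET RECOVERY FULL;
BACKUP DATABASE YourDatabase TO DISK = N'D:\Backup\YourDatabase_full.bak';
-- Then schedule regular log backups (for example via a SQL Agent job) at an interval that meets the SLA.
BACKUP LOG YourDatabase TO DISK = N'D:\Backup\YourDatabase_log.trn';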

    Read the article

  • SQLAuthority News – Weekend Experiment with NuoDB – Points to Pondor and Whitepaper

    - by pinaldave
This weekend I have downloaded the latest beta version of NuoDB. I found it much improved, with a better UI. I was very much impressed, as the installation was very smooth and I was up and running in less than 5 minutes with the product. The tools related to the administration of NuoDB seem to have received a makeover during this beta release. As per the claim, they now support the Solaris platform and have improved the native MacOS installation. I have neither Mac nor Solaris – I wish I could have experimented with them. I will appreciate it if anyone out there can confirm how the installation goes on these platforms. I have previously blogged about my experiment with NuoDB here: SQL SERVER – Weekend Project – Experimenting with ACID Transactions, SQL Compliant, Elastically Scalable Database SQL SERVER – Beginning NuoDB – Who will Benefit and How to Start SQL SERVER – Follow up on Beginning NuoDB – Who will Benefit and How to Start – Part 2 I am very impressed with the product so far and I have decided to understand the product in further depth. Here are a few of the questions to which I am going to try to find answers with regards to NuoDB. Just so it is clear – NuoDB is not NoSQL; as a matter of fact, it follows all the ACID properties of a database. If ACID properties are crucial, why are many NoSQL products not adhering to them? (There are a few out there that do follow ACID, but not all.) I do understand the scalability of the database; however, is elasticity crucial for the database, and if yes, how? (Elasticity is where the workload on the database fluctuates heavily and the need for more than a single database server arises.) How has NuoDB built a scalable, elastic and 100% ACID-compliant database which supports multiple platforms? How does NoSQL compare to NuoDB’s new architecture? In the coming weeks, I am going to explore the above concepts and dive deeper into the understanding of the same. Meanwhile, I have read the following white paper written by experts at the University of California at Santa Barbara. It is a very interesting read and a great starter on the subject: Database Scalability, Elasticity, and Autonomy in the Cloud. Additionally, since my questions also touch on NoSQL, this weekend I have started to learn about NoSQL from Pluralsight‘s online learning library. I will share my experience very soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology Tagged: NuoDB

    Read the article

  • Presenting at SQLConnections!

    - by andyleonard
    Introduction This year I'm honored to present at SQLConnections in Orlando 27-30 Mar 2011! My topics are Database Design for Developers, Build Your First SSIS Package, and Introduction to Incremental Loads. Database Design for Developers This interactive session is for software developers tasked with database development. Attend and learn about patterns and anti-patterns of database development, one method for building re-executable Transact-SQL deployment scripts, a method for using SqlCmd to deploy...(read more)
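As a rough illustration of the “re-executable Transact-SQL deployment scripts” topic mentioned above – a generic sketch of the idempotent-script idea, not necessarily the exact method presented in the session, with made-up object names:
-- The script can be run any number of times; objects that already exist are left untouched.
IF NOT EXISTS (SELECT 1 FROM sys.objects
               WHERE object_id = OBJECT_ID(N'dbo.Customer') AND type = N'U')
BEGIN
    CREATE TABLE dbo.Customer
    (
        CustomerId   INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_Customer PRIMARY KEY,
        CustomerName VARCHAR(100)      NOT NULL
    );
END;
-- The same guard pattern applies to indexes, new columns, and reference-data inserts.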

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
In yesterday’s blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is. What is NewSQL? NewSQL stands for new scalable and high performance SQL Database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database; it is about vendors who support emerging data products with relational database properties (like ACID, transactions, etc.) along with high performance. Products from NewSQL vendors usually keep data in memory for speedy access and offer immediate scalability. The NewSQL term was coined by 451 Group analyst Matthew Aslett in this particular blog post. On the definition of NewSQL, Aslett writes: “NewSQL” is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as ‘ScalableSQL‘ to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term ‘NewSQL’ in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words – NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL languages. It combines the reliability of SQL with the speed and performance of NoSQL. Categories of NewSQL There are three major categories of NewSQL: New Architecture – In this framework each node owns a subset of the data, and queries are split into smaller queries sent to the nodes that process the data. E.g. NuoDB, Clustrix, VoltDB MySQL Engines – Highly optimized storage engines with the interface of MySQL are examples of this category. E.g. InnoDB, Akiban Transparent Sharding – These systems automatically split the database across multiple nodes. E.g. Scalearc  Summary In simple words – NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL. Tomorrow In tomorrow’s blog post we will discuss the Role of Cloud Computing in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • New Exadata public references

    - by Javier Puerta
    The following customers are now public references for Exadata. Show your customers how other companies in their industries are leveraging Exadata to achieve their business objectives. BRITISH TELECOM - Communications - United Kingdom 2x Full Rack + 1x Quarter Rack Exadata Database Machine Oracle University Training Courses Success Story DEUTSCHE BANK - Financial Services - Germany 18x Full Rack Exadata Database Machine Warehouse for Credit Risk Reporting running on Exa Success Story OPENBAAR MINISTERIE - Public Sector - Netherlands 1x Full Rack Exadata Database Machine Datawarehouse usage Success Story ADRIATIC SLOVENICA - Insurance - Slovenia 1x Quarter Rack Exadata Database Machine running on Linux Replacing Oracle DB and Oracle Application Server Success Story More customer success stories at Oracle.com References

    Read the article

< Previous Page | 377 378 379 380 381 382 383 384 385 386 387 388  | Next Page >