Search Results

Search found 30279 results on 1212 pages for 'database drift'.


  • Expanding the Partner Ecosystem with Third-Party Plug-ins

    - by Joe Diemer
    Oracle Enterprise Manager’s extensibility capabilities are designed to allow customers and partners to adapt Enterprise Manager for management of heterogeneous environments with plug-ins and connectors. Third-party developers continue to take advantage of Oracle Enterprise Manager’s Extensibility Development Kit (EDK) to build plug-ins for Enterprise Manager 12c, such as F5’s BIG-IP plug-in and Entuity’s Eye of the Storm Network Management plug-in. Partners can also validate their plug-ins through the Oracle Validated Integration (OVI) program, which assures customers that the plug-in has been tested and is functionally and technically sound, is designed in a reliable and standardized manner, and operates and performs as documented.

    Two very recent examples of plug-ins in beta are Blue Medora's VMware vSphere plug-in and the NetApp Storage plug-in.

    VMware vSphere Plug-in by Blue Medora
    Blue Medora, an Oracle Partner Network (OPN) Gold member, just announced that it is now signing up customers to try a beta version of its new VMware vSphere plug-in for Enterprise Manager 12c. According to Blue Medora, the vSphere plug-in monitors critical VMware metrics (CPU, memory, disk, network, etc.) at the host, VM, cluster and resource pool levels. It has minimal performance impact thanks to an "agentless" approach that requires no installation directly on VMware servers. It can discover VMware Datacenters, ESX Hosts, Clusters, Virtual Machines and Datastores, integrates native VMware events into Enterprise Manager, and provides over 300 VMware-related health, availability, performance and configuration metrics. It comes with more than 30 out-of-the-box predefined thresholds and can manage VMware via a series of jobs split between cluster, host and VM target types. The company reports that the Enterprise Manager 12c plug-in supports vSphere versions 4.0, 4.5 and 5.0. Supported platforms include Linux 64-bit, Windows, AIX, and Solaris SPARC and x86. Information about the plug-in, including how to sign up for the beta, is available at http://bluemedora.com under the "Products" tab.

    NetApp Storage Plug-in
    NetApp believes the combination of storage system monitoring with comprehensive management of Oracle systems in Enterprise Manager will help customers reduce the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies. So NetApp built a plug-in, and reports that it provides comprehensive availability and performance information for NetApp storage systems. Using the plug-in, Oracle Enterprise Manager customers with NetApp storage solutions can track the association between databases and storage components and thereby respond quickly to faults and I/O performance bottlenecks. With the latest configuration management capabilities, one can also perform drift analysis to make sure all storage systems are configured according to established gold standards. The company is also now signing up beta customers, which can be done at the NetApp Communities site at https://communities.netapp.com/groups/netapp-storage-system-plug-in-for-oem12c-beta.

    Learn More about Enterprise Manager Extensibility
    More plug-ins from other partners are on the way, and I'll be reporting on them here. To learn more about Enterprise Manager and how customers and partners can build plug-ins using the EDK to manage a multi-vendor data center, go to http://oracle.com/enterprisemanager and see the Heterogeneous Management solution area. The site also lists the plug-ins available, with information on how to obtain them. More information about the Oracle Validated Integration program can be found in the OPN Enterprise Manager Knowledge Zone under the "Develop" tab.

    Read the article

  • CodePlex Daily Summary for Sunday, January 23, 2011

    CodePlex Daily Summary for Sunday, January 23, 2011Popular ReleasesCommunity Forums NNTP bridge: Community Forums NNTP Bridge V42: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has added some features / bugfixes: Bugfix: Decoding of Subject now also supports multi-line subjects (occurs only if you have very long subjects with non-ASCII characters)Microsoft All-In-One Code Framework: All-In-One Code Framework 2011-01-23: Improved and Newly Added Examples:For an up-to-date code sample index, please refer to All-In-One Code Framework Sample Catalog. NEW Samples for Azure Name Description Owner CSAzureBingMaps Azure + Bing map sample application (C#) YiLun VBAzureBingMaps Azure + Bing map sample application (VB) Wenchao NEW Samples for ASP.NET Name Description Owner CSASPNETEmailAddressValidator Validate email address in ASP.NET (C#) Jerry VBASPNETEmailAddressValidator...Minecraft Tools: Minecraft Topographical Survey 1.3: MTS requires version 4 of the .NET Framework - you must download it from Microsoft if you have not previously installed it. This version of MTS adds automatic block list updates, so MTS will recognize blocks added in game updates properly rather than drawing them in bright pink. New in this version of MTS: Support for all new blocks added since the Halloween update Auto-update of blockcolors.xml to support future game updates A splash screen that shows while the program searches for upd...StyleCop for ReSharper: StyleCop for ReSharper 5.1.14996.000: New Features: ============= This release is just compiled against the latest release of JetBrains ReSharper 5.1.1766.4 Previous release: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes...TweetSharp: TweetSharp v2.0.0.0 - Preview 9: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Note: This code is currently preview quality. Preview 9 ChangesAdded support for lists and suggested users Fixes based on user feedback Third Party Library VersionsHammock v1.1.6: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.comjqGrid ASP.Net MVC Control: Version 1.2.0.0: jqGrid 3.8 support jquery 1.4 support New and exciting features Many bugfixes Complete separation from the jquery, & jqgrid codeMediaScout: MediaScout 3.0 Preview 4: Update ReleaseCoding4Fun Tools: Coding4Fun.Phone.Toolkit v1: Coding4Fun.Phone.Toolkit v1MFCMAPI: January 2011 Release: Build: 6.0.0.1024 Full release notes at SGriffin's blog. If you just want to run the tool, get the executable. If you want to debug it, get the symbol file and the source. The 64 bit build will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit build, regardless of the operating system. 
Facebook BadgeAutoLoL: AutoLoL v1.5.4: Added champion: Renekton Removed automatic file association Fix: The recent files combobox didn't always open a file when an item was selected Fix: Removing a recently opened file caused an errorDotNetNuke® Community Edition: 05.06.01: Major Highlights Fixed issue to remove preCondition checks when upgrading to .Net 4.0 Fixed issue where some valid domains were failing email validation checks. Fixed issue where editing Host menu page settings assigns the page to a Portal. Fixed issue which caused XHTML validation problems in 5.6.0 Fixed issue where an aspx page in any subfolder was inaccessible. Fixed issue where Config.Touch method signature had an unintentional breaking change in 5.6.0 Fixed issue which caused...MiniTwitter: 1.65: MiniTwitter 1.65 ???? ?? List ????? in-reply-to ???????? ????????????????????????? ?? OAuth ????????????????????????????ASP.net Ribbon: Version 2.1: Tadaaa... So Version 2.1 brings a lot of things... Have a look at the homepage to see what's new. Also, I wanted to (really) improve the Designer. I wanted to add great things... but... it took to much time. And as some of you were waiting for fixes, I decided just to fix bugs and add some features. So have a look at the demo app to see new features. Thanks ! (You can expect some realeses if bugs are not fixed correctly... 2.2, 2.3, 2.4....)iTracker Asp.Net Starter Kit: Version 3.0.0: This is the inital release of the version 3.0.0 Visual Studio 2010 (.Net 4.0) remake of the ITracker application. I connsider this a working, stable application but since there are still some features missing to make it "complete" I'm leaving it listed as a "beta" release. I am hoping to make it feature complete for v3.1.0 but anything is possible.mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.52.1 beta 2: New MVC3 RTM fix bug: Dropdown select fix bug: Add Store/Department and Add Store/Produser WEB.mytrip.mvc 1.0.52.1 Web for install hosting System Requirements: NET 4.0, MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) SRC.mytrip.mvc 1.0.52.1 System Requirements: Visual Studio 2010 or Web Deweloper 2010 MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) Connector/Net...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.6.1: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager changes: RenderView controller extension works for razor also live demo switched to razorBloodSim: BloodSim - 1.3.3.1: - Priority update to resolve a bug that was causing Boss damage to ignore Blood Shields entirelyRawr: Rawr 4.0.16 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of this release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you have a problem, please follow the Posting Guidelines and put it into the Issue Tracker. W...MvcContrib: an Outer Curve Foundation project: MVC 3 - 3.0.51.0: Please see the Change Log for a complete list of changes. 
MVC BootCamp Description of the releases: MvcContrib.Release.zip MvcContrib.dll MvcContrib.TestHelper.dll MvcContrib.Extras.Release.zip T4MVC. The extra view engines / controller factories and other functionality which is in the project. This file includes the main MvcContrib assembly. Samples are included in the release. You do not need MvcContrib if you download the Extras.N2 CMS: 2.1.1: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. 2.1.1 Maintenance release List of changes 2.1 Major Changes Support for auto-implemented properties ({get;set;}, based on contribution by And Poulsen) File manager improvements (multiple file upload, resize images to fit) New image gallery Infinite scroll paging on news Content templates First time with N2? Try the demo site Download one of the template packs (above) and open...New ProjectsAutomatic content Publishing for SharePoint 2010 using PowerShell script: While working on SharePoint publishing portals, many times we require the automated publishing of contents. I thought of writing script that has good options to choose from and, will automate the publishing of content.Basic Events: Basic Events makes it easier for developers to use out of the box events with basic properties in their eventargs like string message and datetime.Benedictus 3000: Ici on fait bien les choses. Rien de moins que la totale, et rien de plus. N'essayez pas d'en faire plus que nous, vous allez vous faire mal.CPEBook by FMUG & TPAY: CPEBook by MUG & TPAY Projet dot NET CPEBookDust2: ?.NET???,??ActiveRecord??????????。Dust2????ruby on rails?ActiveRecord???,?????.NET?????C#????。 Dust2????、????、??、??、????? ???????,Dust2???Migrations??!EaseCode: Want acces to paths that require long line codes? we make it 2 words want battery percentage, windows version, username, appdata, program files, etx. you get acces to that all by typing 2 words Enhanced Host File Manager: Enhanced Host File Editor / Manager Intended for web development (developers or designers) to switch between development and live easily, hassle free. Full access to the host file is required. It is assumed that host file is at default location: C:\Windows\System32\drivers\etc\Intervals: This class library, written in C# for .NET 3.5 and 4.0, allows you to work with intervals, with various utility methods that simplify the work. It will allow you to find overlapping regions, create interval slices, do slice calculations, etc.KeyboardMouseHooks C# Library: KeyboardMouseHooks C# classes are part of RamGec Tools collection for .NET developers. It enables you, in a very easy, optimized and OO-way, via C# events system to install and track low level Windows keyboard and mouse hooks.Lightning Talk Countdown Timer: This application is 5 minutes countdown timer for Lightning Talk. ???????????5????????????????。MASCOT2CSV: This simple tool combines input and output data for/to MASCOT protein database into one csv file to make it comparable by human reader.MD5FileCalculator: Windows application showing the MD5 Hash for a fileMux Log Cleaner: A simple text file log cleaner for use with text-based RPGs (MUX, MUSH, MUD, MOO, etc.) which filters out common out of character output and allows custom filters. 
Requires .NET framework 3.5 (4.0 doesn't seem to work with it presently)MyUtil: ???????Orchard Module Visual Studio Project Template: Visual Studio Project Template installer for Orchard ModulesOutlook Social Connector C# sample with Setup Project: The Outlook 2010 Social Connector allows you to integrate (mash-up) feeds from other Web 2.0 sites. This example comes with a Setup project that will help you publish and deploy your project PAEPing: a simple tool for pingingRegExEditor: RegExEditor makes it easier for .Net developers to design Regular Expressions. It is developed in C#.Shevek: Shevek is following the dotnetslackers.com "Building a StackOverflow inspired Knowledge Exchange" articlesstylecopmaker: stylecopmakerSystem.Json: System.Json is a basic implementation of Json parsing and usage, allowing easy consumption of Json data from any source. System.Json is called such as I feel this is the library that should simply be in .NET for handling Json.VtigercrmNet: ?.NET????vtigercrm????WPFTimer: A simple Timer in WPF.Xi Game Engine for XNA: Xi Game Engine is an evolution of the classic Ox Game Engine for XNA. It is a good foundation for modern 2D and 3D game applications. It features an all-in-one UI, 2D, and 3D editor. It is deeply integrated with game physics using Box2D and BEPU.Yet Another Silverlight Popup Menu: A Silverlight 4 popup menu example that includes separators, images, and finished sufficiently to configure in XAML.

    Read the article

  • Unable to Sign in to the Microsoft Online Services Signin application from Windows 7 client located behind ISA firewall

    - by Ravindra Pamidi
    A while ago I helped a customer troubleshoot an authentication problem with the Microsoft Online Services Sign In application. This customer was evaluating Microsoft BPOS (Business Productivity Online Services) and was having trouble using the single sign-on application behind an ISA 2004 firewall. The network structure is fairly simple, with a single Windows 2003 Active Directory domain and Windows 7 clients. On a successful logon, the Microsoft Online Services Sign In application provides single sign-on functionality to all of the Microsoft online services in the BPOS package.

    Symptoms: When trying to sign in, it fails with the error "The service is currently unavailable. Please try again later. If problems continue, contact your service administrator". If the ISA 2004 firewall is removed from the picture, the authentication succeeds.

    Troubleshooting: We enabled ISA Server firewall logging along with the Microsoft Network Monitor tool on the Windows 7 client while reproducing the issue. Analysis of the ISA Server firewall logs and the network capture revealed that when the Microsoft Online Services Sign In application sends its request to the ISA Server, it does not send the domain credentials, so ISA Server responds with HTTP 407 Proxy Authentication Required, listing the supported authentication mechanisms. The application is expected to send the logged-on user's domain credentials in response to this request, but in this case it fails to do so. A bit of research on the Internet revealed that the Microsoft Online Services Sign In application by default does not support outbound Internet proxy authentication. In order for it to send the logged-on user's domain credentials, we had to change its configuration file, SignIn.exe.config, located under the "Program Files\Microsoft Online Services\Sign In" folder. Step-by-step details for editing the configuration file are documented on the Microsoft TechNet website: Configure your outbound authenticating proxy server - http://www.microsoft.com/online/help/en-us/helphowto/cc54100d-d149-45a9-8e96-f248ecb1b596.htm

    After the above problem was addressed, we were still not able to use the Microsoft Online Services Sign In application, and it failed with the same error. Analysis of another network capture revealed that the application was now sending the required credentials but the connection was terminating at a later stage. We enabled verbose logging for the Microsoft Online Services Sign In application and reproduced the problem. The logs revealed a time difference of around seven minutes between the local client and the Microsoft Online Services server, which is above the acceptable time skew of five minutes.

    Excerpt from the Microsoft Online Services Sign In application verbose log:

        1/26/2012 1:57:51 PM Verbose SingleSignOn.GetSSOGenericInterface SSO Interface URL: https://signinservice.apac.microsoftonline.com/ssoservice/UID
        1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn The security timestamp is invalid because its creation time ('2012-01-26T08:34:52.767Z') is in the future. Current time is '2012-01-26T08:27:52.987Z' and allowed clock skew is '00:05:00'.
        1/26/2012 1:57:52 PM Exception SSOSignIn.SignIn

    Although the Windows 7 clients successfully synchronized time with the domain controller for the domain, the domain controller itself was not configured to synchronize time with an external NTP server. This caused a gradual drift in time on the network, resulting in the issue above. We reconfigured the domain controller holding the PDC FSMO role to synchronize time with an external time source (time.nist.gov) and edited the system policy on the ISA Server firewall to allow NTP traffic to time.nist.gov.

    Configure the time source for the forest: Windows Time Service - http://technet.microsoft.com/en-us/library/cc794937(WS.10).aspx

    We then forced synchronization of Windows time using the command w32tm /resync on the domain controller and later on the clients, each of which corrected the seven-minute difference. This resolved the problem with logon to Microsoft Online Services Sign In.
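
    For reference, the two fixes boil down to a proxy setting and a time-sync setting. A minimal sketch of the SignIn.exe.config change, assuming the standard .NET outbound proxy configuration described in the TechNet article above (the proxy address is a placeholder for the ISA Server):

        <configuration>
          <system.net>
            <defaultProxy enabled="true" useDefaultCredentials="true">
              <proxy proxyaddress="http://isaserver:8080" bypassonlocal="true" />
            </defaultProxy>
          </system.net>
        </configuration>

    And a sketch of pointing the PDC emulator at the external time source and forcing a resync, per the Windows Time Service article linked above (run from an elevated command prompt on the domain controller):

        w32tm /config /manualpeerlist:"time.nist.gov" /syncfromflags:manual /reliable:yes /update
        net stop w32time && net start w32time
        w32tm /resync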

    Read the article

  • Why does Cacti keep waiting for dead poller processes?

    - by Oliver Salzburg
    Sorry for the length. I am currently setting up a new Debian (6.0.5) server. I put Cacti (0.8.7g) on it yesterday and have been battling with it ever since.

    Initial issue
    The initial issue I was observing was that my graphs weren't updating. So I checked my cacti.log and found this concerning message:

        POLLER: Poller[0] Maximum runtime of 298 seconds exceeded. Exiting.

    That can't be good, right? So I went checking and started poller.php myself (via sudo -u www-data php poller.php --force). It pumps out a lot of messages (which all look like what I would expect) and then hangs for a minute. After that minute, it loops the following message:

        Waiting on 1 of 1 pollers.

    This goes on for 4 more minutes until the process is forcefully ended for running longer than 300s.

    So far so good
    I went on for a good hour trying to determine what poller might still be running, until I came to the conclusion that there simply is no running poller.

    Debugging
    I checked poller.php to see how that warning is issued and why. On line 368, Cacti retrieves the number of finished processes from the database and uses that value to calculate how many processes are still running. So, let's see that value! I added the following debug code into poller.php:

        print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n";

    Result
    This prints the following within seconds of starting poller.php:

        Finished: 0 - Started: 1
        Waiting on 1 of 1 pollers.
        Finished: 1 - Started: 1

    So the values are being read and are valid. Until we get to the part where it keeps looping:

        Finished:  - Started: 1
        Waiting on 1 of 1 pollers.

    Suddenly, the value is gone. Why? Putting var_dump() in there confirms the issue:

        NULL
        Finished:  - Started: 1
        Waiting on 1 of 1 pollers.

    The return value is NULL. How can that be when querying SELECT COUNT()...? (SELECT COUNT() should always return one result row, shouldn't it?)

    More debugging
    So I went into lib/database.php and had a look at that db_fetch_cell(). A bit of testing confirmed that the result set is actually empty. So I added my own database query code in there to see what that would do:

        $finished_processes = db_fetch_cell("SELECT count(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'");
        print "Finished: " . $finished_processes . " - Started: " . $started_processes . "\n";

        $mysqli = new mysqli("localhost","cacti","cacti","cacti");
        $result = $mysqli->query("SELECT COUNT(*) FROM poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00';");
        $row = $result->fetch_assoc();
        var_dump( $row );

    This will output:

        Finished:  - Started: 1
        array(1) {
          ["COUNT(*)"]=>
          string(1) "2"
        }
        Waiting on 1 of 1 pollers.

    So, the data is there and can be accessed without any problems, just not with the method Cacti is using?

    Double-check that!
    I enabled MySQL logging to make sure I'm not imagining things. Sure enough, when the error message is looped, the cacti.log reads as if it was querying like mad:

        06/29/2012 08:44:00 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"
        06/29/2012 08:44:01 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"
        06/29/2012 08:44:02 PM - CMDPHP: Poller[0] DEVEL: SQL Cell: "SELECT count(*) FROM cacti.poller_time WHERE poller_id=0 AND end_time>'0000-00-00 00:00:00'"

    But none of these queries are logged by MySQL. Yet, when I add my own database query code, it shows up just fine. What the heck is going on here?
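
    For completeness, the MySQL logging mentioned above can be switched on at runtime, assuming MySQL 5.1 or later as shipped with Debian 6 (the log file path is just an example):

        SET GLOBAL general_log_file = '/var/log/mysql/mysql-general.log';
        SET GLOBAL general_log = 'ON';
        -- reproduce the looping poller, then turn it back off:
        SET GLOBAL general_log = 'OFF';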

    Read the article

  • One email user keeps disconnecting from our exchange server

    - by Funky Si
    I have one user who keeps reporting that Outlook disconnects from our email server; all other users are fine. Our email server is running Exchange 2010 and the client is running Outlook 2003. The disconnection only lasts a moment. I have checked the logs on the client and on the Exchange server and cannot see any reason for the disconnect. On the client I get EventId 26 telling me Outlook has disconnected and reconnected, but no reason why. Can anyone give me some suggestions of things to try to track down where the problem could be?

    --Update--
    I have found the following log file, C:\Program Files\Microsoft\Exchange Server\V14\Logging\RPC Client Access, which suggests that it is a problem with RPC sessions. An excerpt is below:

        2013-01-31T15:21:24.015Z,6413,15,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,0,00:00:00,"BS=Conn:24,HangingConn:0,AD:$null/$null/0%,CAS:$null/$null/2%,AB:$null/$null/0%,RPC:$null/$null/1%,FC:$null/0,Policy:ClientThrottlingPolicy2,Norm[Resources:(Mdb)Mailbox Database 0765959540(Health:-1%,HistLoad:0),(Mdb)Public Folder Database 1945427388(Health:-1%,HistLoad:0),];GC:6/1/0;",
        2013-01-31T15:21:24.015Z,6413,16,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,PublicLogoff,0,00:00:00,LogonId: 1,
        2013-01-31T15:21:24.015Z,6413,16,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00,"BS=Conn:24,HangingConn:0,AD:$null/$null/0%,CAS:$null/$null/2%,AB:$null/$null/0%,RPC:$null/$null/1%,FC:$null/0,Policy:ClientThrottlingPolicy2,Norm[Resources:(Mdb)Mailbox Database 0765959540(Health:-1%,HistLoad:0),(Mdb)Public Folder Database 1945427388(Health:-1%,HistLoad:0),];GC:6/1/0;",
        2013-01-31T15:21:24.015Z,6417,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,OwnerLogoff,0,00:00:00,LogonId: 0,
        2013-01-31T15:21:24.015Z,6417,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,0x6BA (rpc::Exception),00:02:54.7668000,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0; SessionDropped,"RpcEndPoint: [ServerUnavailableException] Connection must be re-established - [SessionDeadException] Connection doesn't have any open logons, but has client activity. This may be masking synchronization stalls. Dropping a connection."
        2013-01-31T15:21:24.015Z,6420,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00,LogonId: 0,
        2013-01-31T15:21:24.031Z,6420,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00.0156000,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0,
        2013-01-31T15:21:24.031Z,6420,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,Disconnect,0,00:02:54.2364000,,
        2013-01-31T15:21:24.031Z,6419,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,OwnerLogoff,0,00:00:00,LogonId: 0,
        2013-01-31T15:21:24.031Z,6419,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0,
        2013-01-31T15:21:24.031Z,6419,5,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,Disconnect,0,00:02:54.4392000,,
        2013-01-31T15:21:24.031Z,6416,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,DelegateLogoff,0,00:00:00,LogonId: 0,
        2013-01-31T15:21:24.031Z,6416,7,/o=EUROSAFEUK/ou=first administrative group/cn=Recipients/cn=andy,,OUTLOOK.EXE,11.0.8303.0,Classic,,,ncacn_ip_tcp,,,,00:00:00,Budget Highs [AD = 0][CAS = 3][RPC = 1] Session Throttled Count = 0,

    Can anyone help point me in the right direction for a solution?
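
    One thing worth checking, since the budget entries above reference ClientThrottlingPolicy2: whether this user's RPC Client Access throttling limits are being hit. A sketch from the Exchange Management Shell (the mailbox and policy names are taken from the log excerpt above):

        Get-Mailbox andy | Format-List Name,ThrottlingPolicy
        Get-ThrottlingPolicy ClientThrottlingPolicy2 | Format-List RCAMaxConcurrency,RCAPercentTimeIn*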

    Read the article

  • opennms postgres connection slow

    - by krisdigitx
    I am running the OpenNMS application server on a physical server and the database on an ESXi VM. Recently the OpenNMS web console has been very slow to load, so I deleted most of the events from the database table. Now both servers have no load at all, and a psql connection from the application server to the database server is also very fast, but somehow the OpenNMS web console is still slow.

    This is the strace from the OpenNMS process:

        18629 futex(0x2aaac77d8a84, FUTEX_WAIT_PRIVATE, 453, NULL <unfinished ...>
        3015  futex(0x2aaabc4a2ee4, FUTEX_WAIT_PRIVATE, 323, NULL <unfinished ...>
        10863 futex(0x2aaabbebaa94, FUTEX_WAIT_PRIVATE, 395, NULL <unfinished ...>
        25260 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10859 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10982 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        3011  <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        25260 futex(0x2aaae098fc28, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10982 futex(0x2aaac0eaf928, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        3011  futex(0x2aaab0cb1728, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10859 futex(0x2aaac062c328, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        25260 <... futex resumed> ) = 0
        10982 <... futex resumed> ) = 0
        3011  <... futex resumed> ) = 0
        10859 <... futex resumed> ) = 0
        25260 futex(0x2aaabc38b6b4, FUTEX_WAIT_PRIVATE, 443, NULL <unfinished ...>
        10982 futex(0x2aaabc5d7b94, FUTEX_WAIT_PRIVATE, 99, NULL <unfinished ...>
        3011  futex(0x2aaac7c55334, FUTEX_WAIT_PRIVATE, 183, NULL <unfinished ...>
        10859 futex(0x2aaabbb8c9d4, FUTEX_WAIT_PRIVATE, 347, NULL <unfinished ...>
        10846 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10846 futex(0x2aaae9022428, FUTEX_WAKE_PRIVATE, 1) = 0
        10846 futex(0x2aaabe0030b4, FUTEX_WAIT_PRIVATE, 251, NULL <unfinished ...>
        20281 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        14100 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        2925  <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        10843 <... futex resumed> ) = -1 ETIMEDOUT (Connection timed out)
        20281 futex(0x2aaac7e93628, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        14100 futex(0x2aaac04e8c28, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        2925  futex(0x2aaaec085528, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>
        10843 futex(0x2aaab20b0528, FUTEX_WAKE_PRIVATE, 1 <unfinished ...>

    It shows lots of connection timeouts. I think it is the connection between the Java application and the database that is causing issues. Any ideas how to troubleshoot this?
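
    For what it's worth, futex waits that end in ETIMEDOUT are normal for a mostly idle JVM (threads parking with a timeout), so the strace above does not necessarily point at the database. A thread dump of the OpenNMS Java process would show what the web console threads are actually blocked on; a sketch, assuming a JDK is installed and <pid> is the OpenNMS JVM (run as the same user as that process):

        jstack <pid> > /tmp/opennms-threads.txt
        # or, without jstack, dump to the JVM's stdout log:
        kill -3 <pid>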

    Read the article

  • Long connection times from PHP to MySQL on EC2

    - by Erik Giberti
    I'm having an intermittent issue connecting to a database slave with InnoDB. Intermittently I get connections taking longer than 2 seconds. These servers are hosted on Amazon's EC2. The app server is PHP 5.2/Apache running on Ubuntu. The DB slave is running Percona's XtraDB 5.1 on Ubuntu 9.10. It's using an EBS RAID array for the data storage. We already use skip name resolve and bind to address 0.0.0.0.

    This is a stub of the PHP code that's failing:

        $tmp = mysqli_init();
        $start_time = microtime(true);
        $tmp->options(MYSQLI_OPT_CONNECT_TIMEOUT, 2);
        $tmp->real_connect($DB_SERVERS[$server]['server'],
                           $DB_SERVERS[$server]['username'],
                           $DB_SERVERS[$server]['password'],
                           $DB_SERVERS[$server]['schema'],
                           $DB_SERVERS[$server]['port']);
        if(mysqli_connect_errno()){
            $timer = microtime(true) - $start_time;
            mail($errors_to,'DB connection error',$timer);
        }

    There's more than 300MB available on the DB server for new connections and the server is nowhere near the max allowed (60 of 1,200). Loading on both servers is < 2 on 4-core m1.xlarge instances.

    Some highlights from the MySQL config:

        max_connections = 1200
        thread_stack = 512K
        thread_cache_size = 1024
        thread_concurrency = 16
        innodb-file-per-table
        innodb_additional_mem_pool_size = 16M
        innodb_buffer_pool_size = 13G

    Any help on tracing the source of the slowdown is appreciated.

    [EDIT] I have been updating the sysctl values for the network but they don't seem to be fixing the problem. I made the following adjustments on both the database and application servers:

        net.ipv4.tcp_window_scaling = 1
        net.ipv4.tcp_sack = 0
        net.ipv4.tcp_timestamps = 0
        net.ipv4.tcp_fin_timeout = 20
        net.ipv4.tcp_keepalive_time = 180
        net.ipv4.tcp_max_syn_backlog = 1280
        net.ipv4.tcp_synack_retries = 1
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 87380 16777216

    [EDIT] Per jaimieb's suggestion, I added some tracing and captured the following data using time. This server handles about 51 queries/second at this time of day. The connection error was raised once (at 13:06:36) during the 3 minute window outlined below. Since there was 1 failure and roughly 9,200 successful connections, I think this isn't going to produce anything meaningful in terms of reporting.

    Script (redirects restored; the operators were stripped by the original formatting):

        date >> /root/database_server.txt
        (time mysql -h database_Server -D schema_name -u appuser -p apppassword -e '') > /dev/null 2>> /root/database_server.txt

    Results:

        === Application Server 1 ===
        Mon Feb 22 13:05:01 EST 2010
        real    0m0.008s    user    0m0.001s    sys    0m0.000s
        Mon Feb 22 13:06:01 EST 2010
        real    0m0.007s    user    0m0.002s    sys    0m0.000s
        Mon Feb 22 13:07:01 EST 2010
        real    0m0.008s    user    0m0.000s    sys    0m0.001s

        === Application Server 2 ===
        Mon Feb 22 13:05:01 EST 2010
        real    0m0.009s    user    0m0.000s    sys    0m0.002s
        Mon Feb 22 13:06:01 EST 2010
        real    0m0.009s    user    0m0.001s    sys    0m0.003s
        Mon Feb 22 13:07:01 EST 2010
        real    0m0.008s    user    0m0.000s    sys    0m0.001s

        === Database Server ===
        Mon Feb 22 13:05:01 EST 2010
        real    0m0.016s    user    0m0.000s    sys    0m0.010s
        Mon Feb 22 13:06:01 EST 2010
        real    0m0.006s    user    0m0.010s    sys    0m0.000s
        Mon Feb 22 13:07:01 EST 2010
        real    0m0.016s    user    0m0.000s    sys    0m0.010s

    [EDIT] Per a suggestion received on a LinkedIn question, I tried setting the back_log value higher. We had been running the default value (50) and increased it to 150. We also raised the kernel value /proc/sys/net/core/somaxconn (maximum socket connections) to 256 on both the application and database server from the default 128. We did see some elevation in processor utilization as a result but still received connection timeouts.
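
    Given the back_log and somaxconn changes above, it may also be worth checking whether the listen queue on the database server is actually overflowing; a quick sketch (the exact counter wording varies by kernel version):

        netstat -s | egrep -i 'listen|SYN'
        # look for "times the listen queue of a socket overflowed"
        # and "SYNs to LISTEN sockets dropped/ignored"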

    Read the article

  • Issues Converting Plain Text Into Microsoft Word Bulleted Lists

    - by user787832
    I'm a programmer. I hate status reports. I found a way to live with it. While I am working in my IDE (Visual SlickEdit) I keep a plain text file open in one of the file/buffer tabs. As I finish things I just jot down a quick note into that file. At the end of the week that becomes my weekly status report. Example entries:

        The Datatables.net plugin runs very slowly in IE 8 with more than 2,000 records. I changed the way I did the server side code to process the data to make less work for the plugin to get decent performance for the IE 8 users.

        I made a class to wrap data from the new data collection objects into the legacy data holder objects. This will let the new database code be backward compatible with the legacy code until we can replace it.

        I found the bug reported by Jane. The software is fine. The database we use for the test site has data that is corrupted in a way it wouldn't be for the production site.

    At the end of the month I go back to each weekly *.txt file and paste all of the entries into an MS Word file for a monthly report. I give the monthly report to a liaison to the contracting company, who has to compile everyone's monthly reports into a single MS Word 2007 document.

    His problem, soon to be my problem, comes when he highlights my paragraphs like the ones above to put bullets in front of them. When he highlights my notes to add bullets with MS Word 2007, Word rearranges the text a bit and the newline/carriage return characters stagger the text so it is no longer in neat chunks.

    This:

        I found the bug reported by Jane. The software is fine. The database we use for the test site has data that is corrupted in a way it wouldn't be for production site

    Becomes this:

        I found the bug reported by Jane. The software is fine. The database we use for the test site has data that is corrupted in a way it wouldn't be for production site

    I tried turning word wrap on in my IDE for the text files I put my status notes in; it just puts some kind of newline character in anyway. Searching/replacing those characters in the text files destroys the paragraphs. Once my notes are pasted into MS Word, Word automatically translates them into paragraph breaks, and searching/replacing them there has similar results: the blank lines separating the notes disappear. One big mess.

    What I would like is to be able to keep adding my status notes to a text file as I am now, but do something different when I paste the notes into MS Word so that my liaison can select the text, hit the bulleting command and NOT have the staggered text described above. Any ideas? Thanks much in advance. Steve
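
    One possible workaround along these lines, sketched as a small C# console utility (the file names and the assumption that notes are separated by blank lines are illustrative, not from the post): collapse the line breaks inside each note into spaces before pasting, so each note becomes a single paragraph and Word's bulleting puts exactly one bullet per note.

        using System;
        using System.IO;
        using System.Linq;
        using System.Text.RegularExpressions;

        class NotesFlattener
        {
            static void Main(string[] args)
            {
                if (args.Length < 2)
                {
                    Console.WriteLine("Usage: NotesFlattener <notes.txt> <notes-clean.txt>");
                    return;
                }

                string input = File.ReadAllText(args[0]);

                // Notes are assumed to be separated by one or more blank lines.
                var notes = Regex.Split(input, @"(?:\r?\n){2,}")
                                 .Where(n => n.Trim().Length > 0);

                // Inside a note, collapse hard line breaks and runs of whitespace into single
                // spaces, so each note becomes exactly one paragraph when pasted into Word.
                var cleaned = notes.Select(n => Regex.Replace(n, @"\s+", " ").Trim());

                // Write the notes back out, one paragraph per note, blank line between notes.
                File.WriteAllText(args[1],
                    string.Join(Environment.NewLine + Environment.NewLine, cleaned));
            }
        }

    Pasting the contents of the cleaned file into Word then gives one paragraph per note, which bullets cleanly.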

    Read the article

  • Converting QXmlItem to QtDomElement or similar?

    - by EightyEight
    Hello everyone. I'm parsing a fairly complicated XML file of the following structure:

        <root>
          ...
          <item>
            <subitem id="1"/>
            <text> text1 </text>
          </item>
          <item>
            <subitem id="2"/>
            <text> text2 </text>
          </item>
          ...
          <item>
            ...
          </item>
          ...
        </root>

    It's pretty crude but you get my drift, I hope. I'm primarily interested in the "item" nodes, so I wrote the following code (directly out of Qt's online manual):

        QXmlQuery query;
        query.setQuery("//item/");
        QXmlResultItems result;
        query.evaluateTo(&result);
        QXmlItem item(result.next());
        while (!item.isNull()) {
            if (item.isNode()) {
                // WHAT DO I DO NOW?
            }
            item = result.next();
        }

    Now, QXmlItem appears to represent two concepts: a literal value (like a string) or a node (which is what item.isNode() is testing). Unfortunately, I can't grasp how to convert the QXmlItem to something that will be query-able again. In particular, from the example above I'd like to grab the "id" attribute and the text element. Can I do this using the XQuery approach, or am I way off base here? Any advice? Thanks!

    Read the article

  • Best Practice: QT4 QList<Mything*>... on Heap, or QList<Mything> using reference?

    - by Mike Crowe
    Hi Folks, Learning C++, so be gentle :)... I have been designing my application primarily using heap variables (coming from C), so I've designed structures like this:

        QList<Criteria*> _Criteria;
        // ...
        Criteria *c = new Criteria(....);
        _Criteria.append(c);

    All through my program, I'm passing pointers to specific Criteria, or often the list. So, I have functions declared like this:

        QList<Criteria*> Decision::addCriteria(int row, QString cname, QString ctype);
        Criteria * Decision::getCriteria(int row, int col)

    which insert a Criteria into a list, and return the list so my GUI can display it. I'm wondering if I should have used references, somehow. Since I'm always wanting that exact Criteria back, should I have done:

        QList<Criteria> _Criteria;
        // ....
        Criteria c(....);
        _Criteria.append(c);
        ...
        QList<Criteria>& Decision::addCriteria(int row, QString cname, QString ctype);
        Criteria& Decision::getCriteria(int row, int col)

    (not sure if the latter line is syntactically correct yet, but you get the drift). All these items are specific, quasi-global items that are the core of my program. So, the question is this: I can certainly allocate/free all my memory w/o an issue in the method I'm using now, but is there a more C++ way? Would references have been a better choice? (It's not too late to change on my side.) TIA Mike

    Read the article

  • Are there any configurable parameters to the gpsd?

    - by danatel
    I use the gpsd daemon with my application. Sometimes, gpsd ceases to work for no apparent reason (clean sky). Even the gpsmon monitor shows no fix. Are there any parameters which must be set? Or is it a hardware problem? I am surprised that many satellites are visible but the "Stat" bitmap does not contain bit 7 - ephemeris data available. Should I somehow pre-configure my position to allow for correct ephemeris data?

    Here is my gpsmon screen (127.0.0.1:2947:/dev/ttyS3, SiRF binary mode), cleaned up from the terminal dump:

        Pos:  X 3949260   Y 1166016   Z 4856299 m     49.89411°  16.44920°   1379 m
        Vel:  0.0  0.0  0.0 m/s      North 0.0  East 0.0  climb 0.0 m/s
        Week+TOW: 1578+224837.06   Day: 2 14:27:17.06   Heading: 0.0°   speed 0.0 m/s
        Skew: -13.025817   TZ: -7200   HDOP: 0.0   M1: 00   M2: 00
        Fix: 0                                         (Packet type 2, 0x02)

        Ch PRN  Az  El  Stat  C/N                      (Packet type 4, 0x04)
         0   2 243  19  003f  40.4
         1  10 249  68  003f  43.0
         2  13  90  30  003f  40.9
         3   7  66  67  003f  39.8
         4   5 295  49  003d  39.7
         5   8 210  69  003f  41.0
         6  23  96   5  002d  28.0
         7   6  43   3  002d  23.1
         8  28 163  16  003f  39.8
         9   0   0   0  0000   0.0
        10   3  55   4  002d  24.7
        11   0   0   0  0000   0.0

        Packet type 6  (0x06): Version: (blank)
        Packet type 7  (0x07): SVs: 0   Drift: 96506   Bias: 135976716   Estimated GPS Time: 224837059
        Packet type 9  (0x09): Max: 167.570   Lat: 132.129   Time: 0.075   MS: 02
        Packet type 13 (0x0D): SVs: 11 = 8 10 7 5 13 2 28 23 3 6 4
        Packet type 27 (0x1B): DGPS source: 1 (SBAS)   Corrections: 12
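
    For what it's worth, gpsd itself has very few runtime knobs; on Debian-style systems they usually live in /etc/default/gpsd. A sketch (exact option support depends on the gpsd version):

        # /etc/default/gpsd
        START_DAEMON="true"
        DEVICES="/dev/ttyS3"
        GPSD_OPTIONS="-n"   # -n: poll the GPS even when no client is connected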

    Read the article

  • How to parse text as JavaScript?

    - by Danjah
    This question of mine (currently unanswered) drove me toward finding a better solution to what I'm attempting.

    My requirements:

    chunks of code which can be arbitrarily added into a document, without an identifier:

        [div class="thing"]
          [elements... /]
        [/div]

    the objects are scanned for and found by an external script:

        var things = yd.getElementsBy(function(el){ return yd.hasClass('thing'); }, null, document);

    the objects must be individually configurable; what I have currently is identifier-based:

        [div class="thing" id="thing0"]
          [elements... /]
          [script type="text/javascript"]
            new Thing().init({
              id:'thing0';
            });
          [/script]
        [/div]

    So I need to ditch the identifier (id="thing0") so there are no duplicates when more than one chunk of the same code is added to a page. I still need to be able to config these objects individually, without an identifier.

    SO! All of that said, I wondered about creating a dynamic global variable within the script block of each added chunk of code, within its script tag. As each 'thing' is found, I figure it would be legit to grab the innerHTML of the script tag and somehow convert that text into a usable JS object.

    Discuss. Ok, don't discuss if you like, but if you get the drift then feel free to correct my wayward thinking or provide a better solution - please!

    d

    Read the article

  • How accurately (in terms of time) does Windows play audio?

    - by MusiGenesis
    Let's say I play a stereo WAV file with 317,520,000 samples, which is theoretically 1 hour long. Assuming no interruptions of the playback, will the file finish playing in exactly one hour, or is there some occasional tiny variation in the playback speed such that it would be slightly more or slightly less (by some number of milliseconds) than one hour? I am trying to synchronize animation with audio, and I am using a System.Diagnostics.Stopwatch to keep the frames matching the audio. But if the playback speed of WAV audio in Windows can vary slightly over time, then the audio will drift out of sync with the Stopwatch-driven animation.

    Which leads to a second question: it appears that a Stopwatch - while highly granular and accurate for short durations - runs slightly fast. On my laptop, a Stopwatch run for exactly 24 hours (as measured by the computer's system time and a real stopwatch) shows an elapsed time of 24 hours plus about 5 seconds (not milliseconds). Is this a known problem with Stopwatch? (A related question would be "am I crazy?", but you can try it for yourself.) Given its usage as a diagnostics tool, I can see where a discrepancy like this would only show up when measuring long durations, for which most people would use something other than a Stopwatch.

    If I'm really lucky, then both Stopwatch and audio playback are driven by the same underlying mechanism, and thus will stay in sync with each other for days on end. Any chance this is true?
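
    A rough way to measure the Stopwatch-versus-wall-clock drift described above (a sketch; it assumes the system clock itself is NTP-disciplined, otherwise it compares two drifting clocks):

        using System;
        using System.Diagnostics;
        using System.Threading;

        class StopwatchDriftTest
        {
            static void Main()
            {
                Console.WriteLine("IsHighResolution: {0}, Frequency: {1} ticks/s",
                                  Stopwatch.IsHighResolution, Stopwatch.Frequency);

                DateTime wallStart = DateTime.UtcNow;
                Stopwatch sw = Stopwatch.StartNew();

                while (true)
                {
                    Thread.Sleep(TimeSpan.FromMinutes(10));
                    TimeSpan wall = DateTime.UtcNow - wallStart;
                    TimeSpan drift = sw.Elapsed - wall;
                    Console.WriteLine("wall: {0}  stopwatch: {1}  drift: {2:F1} ms",
                                      wall, sw.Elapsed, drift.TotalMilliseconds);
                }
            }
        }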

    Read the article

  • What is an elegant way to solve this max and min problem in Ruby or Python?

    - by ????
    The following can be done step by step in a somewhat clumsy way, but I wonder if there is an elegant method to do it.

    There is a page, http://www.mariowiki.com/Mario_Kart_Wii, where there are two tables. The first looks like:

        Mario        -  6  2  2  3  -  -
        Luigi        2  6  -  -  -  -  -
        Diddy Kong   -  -  3  -  3  -  5
        [...]

    The names "Mario", etc. are the Mario Kart Wii character names. The numbers are bonus points for:

        Speed
        Weight
        Acceleration
        Handling
        Drift
        Off-Road
        Mini-Turbo

    and then there is table 2:

        Standard Bike S            39  21  51  51  54  43  48  Out
        Bullet Bike                53  24  32  35  67  29  67  In
        Bubble Bike / Jet Bubble   48  27  40  40  45  35  37  In
        [...]

    These are the characteristics for each bike or kart.

    I wonder what the most elegant solution is for finding all the maximum combinations of Speed, Weight, Acceleration, etc., and also the minimum, either by directly using the HTML on that page or by copying and pasting the numbers into a text file.

    Actually, in that character table, Mario through Bowser Jr. are all medium characters, Baby Mario through Dry Bones are small characters, and the rest are all big characters, except that the small, medium, or large Mii is just what the name says. Small characters can only ride small bikes or small karts, and so forth for medium and large.

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 1

    - by rajbk
    This tutorial walks you through creating an report based on the Northwind sample database. You will add a client report definition file (RDLC), create a dataset for the RDLC, define queries using LINQ to Entities, design the report and add a ReportViewer web control to render the report in a ASP.NET web page. The report will have a chart control. Different results will be generated by changing filter criteria. At the end of the walkthrough, you should have a UI like the following.  From the UI below, a user is able to view the product list and can see a chart with the sum of Unit price for a given category. They can filter by Category and Supplier. The drop downs will auto post back when the selection is changed.  This demo uses Visual Studio 2010 RTM. This post is split into three parts. The last part has the sample code attached. Creating an ASP.NET report using Visual Studio 2010 - Part 2 Creating an ASP.NET report using Visual Studio 2010 - Part 3   Lets start by creating a new ASP.NET empty web application called “NorthwindReports” Creating the Data Access Layer (DAL) Add a web form called index.aspx to the root directory. You do this by right clicking on the NorthwindReports web project and selecting “Add item..” . Create a folder called “DAL”. We will store all our data access methods and any data transfer objects in here.   Right click on the DAL folder and add a ADO.NET Entity data model called Northwind. Select “Generate from database” and click Next. Create a connection to your database containing the Northwind sample database and click Next.   From the table list, select Categories, Products and Suppliers and click next. Our Entity data model gets created and looks like this:    Adding data transfer objects Right click on the DAL folder and add a ProductViewModel. Add the following code. This class contains properties we need to render our report. public class ProductViewModel { public int? ProductID { get; set; } public string ProductName { get; set; } public System.Nullable<decimal> UnitPrice { get; set; } public string CategoryName { get; set; } public int? CategoryID { get; set; } public int? SupplierID { get; set; } public bool Discontinued { get; set; } } Add a SupplierViewModel class. This will be used to render the supplier DropDownlist. public class SupplierViewModel { public string CompanyName { get; set; } public int SupplierID { get; set; } } Add a CategoryViewModel class. public class CategoryViewModel { public string CategoryName { get; set; } public int CategoryID { get; set; } } Create an IProductRepository interface. This will contain the signatures of all the methods we need when accessing the entity model.  This step is not needed but follows the repository pattern. interface IProductRepository { IQueryable<Product> GetProducts(); IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID); IQueryable<SupplierViewModel> GetSuppliers(); IQueryable<CategoryViewModel> GetCategories(); } Create a ProductRepository class that implements the IProductReposity above. The methods available in this class are as follows: GetProducts – returns an IQueryable of all products. GetProductsProjected – returns an IQueryable of ProductViewModel. The method filters all the products based on SupplierId and CategoryId if any. It then projects the result into the ProductViewModel. 
GetSuppliers() – returns an IQueryable of all suppliers projected into a SupplierViewModel GetCategories() – returns an IQueryable of all categories projected into a CategoryViewModel  public class ProductRepository : IProductRepository { /// <summary> /// IQueryable of all Products /// </summary> /// <returns></returns> public IQueryable<Product> GetProducts() { var dataContext = new NorthwindEntities(); var products = from p in dataContext.Products select p; return products; }   /// <summary> /// IQueryable of Projects projected /// into the ProductViewModel class /// </summary> /// <returns></returns> public IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID) { var projectedProducts = from p in GetProducts() select new ProductViewModel { ProductID = p.ProductID, ProductName = p.ProductName, UnitPrice = p.UnitPrice, CategoryName = p.Category.CategoryName, CategoryID = p.CategoryID, SupplierID = p.SupplierID, Discontinued = p.Discontinued }; // Filter on SupplierID if (supplierID.HasValue) { projectedProducts = projectedProducts.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { projectedProducts = projectedProducts.Where(a => a.CategoryID == categoryID); }   return projectedProducts; }     public IQueryable<SupplierViewModel> GetSuppliers() { var dataContext = new NorthwindEntities(); var suppliers = from s in dataContext.Suppliers select new SupplierViewModel { SupplierID = s.SupplierID, CompanyName = s.CompanyName }; return suppliers; }   public IQueryable<CategoryViewModel> GetCategories() { var dataContext = new NorthwindEntities(); var categories = from c in dataContext.Categories select new CategoryViewModel { CategoryID = c.CategoryID, CategoryName = c.CategoryName }; return categories; } } Your solution explorer should look like the following. Build your project and make sure you don’t get any errors. In the next part, we will see how to create the client report definition file using the Report Wizard.   Creating an ASP.NET report using Visual Studio 2010 - Part 2
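
    As a quick usage sketch (not spelled out in the walkthrough above; the control IDs ddlCategory and ddlSupplier and the namespace are illustrative), the repository can be wired to the filter drop-downs in index.aspx roughly like this:

        using System;
        using System.Linq;
        using System.Web.UI;
        using NorthwindReports.DAL;   // assumed default namespace for the DAL folder

        public partial class Index : Page
        {
            private readonly ProductRepository _repository = new ProductRepository();

            protected void Page_Load(object sender, EventArgs e)
            {
                if (IsPostBack) return;

                // Populate the Category filter once, on the first load.
                ddlCategory.DataSource = _repository.GetCategories()
                                                    .OrderBy(c => c.CategoryName).ToList();
                ddlCategory.DataTextField = "CategoryName";
                ddlCategory.DataValueField = "CategoryID";
                ddlCategory.DataBind();

                // Populate the Supplier filter the same way.
                ddlSupplier.DataSource = _repository.GetSuppliers()
                                                    .OrderBy(s => s.CompanyName).ToList();
                ddlSupplier.DataTextField = "CompanyName";
                ddlSupplier.DataValueField = "SupplierID";
                ddlSupplier.DataBind();
            }
        }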

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperative ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services), then, was the first Microsoft technology to support the Open Data Protocol in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other other applications in the works. * This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Create the Web Application File –› New –› Project, Select “ASP.NET Empty Web Application” Add the Entity Data Model Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “ADO.NET Entity Data Model” under "Data”. Name the Model “Northwind” and click “Add”.   In the “Choose Model Contents”, select “Generate Model From Database” and click “Next”   Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed. Select “Products” in the “Choose your Database Object” screen.   Click “Finish”. We are done creating our Entity Data Model. Save the Northwind.edmx file created. Add the WCF Data Service Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “WCF Data Service” from the list and call the service “DataService” (creative, huh?). Click “Add”.   Enable Access to the Data Service Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps. public class DataService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } Replace the comment that starts with “/* TODO:” with “NorthwindEntities” (the entity container name of the Model we created earlier).  WCF Data Services is initially locked down by default, FTW! No data is exposed without you explicitly setting it. You have explicitly specify which Entity sets you wish to expose and what rights are allowed by using the SetEntitySetAccessRule. The SetServiceOperationAccessRule on the other hand sets rules for a specified operation. Let us define an access rule to expose the Products Entity we created earlier. We use the EnititySetRights.AllRead since we want to give read only access. Our modified code is shown below. 
public class DataService : DataService<NorthwindEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } We are done setting up our ODataFeed! Compile your project. Right click on DataService.svc and select “View in Browser” to see the OData feed. To view the feed in IE, you must make sure that "Feed Reading View" is turned off. You set this under Tools -› Internet Options -› Content tab.   If you navigate to “Products”, you should see the Products feed. Note also that URIs are case sensitive. ie. Products work but products doesn’t.   Filtering our data OData has a set of system query operations you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request: /DataService.svc/Products?$filter=CategoryID eq 2 At the time of this writing, supported operations are $orderby, $top, $skip, $filter, $expand, $format†, $select, $inlinecount. Pre-filtering our data using Query Interceptors The Product feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF introduces the concept of interceptors which allows us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below. [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.Discontinued == false; } Viewing the feed after compilation will only show products that have not been discontinued. We also confirm this by looking at the WHERE clause in the SQL generated by the entity framework. SELECT [Extent1].[ProductID] AS [ProductID], ... ... [Extent1].[Discontinued] AS [Discontinued] FROM [dbo].[Products] AS [Extent1] WHERE 0 = [Extent1].[Discontinued] Other examples of Query/Change interceptors can be seen here including an example to filter data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Foot Notes * http://msdn.microsoft.com/en-us/data/aa937697.aspx † $format did not work for me. The way to get a Json response is to include the following in the  request header “Accept: application/json, text/javascript, */*” when making the request. This is easily done with most JavaScript libraries.
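
    To round out the picture from the consumer side, a sketch of querying the feed with the WCF Data Services client library. It assumes a service reference has been added for DataService.svc, which generates a DataServiceContext-derived class named after the entity container (NorthwindEntities); the namespace and port below are placeholders:

        using System;
        using System.Linq;
        using NorthwindReports.NorthwindService;   // assumed service reference namespace

        class FeedClient
        {
            static void Main()
            {
                var context = new NorthwindEntities(
                    new Uri("http://localhost:1234/DataService.svc"));

                // The client library translates this LINQ query into the OData URL
                // .../Products?$filter=CategoryID eq 2&$orderby=ProductName
                var beverages = from p in context.Products
                                where p.CategoryID == 2
                                orderby p.ProductName
                                select p;

                foreach (var product in beverages)
                    Console.WriteLine("{0} - {1}", product.ProductName, product.UnitPrice);
            }
        }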

    Read the article

  • Michael Crump's notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5

    - by mbcrump
    TIME TO GO PRO! These are my notes for 70-563 PRO – Designing and Developing Windows Applications using .NET Framework 3.5. I created them using several resources (various certification web sites, MSDN, and the official MS 70-548 book). The reasons I created this review are: a) I am taking the exam; b) MS did not create a book for this exam – use the (MS 70-548) book; c) to make sure I am familiar with each topic before the exam. I hope it provides a good start for your own notes and that someone finds it useful. At least it will give you a starting point of what to expect to know on the PRO exam. Also, for those wondering, the PRO exam contains very little code; it is basically all theory.
    1. Validation Controls – how to prevent users from entering invalid data on forms (MaskedTextBox control and RegEx).
    2. ServiceController – used to start and control the behavior of existing services.
    3. User feedback (know WinForms Status Bar, Tool Tips, Color, Error Provider, Context-Sensitive Help and Accessibility).
    4. Specific (derived) exceptions must be handled before general (base class) exceptions. By moving the exception handling for the base type Exception to after the exception handling of ArgumentNullException, all ArgumentNullExceptions thrown by the Helper method will be caught and logged correctly (a short illustration follows after this list).
    5. A heartbeat method is a method exposed by a Web service that allows external applications to check on the status of the service.
    6. New users must master key tasks quickly. Giving these tasks context and appropriate detail will help. However, advanced users will demand quicker paths. Shortcuts, accelerators, or toolbar buttons will speed things along for the advanced user.
    7. MSBuild uses project files to instruct the build engine what to build and how to build it. MSBuild project files are XML files that adhere to the MSBuild XML schema. The MSBuild project files contain complete file, build action, and dependency information for each individual project.
    8. Evaluating whether or not to fix a bug involves a triage process. You must identify the bug's impact, set the priority, categorize it, and assign a developer. Many times the person doing the triage work will assign the bug to a developer for further investigation. In fact, the workflow for the bug work item inside of Team System supports this step. Developers are often asked to assess the impact of a given bug. This assessment helps the person doing the triage make a decision on how to proceed. When assessing the impact of a bug, you should consider the time and resources to fix it, bug risk, and impacts of the bug.
    9. In large projects it is generally impossible and unfeasible to fix all bugs because of the impact on schedule and budget.
    10. Code reviews should be conducted by a technical lead or a technical peer.
    11. Testing applications
    12. WCF Services – application state
    13. SQL Server 2005 / 2008 Express Edition – reliable storage of data / Microsoft SQL Server 3.5 Compact Database – used for client computers to retrieve and save data from a shared location.
    14. SQL Server 2008 Compact Edition – used for minimum possible memory and can synchronize data with a corporate SQL Server 2008 database. Supports offline users and minimum dependency on external components.
    15. MDI and SDI forms (specifically IsMDIContainer)
    16. GUID – in the case of data warehousing, it is important to define unique keys.
    17. Encrypting / securing data
    18. Understanding of Isolated Storage / proper location to store items
    19. LINQ to SQL
    20. Multithreaded access
    21. ADO.NET Entity Framework model
    22. Marshal.ReleaseComObject
    23. Common user interface layout (ComboBox, ListBox, ListView, MaskedTextBox, TextBox, RichTextBox, SplitContainer, TableLayoutPanel, TabControl)
    24. DataSet class - http://msdn.microsoft.com/en-us/library/system.data.dataset%28VS.71%29.aspx
    25. SQL Server 2008 Reporting Services (SSRS)
    26. SystemIcons.Shield (Vista UAC)
    27. Leveraging stored procedures to perform data manipulation for a database schema that can change.
    28. DataContext
    29. Microsoft Windows Installer packages, ClickOnce (bootstrapping features), XCopy.
    30. Client Application Services – will authenticate users by using the same data source as an ASP.NET web application.
    31. SQL Server 2008 caching
    32. StringBuilder
    33. Accessibility Guidelines for Windows Applications http://msdn.microsoft.com/en-us/library/ms228004.aspx
    34. Logging errors
    35. Testing performance-related issues.
    36. Role-based security, GenericIdentity and GenericPrincipal
    37. System.Net.CookieContainer will store session data for web apps (see isolated storage for WinForms)
    38. The .NET CLR Profiler tool will identify objects that cause performance issues.
    39. ADO.NET Synchronization (SyncGroup)
    40. Globalization - CultureInfo
    41. IDisposable interface - expect several questions relating to this.
    42. Adding timestamps to determine whether data has changed or not.
    43. Converting applications to .NET Framework 3.5
    44. MicrosoftReportViewer
    45. Composite controls
    46. Windows Vista known folders.
    47. Microsoft Sync Framework
    48. TypeConverter - provides a unified way of converting types of values to other types, as well as for accessing standard values and subproperties. http://msdn.microsoft.com/en-us/library/system.componentmodel.typeconverter.aspx
    49. Concurrency control mechanisms. The main categories are: Optimistic – delay the checking of whether a transaction meets the isolation rules (e.g., serializability and recoverability) until its end, without blocking any of its (read, write) operations, and then abort the transaction if the desired rules are violated. Pessimistic – block operations of a transaction if they may cause violation of the rules. Semi-optimistic – block operations in some situations, and do not block in other situations, while delaying rules checking to the transaction's end, as done with optimistic.
    50. AutoResetEvent
    51. Microsoft Message Queuing (MSMQ) 4.0
    52. Bulk imports
    53. KeyDown event of controls
    54. WPF UI components
    55. UI process layer
    56. GAC (installing, removing and querying)
    57. Use a local database cache to reduce the network bandwidth used by applications.
    58. Sound can easily be annoying and distracting to users, so use it judiciously. Always give users the option to turn sound off. Because a user might have sound off, never convey important information through sound alone.
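    As a quick illustration of note 4 above, here is a minimal, hypothetical example (the Helper method and the console "logging" are invented for illustration, not taken from the exam material) showing why the more specific ArgumentNullException handler has to come before the general Exception handler:
    using System;

    class ExceptionOrderingDemo
    {
        // Hypothetical helper that throws a derived exception type.
        static void Helper(string value)
        {
            if (value == null)
                throw new ArgumentNullException("value");
        }

        static void Main()
        {
            try
            {
                Helper(null);
            }
            catch (ArgumentNullException ex)   // specific (derived) exception handled first
            {
                Console.WriteLine("Logged missing argument: " + ex.ParamName);
            }
            catch (Exception ex)               // general (base class) exception handled last
            {
                Console.WriteLine("Logged unexpected error: " + ex.Message);
            }
            // Putting the Exception handler first would hide the ArgumentNullException handler;
            // in C# the compiler rejects that ordering outright (error CS0160).
        }
    }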

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
    Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening, and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.
    Scene setting
    A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not, and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS Lookup component (when using the FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time, which is why an SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious, ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution, which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have, the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.
    General tips
    Here is a simple tick list you can follow in order to tune your lookups:
    - Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
    - Only pick the columns that you need, ignore everything else.
    - Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
    - Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR, so if you can use DT_STR, consider doing so. The same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
    - Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!
    Thinking outside the box
    It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little, and here are a few ideas to get your creative juices flowing:
    - There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource-intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
    - Know your data. If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data, then consider chaining multiple Lookup components together; the first would use a FullCache containing that data subset, and the remaining data that doesn't find a match could be passed to a second Lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
    - Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId], then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement, you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop.
    - Taking the previous data partitioning idea further … a dataflow can contain more than one data path, so why not split your data using a Conditional Split component and, again, charge your lookup caches with only the data that they need for that partition.
    - Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
    - Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
    - Above all, experiment and be creative with different combinations. You may be surprised at what works.
    Final thoughts
    If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT, then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet

    Read the article

  • ruby on rails configuration

    - by Themasterhimself
    I'm using the following guide for getting started with Rails on Ubuntu 9.10: http://guides.rails.info/getting_started.html I have installed both ruby and gem. gokul@gokul-laptop:~$ ruby -v ruby 1.8.7 (2009-06-12 patchlevel 174) [i486-linux] gokul@gokul-laptop:~$ gem -v 1.3.6 gokul@gokul-laptop:~$ For rails, gokul@gokul-laptop:~$ sudo gem install rails doesn't seem to give any response, so I used the Synaptic package manager to install it, and it seems to have installed correctly. gokul@gokul-laptop:~$ rails Usage: /usr/bin/rails /path/to/your/app [options] Options: -r, --ruby=path Path to the Ruby binary of your choice (otherwise scripts use env, dispatchers current path). Default: /usr/bin/ruby1.8 -d, --database=name Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite2/sqlite3/frontbase/ibm_db). Default: sqlite3 -D, --with-dispatchers Add CGI/FastCGI/mod_ruby dispatches code to generated application skeleton Default: false --freeze Freeze Rails in vendor/rails from the gems generating the skeleton Default: false -m, --template=path Use an application template that lives at path (can be a filesystem path or URL). Default: (none) Rails Info: -v, --version Show the Rails version number and quit. -h, --help Show this help message and quit. General Options: -p, --pretend Run but do not make any changes. -f, --force Overwrite files that already exist. -s, --skip Skip files that already exist. -q, --quiet Suppress normal output. -t, --backtrace Debugging: show backtrace on errors. -c, --svn Modify files with subversion. (Note: svn must be in path) -g, --git Modify files with git. (Note: git must be in path) Description: The 'rails' command creates a new Rails application with a default directory structure and configuration at the path you specify. Example: rails ~/Code/Ruby/weblog This generates a skeletal Rails installation in ~/Code/Ruby/weblog. See the README in the newly created application to get going. gokul@gokul-laptop:~$ The app folder is created with all the proper folders. The problem starts with the following commands... gokul@gokul-laptop:~$ sudo gem install bundler [sudo] password for gokul: Successfully installed bundler-0.9.24 1 gem installed Installing ri documentation for bundler-0.9.24... Installing RDoc documentation for bundler-0.9.24... gokul@gokul-laptop:~$ bundle install Could not locate Gemfile gokul@gokul-laptop:~$ Coming to the database, the default sqlite3 seems to have installed correctly. gokul@gokul-laptop:~$ sqlite3 SQLite version 3.6.16 Enter ".help" for instructions Enter SQL statements terminated with a ";" sqlite> The 'Welcome aboard' page cannot be found at http://localhost:3000 after executing the following commands...
gokul@gokul-laptop:~/Desktop$ rails blog create create app/controllers create app/helpers create app/models create app/views/layouts create config/environments create config/initializers create config/locales create db create doc create lib create lib/tasks create log create public/images create public/javascripts create public/stylesheets create script/performance create test/fixtures create test/functional create test/integration create test/performance create test/unit create vendor create vendor/plugins create tmp/sessions create tmp/sockets create tmp/cache create tmp/pids create Rakefile create README create app/controllers/application_controller.rb create app/helpers/application_helper.rb create config/database.yml create config/routes.rb create config/locales/en.yml create db/seeds.rb create config/initializers/backtrace_silencers.rb create config/initializers/inflections.rb create config/initializers/mime_types.rb create config/initializers/new_rails_defaults.rb create config/initializers/session_store.rb create config/environment.rb create config/boot.rb create config/environments/production.rb create config/environments/development.rb create config/environments/test.rb create script/about create script/console create script/dbconsole create script/destroy create script/generate create script/runner create script/server create script/plugin create script/performance/benchmarker create script/performance/profiler create test/test_helper.rb create test/performance/browsing_test.rb create public/404.html create public/422.html create public/500.html create public/index.html create public/favicon.ico create public/robots.txt create public/images/rails.png create public/javascripts/prototype.js create public/javascripts/effects.js create public/javascripts/dragdrop.js create public/javascripts/controls.js create public/javascripts/application.js create doc/README_FOR_APP create log/server.log create log/production.log create log/development.log create log/test.log gokul@gokul-laptop:~/Desktop$ cd blog gokul@gokul-laptop:~/Desktop/blog$ rake db:create (in /home/gokul/Desktop/blog) gokul@gokul-laptop:~/Desktop/blog$ rails server create create app/controllers create app/helpers create app/models create app/views/layouts create config/environments create config/initializers create config/locales create db create doc create lib create lib/tasks create log create public/images create public/javascripts create public/stylesheets create script/performance create test/fixtures create test/functional create test/integration create test/performance create test/unit create vendor create vendor/plugins create tmp/sessions create tmp/sockets create tmp/cache create tmp/pids create Rakefile create README create app/controllers/application_controller.rb create app/helpers/application_helper.rb create config/database.yml create config/routes.rb create config/locales/en.yml create db/seeds.rb create config/initializers/backtrace_silencers.rb create config/initializers/inflections.rb create config/initializers/mime_types.rb create config/initializers/new_rails_defaults.rb create config/initializers/session_store.rb create config/environment.rb create config/boot.rb create config/environments/production.rb create config/environments/development.rb create config/environments/test.rb create script/about create script/console create script/dbconsole create script/destroy create script/generate create script/runner create script/server create script/plugin create script/performance/benchmarker create 
script/performance/profiler create test/test_helper.rb create test/performance/browsing_test.rb create public/404.html create public/422.html create public/500.html create public/index.html create public/favicon.ico create public/robots.txt create public/images/rails.png create public/javascripts/prototype.js create public/javascripts/effects.js create public/javascripts/dragdrop.js create public/javascripts/controls.js create public/javascripts/application.js create doc/README_FOR_APP create log/server.log create log/production.log create log/development.log create log/test.log gokul@gokul-laptop:~/Desktop/blog$ Hope someone can help me with this...

    Read the article

  • Listing common SQL Code Smells.

    - by Phil Factor
    Once you've done a number of SQL code reviews, you'll know those signs in the code that all might not be well. These 'Code Smells' are coding styles that don't directly cause a bug, but are indicators that all is not well with the code. Kent Beck and Massimo Arnoldi seem to have coined the phrase in the "OnceAndOnlyOnce" page of www.C2.com, where Kent also said that code "wants to be simple". Bad Smells in Code was an essay by Kent Beck and Martin Fowler, published as Chapter 3 of the book 'Refactoring: Improving the Design of Existing Code' (ISBN 978-0201485677). Although there are generic code smells, SQL has its own particular coding habits that will alert the programmer to the need to re-factor what has been written. See Exploring Smelly Code and Code Deodorants for Code Smells by Nick Harrison for a grounding in code smells in C#. I've always been tempted by the idea of automating a preliminary code review for SQL. It would be so useful to trawl through code and pick up the various problems, much like the classic 'Lint' did for C, and how the Code Metrics plug-in for .NET Reflector by Jonathan 'Peli' de Halleux is used for finding code smells in .NET code. The problem is that few of the standard procedural code smells are relevant to SQL, and we need an agreed list of code smells. Merrill Aldrich made a grand start last year in his blog Top 10 T-SQL Code Smells. However, I'd like to make a start by discovering if there is a general opinion amongst database developers of what the most important SQL smells are.
    One can be a bit defensive about code smells. I will cheerfully write very long stored procedures, even though they are frowned on. I'll use dynamic SQL occasionally. You can only use them as an aid for your own judgment and it is fine to 'sign them off' as being appropriate in particular circumstances. Also, whole classes of 'code smells' may be irrelevant for a particular database. The use of proprietary SQL, for example, is only a 'code smell' if there is a chance that the database will have to be ported to another RDBMS. The use of dynamic SQL is a risk only with certain security models. As the saying goes, a code smell is a hint of possible bad practice to a pragmatist, but a sure sign of bad practice to a purist. Plamen Ratchev's wonderful article Ten Common SQL Programming Mistakes lists some of these 'code smells' along with out-and-out mistakes, but there are more. The use of nested transactions, for example, isn't entirely incorrect, even though the database engine ignores all but the outermost: but it does flag up the possibility that the programmer thinks that nested transactions are supported. If anything requires some sort of general agreement, the definition of code smells is one. I'm therefore going to make this blog 'dynamic', in that, if anyone twitters a suggestion with a #SQLCodeSmells tag (or sends me a twitter), I'll update the list here. If you add a comment to the blog with a suggestion of what should be added or removed, I'll do my best to oblige. In other words, I'll try to keep this blog up to date. The name against each 'smell' is the name of the person who Twittered me, commented about or who has written about the 'smell'. It does not imply that they were the first ever to think of the smell!
    - Use of deprecated syntax such as *= (Dave Howard)
    - Denormalisation that requires the shredding of the contents of columns. (Merrill Aldrich)
    - Contrived interfaces
    - Use of deprecated datatypes such as TEXT/NTEXT (Dave Howard)
    - Datatype mis-matches in predicates that rely on implicit conversion. (Plamen Ratchev)
    - Using correlated subqueries instead of a join (Dave_Levy/Plamen Ratchev)
    - The use of hints in queries, especially NOLOCK (Dave Howard/Mike Reigler)
    - Few or no comments.
    - Use of functions in a WHERE clause. (Anil Das)
    - Overuse of scalar UDFs (Dave Howard, Plamen Ratchev)
    - Excessive 'overloading' of routines.
    - The use of Exec xp_cmdShell (Merrill Aldrich)
    - Excessive use of brackets. (Dave Levy)
    - Lack of the use of a semicolon to terminate statements
    - Use of non-SARGable functions on indexed columns in predicates (Plamen Ratchev)
    - Duplicated code, or strikingly similar code.
    - Misuse of SELECT * (Plamen Ratchev)
    - Overuse of cursors (Everyone. Special mention to Dave Levy & Adrian Hills)
    - Overuse of CLR routines when not necessary (Sam Stange)
    - Same column name in different tables with different datatypes. (Ian Stirk)
    - Use of 'broken' functions such as 'ISNUMERIC' without additional checks.
    - Excessive use of the WHILE loop (Merrill Aldrich)
    - INSERT ... EXEC (Merrill Aldrich)
    - The use of stored procedures where a view is sufficient (Merrill Aldrich)
    - Not using two-part object names (Merrill Aldrich)
    - Using INSERT INTO without specifying the columns and their order (Merrill Aldrich)
    - Full outer joins even when they are not needed. (Plamen Ratchev)
    - Huge stored procedures (hundreds/thousands of lines).
    - Stored procedures that can produce different columns, or order of columns in their results, depending on the inputs.
    - Code that is never used.
    - Complex and nested conditionals
    - WHILE (not done) loops without an error exit.
    - Variable name same as the datatype
    - Vague identifiers.
    - Storing complex data or lists in a character map, bitmap or XML field
    - User procedures with sp_ prefix (Aaron Bertrand)
    - Views that reference views that reference views that reference views (Aaron Bertrand)
    - Inappropriate use of sql_variant (Neil Hambly)
    - Errors with identity scope using SCOPE_IDENTITY, @@IDENTITY or IDENT_CURRENT (Neil Hambly, Aaron Bertrand)
    - Schemas that involve multiple dated copies of the same table instead of partitions (Matt Whitfield-Atlantis UK)
    - Scalar UDFs that do data lookups (poor man's join) (Matt Whitfield-Atlantis UK)
    - Code that allows SQL injection (Mladen Prajdic)
    - Tables without clustered indexes (Matt Whitfield-Atlantis UK)
    - Use of "SELECT DISTINCT" to mask a join problem (Nick Harrison)
    - Multiple stored procedures with nearly identical implementation. (Nick Harrison)
    - Excessive column aliasing may point to a problem or it could be a mapping implementation. (Nick Harrison)
    - Joining "too many" tables in a query. (Nick Harrison)
    - Stored procedure returning more than one record set. (Nick Harrison)
    - A NOT LIKE condition (Nick Harrison)
    - Excessive "OR" conditions. (Nick Harrison)
    - sp_OACreate or anything related to it (Bill Fellows)
    - Prefixing names with tbl_, vw_, fn_, and usp_ ('tibbling') (Jeremiah Peschka)
    - Aliases that go a,b,c,d,e... (Dave Levy/Diane McNurlan)
    - Overweight queries (e.g. 4 inner joins, 8 left joins, 4 derived tables, 10 subqueries, 8 clustered GUIDs, 2 UDFs, 6 case statements = 1 query) (Robert L Davis)
    - ORDER BY 3,2 (Dave Levy)
    - Multi-statement table functions which are then filtered 'Sel * from Udf() where Udf.Col = Something' (Dave Ballantyne)
    - Running a SQL 2008 system in SQL 2000 compatibility mode (John Stafford)

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure. SQL Azure is nothing but a SQL Server in the cloud. SQL Azure provides benefits such as on-demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. To see an introduction to SQL Azure, check out the post by Pinal here.
    In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently, or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a Windows service in the cloud. An Azure worker role can perform background processes, and to handle processes such as synchronization and backup, it becomes our ideal tool.
    First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let's see some scenarios in which synchronization between SQL Azure databases is beneficial:
    - Scaling out access over multiple databases enables us to handle workload efficiently
    - As of now, a SQL Azure database can be hosted in any one of six datacenters. By synchronizing databases located in different datacenters, one can extend the data by enabling access to geographically distributed data
    Let us also see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:
    - To back up a SQL Azure database on local infrastructure
    - Rather than investing in local infrastructure for increased workloads, such workloads could be handled by the cloud
    - The ability to extend data to different datacenters located across the world to enable efficient data access from remote locations
    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud-based apps, click here. In this article, I aim to provide a bird's eye view of what code that synchronizes SQL Azure databases looks like, and then list resources that can help you develop the solution from scratch. Now, if you newly add a worker role to the cloud-based project, this is what the code will look like. (Note: I have added comments to the skeleton code to point out the modifications that will be required in the code to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf )
    Enabling SQL Azure database synchronization through the Sync Framework is a two-step process. In the first step, the database is provisioned and the Sync Framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is a one-time step. The code for this is put in the Setup() function, which is called once when the worker role starts. The second step is continuous (or on-demand) synchronization of SQL Azure databases by propagating changes between databases. This is done on a continuous basis by calling the Sync() function in the while loop. The code logic to synchronize changes between SQL Azure databases should be put in the Sync() function. Discussing the coding part step by step is out of the scope of this article, but a simplified sketch of this overall shape follows below.
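    To make the placement easier to picture, here is a heavily simplified, hypothetical sketch of that shape. Only the Setup()/Sync() split and the while loop come from the article; everything else (class name, sleep interval, empty method bodies) is an assumption, and the real provisioning and synchronization logic belongs in the linked code snippet and the Sync Framework SDK.
    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class SyncWorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            // Step 1 (one-time): provision the databases - tracking tables, triggers,
            // stored procedures and metadata - when the worker role starts.
            Setup();

            while (true)
            {
                // Step 2 (continuous): propagate changes between the SQL Azure databases.
                Sync();

                // Arbitrary pause between synchronization passes; a scheduled variant
                // would instead check the clock here (e.g. run daily at 10:00 pm).
                Thread.Sleep(TimeSpan.FromMinutes(10));
            }
        }

        private void Setup()
        {
            // Placeholder: Sync Framework provisioning calls go here.
        }

        private void Sync()
        {
            // Placeholder: Sync Framework orchestration calls go here.
        }
    }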
    For the step-by-step coding details, let me suggest a resource, which is given here. Also, note that before you start developing the code, you will need to install the Sync Framework 2.1 SDK (download here). Further, you will reference some libraries before you start coding; details regarding these are available in the article that I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here.
    Currently, a tool named Data Sync, which is built on top of the Sync Framework, is available in CTP that allows SQL Azure <-> SQL Server and SQL Azure <-> SQL Azure synchronization (without writing a single line of code); however, in some cases, the custom code shown in this blog post provides flexibility that is not available with Data Sync. For instance, filtering is not supported in the SQL Azure Data Sync CTP2; if you wish to have such functionality now, then you have the option of developing custom code using the Sync Framework.
    Now, this code can be easily extended to synchronize on some schedule. Let us say we want the databases to get synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf)
    Don't you think that by writing such code, we are imitating the functionality provided by the SQL Server Agent for a SQL Server? Think about it. We are scheduling our administrative task by writing custom code – in other words, we have developed a "lightweight SQL Server Agent for SQL Azure"! Since the SQL Server Agent is not currently available in the cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! Now if you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, and we will need to store the state of the job ourselves; the choices you have are SQL Azure or Azure tables. Note that this solution requires custom code and also it is not UI driven; however, for now, it can act as a temporary solution until the SQL Server Agent is made available in the cloud. Moreover, this solution does not encompass all the functionality that the SQL Server Agent provides, but it does open up an interesting avenue to schedule some tasks, such as backup and synchronization of SQL Azure databases, by writing some custom code in the Azure worker role.
    Now, let us see one more possibility – i.e., running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or to blobs. If you upload locally, then consider the data transfer cost. If you upload to blobs residing in the same datacenter, then no transfer cost applies, but the cost of the blob storage applies. So, before choosing an option, you need to evaluate your preferences keeping the cost associated with each option in mind.
    In this article, I have shown that an Azure worker role solution could be developed to synchronize SQL Azure databases. Moreover, a lightweight SQL Server Agent for SQL Azure can be developed. We also discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. So at the end of the day, you need to ask – am I willing to build custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or you can mail me at Paras[at]student-partners[dot]com Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL Monitor’s data repository

    - by Chris Lambrou
    As one of the developers of SQL Monitor, I often get requests passed on by our support people from customers who are looking to dip into SQL Monitor's own data repository, in order to pull out bits of information that they're interested in. Since there's clearly interest out there in playing around directly with the data repository, I thought I'd write some blog posts to start to describe how it all works. The hardest part for me is knowing where to begin, since the schema of the data repository is pretty big. Hmmm… I guess it's tricky for anyone to write anything but the most trivial of queries against the data repository without understanding the hierarchy of monitored objects, so perhaps my first post should start there. I always imagine that whenever a customer fires up SSMS and starts to explore their SQL Monitor data repository database, they become immediately bewildered by the schema – that was certainly my experience when I did so for the first time. The following query shows the number of different object types in the data repository schema:
    SELECT type_desc, COUNT(*) AS [count] FROM sys.objects GROUP BY type_desc ORDER BY type_desc;
    type_desc                         count
    DEFAULT_CONSTRAINT                63
    FOREIGN_KEY_CONSTRAINT            181
    INTERNAL_TABLE                    3
    PRIMARY_KEY_CONSTRAINT            190
    SERVICE_QUEUE                     3
    SQL_INLINE_TABLE_VALUED_FUNCTION  381
    SQL_SCALAR_FUNCTION               2
    SQL_STORED_PROCEDURE              100
    SYSTEM_TABLE                      41
    UNIQUE_CONSTRAINT                 54
    USER_TABLE                        193
    VIEW                              124
    With 193 tables, 124 views, 100 stored procedures and 381 table valued functions, that's quite a hefty schema, and when you browse through it using SSMS, it can be a bit daunting at first. So, where to begin? Well, let's narrow things down a bit and only look at the tables belonging to the data schema. That's where all of the collected monitoring data is stored by SQL Monitor. The following query gives us the names of those tables:
    SELECT sch.name + '.' + obj.name AS [name] FROM sys.objects obj JOIN sys.schemas sch ON sch.schema_id = obj.schema_id WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' ORDER BY sch.name, obj.name;
    This query still returns 110 tables. I won't show them all here, but let's have a look at the first few of them:
    data.Cluster_Keys
    data.Cluster_Machine_ClockSkew_UnstableSamples
    data.Cluster_Machine_Cluster_StableSamples
    data.Cluster_Machine_Keys
    data.Cluster_Machine_LogicalDisk_Capacity_StableSamples
    data.Cluster_Machine_LogicalDisk_Keys
    data.Cluster_Machine_LogicalDisk_Sightings
    data.Cluster_Machine_LogicalDisk_UnstableSamples
    data.Cluster_Machine_LogicalDisk_Volume_StableSamples
    data.Cluster_Machine_Memory_Capacity_StableSamples
    data.Cluster_Machine_Memory_UnstableSamples
    data.Cluster_Machine_Network_Capacity_StableSamples
    data.Cluster_Machine_Network_Keys
    data.Cluster_Machine_Network_Sightings
    data.Cluster_Machine_Network_UnstableSamples
    data.Cluster_Machine_OperatingSystem_StableSamples
    data.Cluster_Machine_Ping_UnstableSamples
    data.Cluster_Machine_Process_Instances
    data.Cluster_Machine_Process_Keys
    data.Cluster_Machine_Process_Owner_Instances
    data.Cluster_Machine_Process_Sightings
    data.Cluster_Machine_Process_UnstableSamples
    …
    There are two things I want to draw your attention to:
    1. The table names describe a hierarchy of the different types of object that are monitored by SQL Monitor (e.g. clusters, machines and disks).
    2. For each object type in the hierarchy, there are multiple tables, ending in the suffixes _Keys, _Sightings, _StableSamples and _UnstableSamples.
    Not every object type has a table for every suffix, but the _Keys suffix is especially important and a _Keys table does indeed exist for every object type. In fact, if we limit the query to return only those tables ending in _Keys, we reveal the full object hierarchy:
    SELECT sch.name + '.' + obj.name AS [name] FROM sys.objects obj JOIN sys.schemas sch ON sch.schema_id = obj.schema_id WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' AND obj.name LIKE '%_Keys' ORDER BY sch.name, obj.name;
    data.Cluster_Keys
    data.Cluster_Machine_Keys
    data.Cluster_Machine_LogicalDisk_Keys
    data.Cluster_Machine_Network_Keys
    data.Cluster_Machine_Process_Keys
    data.Cluster_Machine_Services_Keys
    data.Cluster_ResourceGroup_Keys
    data.Cluster_ResourceGroup_Resource_Keys
    data.Cluster_SqlServer_Agent_Job_History_Keys
    data.Cluster_SqlServer_Agent_Job_Keys
    data.Cluster_SqlServer_Database_BackupType_Backup_Keys
    data.Cluster_SqlServer_Database_BackupType_Keys
    data.Cluster_SqlServer_Database_CustomMetric_Keys
    data.Cluster_SqlServer_Database_File_Keys
    data.Cluster_SqlServer_Database_Keys
    data.Cluster_SqlServer_Database_Table_Index_Keys
    data.Cluster_SqlServer_Database_Table_Keys
    data.Cluster_SqlServer_Error_Keys
    data.Cluster_SqlServer_Keys
    data.Cluster_SqlServer_Services_Keys
    data.Cluster_SqlServer_SqlProcess_Keys
    data.Cluster_SqlServer_TopQueries_Keys
    data.Cluster_SqlServer_Trace_Keys
    data.Group_Keys
    The full object type hierarchy looks like this:
    Cluster
      Machine
        LogicalDisk
        Network
        Process
        Services
      ResourceGroup
        Resource
      SqlServer
        Agent
          Job
            History
        Database
          BackupType
            Backup
          CustomMetric
          File
          Table
            Index
        Error
        Services
        SqlProcess
        TopQueries
        Trace
    Group
    Okay, but what about the individual objects themselves represented at each level in this hierarchy? Well, that's what the _Keys tables are for. This is probably best illustrated by way of a simple example – how can I query my own data repository to find the databases on my own PC for which monitoring data has been collected? Like this:
    SELECT clstr._Name AS cluster_name, srvr._Name AS instance_name, db._Name AS database_name
    FROM data.Cluster_SqlServer_Database_Keys db
    JOIN data.Cluster_SqlServer_Keys srvr ON db.ParentId = srvr.Id -- Note here how the parent of a Database is a Server
    JOIN data.Cluster_Keys clstr ON srvr.ParentId = clstr.Id -- Note here how the parent of a Server is a Cluster
    WHERE clstr._Name = 'dev-chrisl2' -- This is the hostname of my own PC
    ORDER BY clstr._Name, srvr._Name, db._Name;
    cluster_name  instance_name  database_name
    dev-chrisl2                  SqlMonitorData
    dev-chrisl2                  master
    dev-chrisl2                  model
    dev-chrisl2                  msdb
    dev-chrisl2                  mssqlsystemresource
    dev-chrisl2                  tempdb
    dev-chrisl2   sql2005        SqlMonitorData
    dev-chrisl2   sql2005        TestDatabase
    dev-chrisl2   sql2005        master
    dev-chrisl2   sql2005        model
    dev-chrisl2   sql2005        msdb
    dev-chrisl2   sql2005        mssqlsystemresource
    dev-chrisl2   sql2005        tempdb
    dev-chrisl2   sql2008        SqlMonitorData
    dev-chrisl2   sql2008        master
    dev-chrisl2   sql2008        model
    dev-chrisl2   sql2008        msdb
    dev-chrisl2   sql2008        mssqlsystemresource
    dev-chrisl2   sql2008        tempdb
    These results show that I have three SQL Server instances on my machine (a default instance, one named sql2005 and one named sql2008), and each instance has the usual set of system databases, along with a database named SqlMonitorData. Basically, this is where I test SQL Monitor on different versions of SQL Server, when I'm developing. There are a few important things we can learn from this query:
    1. Each _Keys table has a column named Id. This is the primary key.
    2. Each _Keys table has a column named ParentId. A foreign key relationship is defined between each _Keys table and its parent _Keys table in the hierarchy. There are two exceptions to this, Cluster_Keys and Group_Keys, because clusters and groups live at the root level of the object hierarchy.
    3. Each _Keys table has a column named _Name. This is used to uniquely identify objects in the table within the scope of the same shared parent object.
    Actually, that last item isn't always true. In some cases, the _Name column is actually called something else. For example, the data.Cluster_Machine_Services_Keys table has a column named _ServiceName instead of _Name (sorry for the inconsistency). In other cases, a name isn't sufficient to uniquely identify an object. For example, right now my PC has multiple processes running, all sharing the same name, Chrome (one for each tab open in my web browser). In such cases, multiple columns are used to uniquely identify an object within the scope of the same shared parent object.
    Well, that's it for now. I've given you enough information for you to explore the _Keys tables to see how objects are stored in your own data repositories. In a future post, I'll try to explain how monitoring data is stored for each object, using the _StableSamples and _UnstableSamples tables. If you have any questions about this post, or suggestions for future posts, just submit them in the comments section below.

    Read the article

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan
    Introduction and Architecture
    As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. Oracle recently announced the Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS), which allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:
    - Monitor EBS, EC2 and RDS instances on Amazon Web Services
    - Gather performance metrics and configuration details for AWS instances
    - Raise alerts and violations based on thresholds set on monitoring
    - Generate reports based on the gathered data
    Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and the users of this plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2.
    Here is a pictorial view of the overall architecture, covering Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS): [architecture diagram]
    Here are a few key features:
    - Rich and exhaustive list of metrics.
    - Metrics can be gathered from an Agent running outside AWS.
    - Critical configuration information.
    - Custom home pages with charts and AWS configuration information.
    - Generate incidents based on thresholds set on monitoring data.
    Discovery and Monitoring
    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:
    AWS Resource Type | Target Type | Resource-specific properties
    EBS Resource | Amazon EBS Service | CloudWatch base URI, EC2 Base URI, Period, Volume Id, Proxy Server and Port
    EC2 Resource | Amazon EC2 Service | CloudWatch base URI, EC2 Base URI, Period, Instance Id, Proxy Server and Port
    RDS Resource | Amazon RDS Service | CloudWatch base URI, RDS Base URI, Period, Instance Id, Proxy Server and Port
    Proxy server and port are optional and are only needed if the agent is within the firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.
    ./emcli add_target \
        -name="<target name>" \
        -type="AmazonEC2Service" \
        -host="<host>" \
        -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
        -subseparator=properties="="
    ./emcli set_monitoring_credential \
        -set_name="AWSKeyCredentialSet" \
        -target_name="<target name>" \
        -target_type="AmazonEC2Service" \
        -cred_type="AWSKeyCredential" \
        -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"
    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up in the 'All Targets' list under 'Amazon EC2 Service'. Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:
    Resource Type | Configuration Properties | Metrics
    EBS Resource | Volume Id, Volume Type, Device Name, Size, Availability Zone | Response: Status; Utilization: QueueLength, IdleTime; Volume Statistics: ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput; Operation Statistics: ReadSize, WriteSize, ReadLatency, WriteLatency
    EC2 Resource | Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone | Response: Status; CPU Utilization: CPU Utilization; Disk I/O: DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput; Network I/O: NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput
    RDS Resource | Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone | Response: Status; Disk I/O: ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput; DB Utilization: BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage
    Custom Home Pages
    As mentioned above, we have custom home pages for these target types that include basic configuration information, last-24-hours availability, top metrics and the incidents generated. Here are a few snapshots.
    EBS Instance Home Page: [screenshot]
    EC2 Instance Home Page: [screenshot]
    RDS Instance Home Page: [screenshot]
    Further Reading:
    1) AWS Plugin download
    2) Installation and Read Me
    3) Screenwatch on SlideShare
    4) Extensibility Programmer's Guide
    5) Amazon Web Services

    Read the article

  • Thank You for a Great Welcome for Oracle GoldenGate 11g Release 2

    - by Irem Radzik
    Yesterday morning we had two launch webcasts for Oracle GoldenGate 11g Release 2. I had the pleasure to present, as well as moderate the Q&A panels, in both of these webcasts. Both events had hundreds of live attendees, sending us over 150 questions. Even though we left 30 minutes for Q&A, it was not nearly enough time to address all the insightful questions our audience sent. Our product management team and I really appreciate the interaction we had yesterday, and we are starting to respond to the outstanding questions today. Oracle GoldenGate's new release launch also received a great welcome from the media. You can find the links for various articles on the new release below:
    - ITBusinessEdge: Oracle Embraces Cross-Platform Data Integration
    - InformationWeek: Oracle Real-Time Advance Taps Compressed Data
    - Integration Developer News: Oracle GoldenGate Adds Deeper Oracle Integration, Extends Real-Time Performance
    - CIO: Oracle GoldenGate Buddies Up with Sibling Software
    - DBTA: Real-Time Data Integration: Oracle GoldenGate 11g Release 2 Now Available
    - CBR: Oracle unveils GoldenGate 11g Release 2 real-time data integration application
    In this blog, I want to address some of the frequently asked questions that came up during the webcasts. You can find the top questions and their answers along with related resources below. We will continue to address frequently asked questions via future blogs.
    Q: Will the new Integrated Capture for Oracle Database replace the Classic Capture? If not, which one do I use when?
    A: No, Classic Capture will be around for a long time. Core platform-specific features, bug fixes, and patches will be available for both Capture processes. Oracle Database-specific features will only be available in the Integrated Capture. The Integrated Capture for Oracle Database is an option for users that need to capture data from compressed tables or need support for XML data types, or XA on RAC. Users who don't leverage these features should continue to use our Classic Capture. For more information on Oracle GoldenGate 11g Release 2, I recommend checking out the white paper Oracle GoldenGate 11gR2 New Features, as well as other technical white papers we have on OTN. For those of you coming to OpenWorld, please attend the related session: Extracting Data in Oracle GoldenGate Integrated Capture Mode, Monday Oct 1st 1:45pm, Moscone South – 102, to learn more about this new feature.
    Q: What is new in Conflict Detection and Resolution? And how does it work?
    A: There are now pre-built functions to identify the conditions under which an error occurs and how to handle the record when the condition occurs. Error conditions handled include inserts into a target table where the row already exists; updates or deletes to target table rows that exist, but where the original source data (before columns) does not match the existing data in the target row; and updates or deletes where the row does not exist in the target database table. For each of these conditions, a method to handle the error is specified. Please check out our recent blog on this topic and the Oracle GoldenGate 11gR2 New Features white paper. Also, for those attending OpenWorld, please attend the session: Best Practices for Conflict Detection and Resolution in Oracle GoldenGate for Active/Active – Wednesday Oct 3rd 3:30pm, Moscone 3000.
    Q: Do Oracle GoldenGate Veridata and the Management Pack require additional licenses, or are they incorporated with the GoldenGate license?
    A: Oracle GoldenGate Veridata and Oracle Management Pack for Oracle GoldenGate are additional products and require separate licenses. Please check out Oracle's price list here.
    Q: Does the GoldenGate Oracle Enterprise Manager Plug-in require an additional license?
    A: The Oracle Enterprise Manager Plug-in is included in the Oracle Management Pack for Oracle GoldenGate license, which is separate from the Oracle GoldenGate license. There is no separate license for the Enterprise Manager Plug-in by itself. Oracle GoldenGate Monitor, Oracle GoldenGate Director, and the Enterprise Manager Plug-in are included in the Management Pack for Oracle GoldenGate license. Please check out the Management Pack for Oracle GoldenGate data sheet for more info on this product bundle.
    Q: Is Oracle GoldenGate replacing the Oracle Streams product?
    A: Oracle GoldenGate is the strategic data replication product. Therefore, Oracle Streams will continue to be supported, but will not be actively enhanced. Rather, the best elements of Oracle Streams will be added to Oracle GoldenGate. Conflict management is one of them, and with the latest release Oracle GoldenGate has a more advanced conflict management offering. Current customers depending on Oracle Streams will continue to be fully supported.
    Q: How is Oracle GoldenGate different from Oracle Data Integrator?
    A: Oracle Data Integrator is designed for fast bulk data movement and transformation between heterogeneous systems, while GoldenGate is designed for real-time movement of transactions between heterogeneous systems. These two products are completely complementary: GoldenGate provides low-impact real-time change data capture and delivery to a staging area on the target, and Oracle Data Integrator transforms this data and loads the DW tables. In fact, Oracle Data Integrator integrates with GoldenGate to use GoldenGate's Capture process as one option for its CDC mechanism. We have several customers that deployed GoldenGate and ODI together to feed real-time data to their data warehousing solutions. Please also check out the Oracle Data Integrator Changed Data Capture with Oracle GoldenGate Data Sheet (PDF).
    Thank you again very much for welcoming Oracle GoldenGate 11g Release 2, and stay in touch with us for more exciting news, updates, and events.

    Read the article

  • Getting ready for Oracle Open World 2012: mark your calendars

    - by Eric Bezille
    With now less than a month to go before Oracle's major event, held as every year in San Francisco in late September and early October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes to be delivered by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development) to get a foretaste.

Oracle Strategy and Roadmaps

Beyond the plenary sessions, which will give you a clear view of the strategy, those attending on site should not miss the deep-dive sessions taking place during the week. Here is a selection:

"Accelerate your Business with the Oracle Hardware Advantage", with John Fowler, Monday October 1, 3:15pm-4:15pm
"Why Oracle Software Runs Best on Oracle Hardware", with Bradley Carlile, head of Benchmarks, Monday October 1, 12:15pm-1:15pm
"Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday October 1, 1:45pm-2:45pm
"Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, manager in the hardware and software integration teams, Monday October 1, 4:45pm-5:45pm
"Oracle's SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday October 2, 10:15am-11:15am
"Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flierl, head of Solaris development, Wednesday October 3, 10:15am-11:15am
"Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday October 1, 12:15pm-1:15pm
"Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday October 1, 3:15pm-4:15pm
"Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday October 1, 10:45am-11:45am

Customer Experiences and Testimonials

While Oracle OpenWorld is an opportunity to talk directly with Oracle's development teams, it is also a chance to exchange with customers and experts who have implemented our technologies and to benefit from their feedback, for example:

"Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM systems, on the benefits not only for the business but also for the project and IT, Wednesday October 3, 1:15pm-2:15pm
"Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday October 2, 11:45am-12:45pm
"Creating a Maximum Availability Architecture with SPARC SuperCluster", with a testimonial from Carte Wright, Database Engineer at CKI, Wednesday October 3, 11:45am-12:45pm
"Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with a testimonial from Stephen Schleiger, Manager Systems Engineering at Navis, Monday October 1, 1:45pm-2:45pm
"Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts involved in an implementation for a major banking customer, Monday October 1, 4:45pm-5:45pm
"Oracle Exadata Customer Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday October 2, 1:15pm-2:15pm
"Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with a testimonial from Alan Powers, CTO of CSC, Thursday October 4, 12:45pm-1:45pm
"Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect at US Cellular, Tuesday October 2, 10:15am-11:15am
"Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global, among others, Tuesday October 2, 11:45am-12:45pm
"Data Warehouse and Big Data Customers' View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday October 1, 4:45pm-5:45pm
"Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", a testimonial from Oracle's internal IT on the transformation and migration of our entire storage infrastructure, Tuesday October 2, 1:15pm-2:15pm

Meet the User Groups and the Oracle Development Teams

If you plan to arrive early enough, you can also meet the user groups as soon as Sunday, or the Oracle development teams every evening, on topics such as:

"To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday September 30, 2:30pm-3:30pm
"Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman of Rittman Mead Consulting Ltd, Sunday September 30, 10:30am-11:30am
"Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus, IT Infrastructure Managers at Telenet, Sunday September 30, 1:15pm-2:00pm
"The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director at Viscosity, Sunday September 30, 9:00am-10:00am
"Oracle's Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday October 1, 6:15pm-7:00pm

Test and Evaluate the Solutions

Finally, you can even try out the technologies at the Oracle DemoGrounds (1133 Moscone South for Oracle Systems, OS, and Virtualization) and in the Hands-on Labs, such as:

"Deploying an IaaS Environment with Oracle VM", Tuesday October 2, 10:15am-11:15am
"Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday October 2, 11:45am-12:45pm (it is strongly recommended to complete the previous Hands-on Lab before taking this one)
"x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", Wednesday October 3, 5:00pm-6:00pm
"StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", Wednesday October 3, 1:15pm-2:15pm
"Oracle's Pillar Axiom 600 Storage System: Power and Ease", Monday October 1, 12:15pm-1:15pm
"Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", Monday October 1, 1:45pm-2:45pm
"Managing Storage in the Cloud", Tuesday October 2, 5:00pm-6:00pm
"Learn How to Write MapReduce on Oracle's Big Data Platform", Monday October 1, 12:15pm-1:15pm
"Oracle Big Data Analytics and R", Tuesday October 2, 1:15pm-2:15pm
"Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", Monday October 1, 10:45am-11:45am
"Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", Monday October 1, 4:45pm-5:45pm
"Virtualizing Your Oracle Solaris 11 Environment", Tuesday October 2, 1:15pm-2:15pm
"Large-Scale Installation and Deployment of Oracle Solaris 11", Wednesday October 3, 3:30pm-4:30pm

In short, a very rich week ahead, letting you cover the full range of topics at the heart of your concerns, from strategy to implementation... It is a week worth preparing for: tailor your agenda from the more than 2,000 sessions, of which the above is only an excerpt, and all of which you can find online.

