Search Results

Search found 54 results on 3 pages for 'numa'.


  • Free vouchers for implementation exams (SOA, E2.0, etc.)

    - by pfolgado
    Would you like to receive free vouchers for the Implementation exams? It's easy! Register with one of the EMEA Partner Communities. Most of these communities offer their members free vouchers for the exams on the products they cover. For example, members of the SOA Partner Community can obtain free vouchers for the SOA and BPM implementation exams. For more information on the Oracle Partner Communities, see the topic contacts below:
    Applications & Systems Management: Javier Puerta
    Business Intelligence & Enterprise Performance Management: Mike Hallett
    Communications: Paul Thompson
    CRM On Demand: Paul Thompson
    Enterprise 2.0 (previously "Content Management"): Hans Blaas
    Exadata: Javier Puerta
    Healthcare: Paul Thompson
    Identity Management & Security: Wolfgang Ehrenthaler
    Manufacturing, Retail, Distribution and Life Science (MRD/LS): Paul Thompson
    Public Sector: Paul Thompson
    SOA / Integration: Jürgen Kress

    Read the article

  • How does _mm_mwait work?

    - by osgx
    Hello. How does _mm_mwait from pmmintrin.h work? (I mean not the asm for it, but the action itself, and how that action is carried out on NUMA systems. Store monitoring is easy to implement only on bus-based SMP systems with bus snooping.) Which processors implement it? Is it used in some spinlocks?
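    For context, here is a minimal C sketch of the classic MONITOR/MWAIT idiom these intrinsics wrap. This is a hedged illustration, not kernel source: the instructions are privileged (ring 0) on most x86 OSes, so plain user-space code would normally fault, and wake_flag is a hypothetical flag set by another core.

        #include <pmmintrin.h>   /* _mm_monitor, _mm_mwait (SSE3) */

        static volatile int wake_flag;   /* hypothetical flag set by another core */

        static void wait_for_flag(void)
        {
            while (!wake_flag) {
                /* Arm the monitor on the address range holding wake_flag. */
                _mm_monitor((void const *)&wake_flag, 0, 0);
                /* Re-check after arming to close the lost-wakeup window. */
                if (wake_flag)
                    break;
                /* Sleep until a write hits the monitored range (or an interrupt). */
                _mm_mwait(0, 0);
            }
        }

    The monitored range is typically a cache line (CPUID reports the exact size), and on multi-socket NUMA systems the wakeup write is observed through the regular cache-coherence protocol rather than bus snooping alone.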

    Read the article

  • atomic operation cost

    - by osgx
    Hello. What is the cost of an atomic operation? How many cycles does it consume? Will it pause other processors on SMP or NUMA systems, or will it block memory accesses? Will it flush the reorder buffer in an out-of-order CPU? What effects will it have on the cache? Thanks.
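    A rough way to answer this for a particular machine is to measure it. Below is a minimal C11 sketch using the TSC; treat it as a probe, not a benchmark, since the cost varies heavily with contention, cache-line ownership and CPU model, and the uncontended case shown is the cheap one.

        #include <stdatomic.h>
        #include <stdio.h>
        #include <x86intrin.h>   /* __rdtsc */

        int main(void)
        {
            enum { N = 1000000 };
            atomic_long a = 0;
            volatile long p = 0;   /* volatile so the plain loop isn't optimized away */

            unsigned long long t0 = __rdtsc();
            for (int i = 0; i < N; i++)
                atomic_fetch_add(&a, 1);   /* a LOCKed read-modify-write on x86 */
            unsigned long long t1 = __rdtsc();
            for (int i = 0; i < N; i++)
                p = p + 1;                 /* plain, non-atomic increment */
            unsigned long long t2 = __rdtsc();

            printf("atomic: ~%llu cycles/op, plain: ~%llu cycles/op\n",
                   (t1 - t0) / N, (t2 - t1) / N);
            return 0;
        }

    On x86, a LOCKed instruction does not pause other processors; it holds the cache line exclusively for the duration and acts as a full memory barrier, which is what makes it more expensive than the plain increment.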

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why Start Now?
    The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM.
    Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today.
    It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why Wait?
    Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA? Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

    Limitations
    Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new "401 - Internals" tag that identifies posts where I pull back the curtains a bit and let you peek into what's going on inside the Extended Events engine.
    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY option along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:
    max memory / # of buffers = buffer size
    If it were that simple I wouldn't be writing this post.
    I'll take "boundary" for 64K, Alex
    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs.
    Note: This test was run on a 2-core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)
    input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
    637  5  130867  654335
    638  5  130867  654335
    639  5  130867  654335
    640  5  196403  982015
    641  5  196403  982015
    642  5  196403  982015
    This is just a segment of the results that shows one of the "jumps" between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:
    196403 - 130867 = 65536 bytes
    65536 / 1024 = 64 KB
    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a "range table" for max_memory settings between 1 KB and 4096 KB as an example.
    start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
    1     191   NULL  NULL     NULL
    192   383   3     130867   392601
    384   575   3     196403   589209
    576   767   3     261939   785817
    768   959   3     327475   982425
    960   1151  3     393011   1179033
    1152  1343  3     458547   1375641
    1344  1535  3     524083   1572249
    1536  1727  3     589619   1768857
    1728  1919  3     655155   1965465
    1920  2111  3     720691   2162073
    2112  2303  3     786227   2358681
    2304  2495  3     851763   2555289
    2496  2687  3     917299   2751897
    2688  2879  3     982835   2948505
    2880  3071  3     1048371  3145113
    3072  3263  3     1113907  3341721
    3264  3455  3     1179443  3538329
    3456  3647  3     1244979  3734937
    3648  3839  3     1310515  3931545
    3840  4031  3     1376051  4128153
    4032  4096  3     1441587  4324761
    As you can see, there are 21 "steps" within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.
    Max approximates True as memory approaches 64K
    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really only refers to the event session buffer memory) but is more of an estimate of total buffer size, to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get "stepped up" for every 191 KB block of initial max_memory, which isn't likely to cause a problem for most machines.
    Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets "stepped up" when you break a boundary, the delta can get much larger because it's multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you've just broken through a 64K boundary and get "stepped up" to the next buffer size, you'll end up with a total buffer size approximately 10240 KB and 1536 KB, respectively, larger (64K * # of buffers) than the max_memory value you might think you're getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created.
    If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers), you'll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The "buffer steps" for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions, you can use the following code to generate a range table similar to the one above that is applicable to your specific machine and chosen partition mode.
    DECLARE @buf_size_output table (
        input_memory_kb bigint,
        total_regular_buffers bigint,
        regular_buffer_size bigint,
        total_buffer_size bigint
    )
    DECLARE @buf_size int, @part_mode varchar(8)
    SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
    SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate

    WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
    BEGIN
        BEGIN TRY
            IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                DROP EVENT SESSION buffer_size_test ON SERVER

            DECLARE @session nvarchar(max)
            SET @session = 'create event session buffer_size_test on server
                            add event sql_statement_completed
                            add target ring_buffer
                            with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
            EXEC sp_executesql @session

            SET @session = 'alter event session buffer_size_test on server
                            state = start'
            EXEC sp_executesql @session

            INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
        END TRY
        BEGIN CATCH
            INSERT @buf_size_output (input_memory_kb)
                SELECT @buf_size
        END CATCH
        SET @buf_size = @buf_size + 1
    END

    DROP EVENT SESSION buffer_size_test ON SERVER

    SELECT MIN(input_memory_kb) start_memory_range_kb,
           MAX(input_memory_kb) end_memory_range_kb,
           total_regular_buffers, regular_buffer_size, total_buffer_size
    FROM @buf_size_output
    GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Events internals. - Mike

    Read the article

  • What's new in Solaris 11.1?

    - by Karoly Vegh
    Solaris 11.1 is released. This is the first release update since Solaris 11 11/11; the versioning has been changed from the MM/YY style to 11.1, highlighting that this is Solaris 11 Update 1. Solaris 11 itself has been great. What's new in Solaris 11.1? Allow me to pick some new features from the What's New PDF that can be found in the official Oracle Solaris 11.1 Documentation. The updates are very numerous; I really can't include them all.
    I. New AI Automated Installer RBAC profiles have been introduced to enable delegation of installation tasks.
    II. The interactive installer now supports installing the OS to iSCSI targets.
    III. ASR (Auto Service Request) and OCM (Oracle Configuration Manager) have been enabled by default to proactively provide support information and create service requests to speed up support processes. This is optional and can be disabled, but it helps a lot in support cases. For further information, see: http://oracle.com/goto/solarisautoreg
    IV. The new command svcbundle helps you to create SMF manifests without having to struggle with XML editing. (By the way, do you know the interactive editprop subcommand in svccfg? The listprop/setprop subcommands are great for scripting and automating, but for an interactive property editing session try, for example, this: svccfg -s svc:/application/pkg/system-repository:default editprop)
    V. pfedit: Ever wondered how to delegate editing permissions to certain files? It is well known that "sudo /usr/bin/vi /etc/hosts" is not the right way, for sudo elevates the complete vi process to admin levels, and the user can "break" out of the session as root by simply starting a shell from that vi. Now, the new pfedit command provides a solution to exactly this challenge: an auditable, secure, per-user configurable editing possibility. See the pfedit man page for examples.
    VI. rsyslog, the popular logging daemon (filters, SSL, formattable output, SQL collect...), has been included in Solaris 11.1 as an alternative to syslog.
    VII. Zones: Solaris Zones, as a major Solaris differentiator, got lots of love in terms of new features:
    - ZOSS (Zones on Shared Storage): Placing your zones on shared storage (FC, iSCSI) has never been this easy, via zonecfg.
    - Parallel updates: with S11's boot environments, updating zones was no problem and meant no downtime anyway, but now you can update them in parallel, a much faster update if you are running a large number of zones. This is like parallel patching in Solaris 10, but with all the IPS/ZFS/S11 goodness.
    - Per-zone fstype statistics: Running zones on a shared filesystem complicates I/O debugging, since ZFS collects all the random writes and delivers them sequentially to boost performance. Now, via kstat, you can find out which zone's I/O has an impact on the other ones; see the examples in the documentation: http://docs.oracle.com/cd/E26502_01/html/E29024/gmheh.html#scrolltoc
    - Zones got RDSv3 protocol support for InfiniBand, and IPoIB support with Crossbow's anet (automatic vnic creation) feature.
    - NUMA I/O support for Zones: customers can now determine the NUMA I/O topology of the system from within zones.
    VIII. Security got a lot of attention too:
    - Automated security/audit reporting, with built-in reporting templates, e.g. for PCI (payment card industry) audits.
    - PAM is now configurable on a per-user basis instead of system-wide, allowing different authentication requirements for different users.
    - SSH in Solaris 11.1 now supports running in FIPS 140-2 mode, that is, in a U.S. government security accredited fashion.
    - SHA512/224 and SHA512/256 cryptographic hash functions are implemented in a FIPS-compliant way, and on a T4 implemented in silicon! That is, government-approved cryptography at HW speed.
    - Generally, Solaris is currently under evaluation to be both FIPS and Common Criteria certified.
    IX. Networking, as one of the core strengths of Solaris 11, has been extended with:
    - Data Center Bridging (DCB): not only setups where network and storage share the same fabric (FCoE, anyone?) can have Quality-of-Service requirements. DCB enables peers to distinguish traffic based on priorities. Your NICs have to support DCB; see the documentation, and additional information on Wikipedia.
    - DataLink MultiPathing (DLMP) enables link aggregation to span multiple switches, even between those of different vendors. But there are essential differences from the good old bandwidth-aggregating LACP; see the documentation: http://docs.oracle.com/cd/E26502_01/html/E28993/gmdlu.html#scrolltoc
    - VNIC live migration is now supported from one physical NIC to another on the fly.
    X. Data management:
    - FedFS (Federated FileSystem) is new; it relies on Solaris 11's NFS referring mechanism to join separate shares of different NFS servers into a single filesystem namespace. The referring system has been there since S11 11/11; in Solaris 11.1 FedFS uses LDAP as the one global nameservice to bind them all.
    - The iSCSI initiator now uses the T4 CPU's HW-implemented CRC32 algorithm, thus improving iSCSI throughput while reducing CPU utilization on a T4.
    - Storage locking improvements are now RAC aware, speeding up throughput with better locking communication between nodes by up to 20%!
    XI. Kernel performance optimizations:
    - The new Virtual Memory subsystem ("VM2") now scales to 100+ TB memory ranges.
    - The memory predictor monitors large memory page usage and adjusts memory page sizes to applications' needs.
    - OSM, the Optimized Shared Memory, allows Oracle DBs' SGA to be resized online.
    XII. The Power Aware Dispatcher is now enabled by default, reducing power consumption of idle CPUs. Also, the LDoms' Power Management policies and the poweradm settings in the Solaris 11 OS will cooperate.
    XIII. x86 boot: upgrade to GRUB2 (the Grand Unified Bootloader). Because GRUB2 differs syntactically in its configuration from GRUB1, one shall not edit the new grub configuration (grub.cfg) but use the new bootadm features to update it. GRUB2 adds UEFI support and also support for disks over 2 TB.
    XIV. Improved viewing of per-CPU statistics in mpstat. This one might seem of less importance at first, but nowadays having better sorting/filtering possibilities on a periodically updated mpstat output of 256+ vCPUs can be a blessing.
    XV. Support for Solaris Cluster 4.1: The What's New document doesn't actually mention this one, since OSC 4.1 had not been released at the time 11.1 was. But since then it is available, and it requires Solaris 11.1. And it's only a "pkg update" away.
    ...and I seriously need to stop here. There's a lot I missed: Edge Virtual Bridging, lofi tuning, ZFS sharing and crypto enhancements, USB 3.0, pulseaudio, trusted extensions updates, etc. But if I mentioned all those, I would effectively copy the What's New document. Which I recommend reading now anyway; it is a great extract of the 300+ new projects and RFE follow-ups in S11.1. And this blog post is a summary of that extract.
    For closing words, allow me to come back to Requests For Enhancement, RFEs. Any customer can request features. Open up a Support Request, explain that this is an RFE, and describe the feature you/your company would like to see implemented in S11. The more SRs are collected for an RFE, the better its chances of getting implemented. Feel free to provide feedback about the product, as well as about the Solaris 11.1 Documentation, using the "Feedback" button there. Both the Solaris engineers and the documentation writers are eager to hear your input. Feel free to comment about this post too. Except that it's too long ;)
    wbr, charlie

    Read the article

  • SQL Server 2012 Memory Manager KB articles

    - by SQLOS Team
    Since the release of SQL Server 2012 with a redesigned memory manager, a steady stream of KB articles has been produced by CSS to provide guidance on the new or changed options, as well as fixes that have been published.
    How has memory sizing changed in SQL 2012?
    2663912 Memory configuration and sizing considerations in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2663912
    Setting "locked pages" to avoid SQL Server memory pages getting swapped has been simplified, particularly for Standard Edition; the details can be found here:
    2659143 How to enable the "locked pages" feature in SQL Server 2012 - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2659143
    Note the following deprecation (particularly relevant for 32-bit installations):
    2644592 The "AWE enabled" SQL Server feature is deprecated - http://support.microsoft.com/default.aspx?scid=kb;EN-US;2644592
    Note the following fixes available:
    2708594 FIX: Locked page allocations are enabled without any warning after you upgrade to SQL Server 2012 - http://support.microsoft.com/kb/2708594/EN-US
    2688697 FIX: Out-of-memory error when you run an instance of SQL Server 2012 on a computer that uses NUMA - http://support.microsoft.com/kb/2688697/EN-US
    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Oracle RDBMS Server 11gR2 Pre-Install RPM for Oracle Linux 6 has been released

    - by Lenz Grimmer
    Now that the certification of Oracle Database 11g R2 with Oracle Linux 6 and the Unbreakable Enterprise Kernel has been announced, we are glad to announce the availability of oracle-rdbms-server-11gR2-preinstall, the Oracle RDBMS Server 11gR2 Pre-install RPM package (formerly known as oracle-validated). Designed specifically for Oracle Linux 6, this RPM aids in the installation of the Oracle Database. In order to install Oracle Database 11g R2 on Oracle Linux 6, your system needs to meet a few prerequisites, as outlined in the Linux Installation Guides. Using the Oracle RDBMS Server 11gR2 Pre-install RPM, which is now available from the Unbreakable Linux Network or via the Oracle public yum repository, you can complete most of the pre-installation configuration tasks. The pre-install package is available for x86_64 only. Specifically, the package:
    - Causes the download and installation of various software packages and specific versions needed for database installation, with package dependencies resolved via yum
    - Creates the user oracle and the groups oinstall and dba, which are the defaults used during database installation
    - Modifies kernel parameters in /etc/sysctl.conf to change settings for shared memory, semaphores, the maximum number of file descriptors, and so on
    - Sets hard and soft shell resource limits in /etc/security/limits.conf, such as the number of open files, the number of processes, and stack size, to the minimum required based on the Oracle Database 11g Release 2 Server installation requirements
    - Sets numa=off in the kernel boot parameters for x86_64 machines
    Please see the release announcement for further details and instructions. Also take a look at Ginny Henningsen's "How I Simplified Oracle Database Installation on Oracle Linux" article on the Oracle Technology Network for a general description of how to perform the installation of the Oracle Database on Oracle Linux. While the article refers to Oracle Linux 5 and the former "oracle-validated" package, the steps for Oracle Linux 6 are still very similar (we're looking into updating that article for Oracle Linux 6).

    Read the article

  • Announcing Unbreakable Enterprise Kernel Release 3 for Oracle Linux

    - by Lenz Grimmer
    We are excited to announce the general availability of the Unbreakable Enterprise Kernel Release 3 for Oracle Linux 6. The Unbreakable Enterprise Kernel Release 3 (UEK R3) is Oracle's third major supported release of its heavily tested and optimized Linux kernel for Oracle Linux 6 on the x86_64 architecture. UEK R3 is based on mainline Linux version 3.8.13. Some notable highlights of this release include:
    - Inclusion of DTrace for Linux into the kernel (no longer a separate kernel image). DTrace for Linux now supports probes for user-space statically defined tracing (USDT) in programs that have been modified to include embedded static probe points
    - Production support for Linux Containers (LXC), which were previously released as a technology preview
    - Btrfs file system improvements (subvolume-aware quota groups, cross-subvolume reflinks, btrfs send/receive to transfer file system snapshots or incremental differences, file hole punching, hot-replacing of failed disk devices, device statistics)
    - Improved support for Control Groups (cgroups)
    - The ext4 file system can now store the content of a small file inside the inode (inline_data)
    - TCP Fast Open (TFO) can speed up the opening of successive TCP connections between two endpoints
    - FUSE file system performance improvements on NUMA systems
    - Support for the Intel Ivy Bridge (IVB) processor family
    - Integration of the OpenFabrics Enterprise Distribution (OFED) 2.0 stack, supporting a wide range of InfiniBand protocols, including updates to Oracle's Reliable Datagram Sockets (RDS)
    - Numerous driver updates in close coordination with our hardware partners
    UEK R3 uses the same versioning model as the mainline Linux kernel. Unlike UEK R2 (which identifies itself as version "2.6.39", even though it is based on mainline Linux 3.0.x), "uname" returns the actual version number (3.8.13). For further details on the new features, changes and any known issues, please consult the Release Notes. The Unbreakable Enterprise Kernel Release 3 and related packages can be installed using the yum package management tool on Oracle Linux 6 Update 4 or newer, both from the Unbreakable Linux Network (ULN) and our public yum server. Please follow the installation instructions in the Release Notes for a detailed description of the steps involved. The kernel source tree will also be available via the git source code revision control system from https://oss.oracle.com/git/?p=linux-uek3-3.8.git If you would like to discuss your experiences with Oracle Linux and UEK R3, we look forward to your feedback on our public Oracle Linux Forum.

    Read the article

  • Top Partners 2010 Specialization Awards

    - by Paulo Folgado
    Portugal Top Partners 2010 Specialization Awards: Be Recognized
    Dear Partner,
    The Top Partners awards, which each year recognize the business achievements of Oracle partners, are taking place once again. This year, under the banner of the Partner Specialization Awards, we intend to reward not only business results but also our partners' efforts in terms of specialization and the development of their self-sufficiency. Another innovation this year is that the process is based on partners submitting applications, which are then evaluated against defined criteria by a jury that includes external entities.
    Award categories:
    - Database Partner of the Year
    - Middleware Partner of the Year
    - Applications Partner of the Year
    - ISV Partner of the Year
    - Midsize Partner of the Year
    - Industry Partner of the Year
    - Accelerate Partner of the Year
    One of the goals of the Top Partners 2010 Specialization Awards is to highlight and encourage cooperation between Oracle and its partners. For this reason, in addition to the various specific criteria, partners must meet a set of general criteria:
    - Cooperation with Oracle
    - Investment in developing Oracle knowledge
    - A joint approach to the market
    - Active use of Oracle PartnerNetwork resources and tools through the OPN Portal
    The Top Partners 2010 Specialization Awards are open to all partners in Portugal registered as OPN Specialized in one or more specializations. Applications can be submitted until May 31, 2010. Be recognized! For more information click here

    Read the article

  • White Paper on Analysis Services Tabular Large-scale Solution #ssas #tabular

    - by Marco Russo (SQLBI)
    Since the first beta of Analysis Services 2012, I have worked with many companies designing and implementing solutions based on Analysis Services Tabular. I am glad that Microsoft published a white paper about a case study using one of these scenarios: An Analysis Services Case Study: Using Tabular Models in a Large-scale Commercial Solution. Alberto Ferrari is the author of the white paper and many people contributed to it. The final result is a very technical document based on a case study, which provides a level of detail that I don't often see in other case studies (which are usually more marketing-oriented). This white paper has the following structure:
    - Requirements (data model, capacity planning, client tool)
    - Options considered (SQL Server Columnstore Indexes, SSAS Multidimensional, SSAS Tabular)
    - Data Model optimizations (memory compression, query performance, scalability)
    - Partitioning and Processing strategy for near real-time latency
    - Hardware selection (NUMA analysis, Azure VM tests)
    - Scalability tests (estimation of maximum users per node)
    If you are in charge of evaluating Tabular as an analytical engine, or if you have to design your solution based on Tabular, this white paper is a must-read. But if you just want to increase your knowledge of Analysis Services, you will find a lot of useful technical information. That said, my favorite quote of the document is the following one, funny but true:
    [...] After several trials, the clear winner was a video gaming machine that one guy on the team used at home. That computer outperformed any available server, running twice as fast as the server-class machines we had in house. At that point, it was clear that the criteria for choosing the server would have to be expanded a bit, simply because it would have been impossible to convince the boss to build a cluster of gaming machines and trust it to serve our customers. But, honestly, if a business has the flexibility to buy gaming machines (assuming the machines can handle capacity), do this. Owen Graupman, inContact
    I want to write a longer discussion about how companies are adopting Tabular in scenarios where it is the hidden engine of a more complex solution (and not the classical "BI system"), because it is more frequent than you might expect (and has several advantages over many alternative approaches).

    Read the article

  • VirtualBox 4.2.14 is now available

    - by user12611829
    The VirtualBox development team has just released version 4.2.14, and it is now available for download. This is a maintenance release for version 4.2 and contains quite a few fixes. Here is the list from the official Changelog:
    - VMM: another TLB invalidation fix for non-present pages
    - VMM: fixed a performance regression (4.2.8 regression; bug #11674)
    - GUI: fixed a crash on shutdown
    - GUI: prevent stuck keys under certain conditions on Windows hosts (bugs #2613, #6171)
    - VRDP: fixed a rare crash on guest screen resize
    - VRDP: allow changing VRDP parameters (including enabling/disabling the server) if the VM is paused
    - USB: fixed passing through devices on Mac OS X hosts to a VM with 2 or more virtual CPUs (bug #7462)
    - USB: fixed hang during isochronous transfer with certain devices (4.1 regression; Windows hosts only; bug #11839)
    - USB: properly handle orphaned URBs (bug #11207)
    - BIOS: fixed function for returning the PCI interrupt routing table (fixes NetWare 6.x guests)
    - BIOS: don't use the ENTER / LEAVE instructions in the BIOS as these don't work in the real mode as set up by certain guests (e.g. Plan 9 and QNX 4)
    - DMI: allow configuring DmiChassisType (bug #11832)
    - Storage: fixed lost writes if iSCSI is used with snapshots and asynchronous I/O (bug #11479)
    - Storage: fixed accessing certain VHDX images created by Windows 8 (bug #11502)
    - Storage: fixed hang when creating a snapshot using Parallels disk images (bug #9617)
    - 3D: seamless + 3D fixes (bug #11723)
    - 3D: version 4.2.12 was not able to read saved states of older versions under certain conditions (bug #11718)
    - Main/Properties: don't create a guest property for non-running VMs if the property does not exist and is about to be removed (bug #11765)
    - Main/Properties: don't forget to make new guest properties persistent after the VM was terminated (bug #11719)
    - Main/Display: don't lose seamless regions during screen resize
    - Main/OVF: don't crash during import if the client forgot to call Appliance::interpret() (bug #10845)
    - Main/OVF: don't create invalid appliances by stripping the file name if the VM name is very long (bug #11814)
    - Main/OVF: don't fail if the appliance contains multiple file references (bug #10689)
    - Main/Metrics: fixed Solaris file descriptor leak
    - Settings: limit depth of snapshot tree to 250 levels, as more will lead to decreased performance and may trigger crashes
    - VBoxManage: fixed setting the parent UUID on diff images using sethdparentuuid
    - Linux hosts: work around for not crashing as a result of automatic NUMA balancing, which was introduced in Linux 3.8 (bug #11610)
    - Windows installer: force the installation of the public certificate in background (i.e. completely prevent user interaction) if the --silent command line option is specified
    - Windows Additions: fixed problems with partial install in the unattended case
    - Windows Additions: fixed display glitch with the Start button in seamless mode for some themes
    - Windows Additions: seamless mode and auto-resize fixes
    - Windows Additions: fixed trying to retrieve new auto-logon credentials if current ones were not processed yet
    - Windows Additions installer: added the /with_wddm switch to select the experimental WDDM driver by default
    - Linux Additions: fixed setting own timed out and aborted texts in the information label of the lightdm greeter
    - Linux Additions: fixed compilation against Linux 3.2.0 Ubuntu kernels (4.2.12 regression as a side effect of the Debian kernel build fix; bug #11709)
    - X11 Additions: reduced the CPU load of VBoxClient in drag and drop mode
    - OS/2 Additions: made the mouse wheel work (bug #6793)
    - Guest Additions: fixed problems copying and pasting between two guests on an X11 host (bug #11792)
    The full changelog can be found here. You can download binaries for Solaris, Linux, Windows and MacOS hosts at http://www.virtualbox.org/wiki/Downloads
    Technorati Tags: Oracle Virtualization VirtualBox

    Read the article

  • Setting up MongoDB in High Performance Computing LSF linux cluster

    - by Dnaiel
    I am trying to run mongo in an LSF cluster computing environment where I have no admin control. Our sysadmin installed mongodb, but it is not running. Any ideas on what I should ask the server admin to do for it to run? Or could I run it locally?
    [node1382]allelix> mongod --dbpath /users/dnaiel/ma/mongodb/
    Tue Oct 2 21:33:48 [initandlisten] MongoDB starting : pid=22436 port=27017 dbpath=/seq/epigenome01/allelix/ma/mongodb/ 64-bit host=node1382
    Tue Oct 2 21:33:48 [initandlisten]
    Tue Oct 2 21:33:48 [initandlisten] ** WARNING: You are running on a NUMA machine.
    Tue Oct 2 21:33:48 [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
    Tue Oct 2 21:33:48 [initandlisten] ** numactl --interleave=all mongod [other options]
    Tue Oct 2 21:33:48 [initandlisten]
    Tue Oct 2 21:33:48 [initandlisten] db version v2.2.0, pdfile version 4.5
    Tue Oct 2 21:33:48 [initandlisten] git version: f5e83eae9cfbec7fb7a071321928f00d1b0c5207
    Tue Oct 2 21:33:48 [initandlisten] build info: Linux ip-10-2-29-40 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_49
    Tue Oct 2 21:33:48 [initandlisten] options: { dbpath: "/users/dnaiel/ma/mongodb/" }
    Tue Oct 2 21:33:48 [initandlisten] journal dir=users/dnaiel/ma/mongodb/journal
    Tue Oct 2 21:33:48 [initandlisten] recover begin
    Tue Oct 2 21:33:48 [initandlisten] info no lsn file in journal/ directory
    Tue Oct 2 21:33:48 [initandlisten] recover lsn: 0
    Tue Oct 2 21:33:48 [initandlisten] recover /seq/epigenome01/allelix/ma/mongodb/journal/j._0
    Tue Oct 2 21:33:48 [initandlisten] recover cleaning up
    Tue Oct 2 21:33:48 [initandlisten] removeJournalFiles
    Tue Oct 2 21:33:48 [initandlisten] recover done
    Tue Oct 2 21:33:48 [websvr] admin web console waiting for connections on port 28017
    Tue Oct 2 21:33:48 [initandlisten] waiting for connections on port 27017
    It basically waits forever and cannot start mongodb. These servers are not webservers, but they do have network access; it's a cloud computing LSF environment. Any advice would be welcome, thanks in advance.

    Read the article

  • SQL Server 2005 SE SP3 on Windows Server 2008 R2 x64 premature query disconnections

    - by southernpost
    New Dell PowerEdge R910: 4x8 Intel X7560, 192GB RAM, hardware NUMA, local RAID, Broadcom NetXtreme II multiport NIC, unteamed, TCP Offload disabled, RSS disabled, NetDMA disabled, Hyperthreading disabled. SQL Server 2005 SE x64 SP3 on Windows Server 2008 R2 EE x64. No other apps on the server. Max Mem = 180GB, Max DOP = 4. Existing Windows Server 2003 R2 EE x64 app server connecting to the Dell via firewall using SQL-authenticated logins.
    Symptoms: intermittent errors at the app server: "A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)"
    Findings:
    - Queries run from SSMS located on another machine within the same domain as the SQL Server run without error.
    - SQLIO showed good performance.
    - Windows and SQL logs show no related messages.
    - Microsoft reviewed a PssDiag trace and stated that "We are not seeing timeouts from SQL Side. The queries being run against the database are timing out within 9secs. This is a database connectivity error." and "we can also see from the AttnSeq column that we are not seeing any Attentions from the SQL Side."
    - Dell has confirmed that we are using the latest Broadcom drivers.

    Read the article

  • Reminder: Benefits of Virtualization for ISVs - 14/Dec/10, Porto

    - by Paulo Folgado
    This training session addresses the main difficulties that Independent Software Vendors (ISVs) face when they have to choose the platforms on which they will certify, install and support their applications, and how Oracle VM (and Oracle Enterprise Linux) can help them overcome those difficulties. The classic ISV business model - develop an application to solve a given business problem, analyze the market to determine which operating systems and hardware the customers in your target market use, and decide to support the platforms that 80% of those customers use (treating other configurations requested by a few important customers as exceptions) - worked well in the 1980s and early 1990s, when there was less platform diversity. However, with the appearance in recent years of multiple operating system versions and Linux "flavors", this model has become a nightmare. Each customer has its platform of choice and expects ISVs to support those choices, which is a drain on ISVs' resources and costs. Oracle's virtualization technologies, by "simulating" a given hardware configuration so that the operating system "thinks" it is running on a predefined, standardized hardware configuration on which the applications run, are an excellent vehicle for ISVs looking for a simple, easy-to-install and easy-to-support solution for deploying their applications, enabling large cost savings in development, testing and support.
    Who should attend? This training is aimed primarily at those who make decisions about the technology platforms the ISV has to support, as well as those who deal with the cost structure of its operations, with a view of the costs associated with developing, certifying, installing and supporting multiple platforms. If you want to know more about Oracle VM and how it can help drastically reduce your costs, don't miss this training.
    AGENDA:
    09:00 Welcome & Introduction
    - ISV Partner View... Why Use Virtualization? The ISV Deployment Dilemma: The Problem of Supporting Multiple Platforms; How can Virtualization Help?
    - The use of Templates: What is a Template? How are Templates Created? Customer's Point of View; Assembly Builder; Weblogic Virtual Edition
    - Managing Oracle VM: Best Practices for Virtualizing Oracle Database 11g; Managing Virtual Environments
    - Coffee Break
    - Oracle Complete and Integrated Virtualization Portfolio: From Datacenter to Desktop; The Next Generation Virtualization; Private Cloud with Middleware Virtualization
    - Benefits of Using Oracle VM (and Oracle Enterprise Linux): Support Advantages; Production Ready Virtual Machines; Licensing Terms; Partner Resources and OPN Benefits
    12:45 Q&A and Wrap-up
    Date: December 14, 09:00 / 13:00
    Location: Oracle Portugal, Av. da Boavista, 1837 - Edifício Burgo - Escritório 13.4, 4100-133 PORTO
    Audience: Development, Technology and Services managers of Oracle ISV partners
    Training delivered by Altimate

    Read the article

  • Top 10 Linked Blogs of 2010

    - by Bill Graziano
    Each week I send out a SQL Server newsletter and include links to interesting blog posts. I've linked to over 500 blog posts so far in 2010. Late last year I started storing those links in a database so I could do a little reporting. I tend to link to posts related to the OLTP engine. I also try to link to the individual blogger in the group blogs. Unfortunately that wasn't possible for the SQLCAT and CSS blogs. I also have a real weakness for posts related to PASS. These are the top 10 blogs that I linked to during the year, ordered by the number of posts I linked to.
    1. Paul Randal - Paul writes extensively on the internals of the relational engine. Lots of great posts around transactions, the transaction log, disaster recovery, corruption, indexes and DBCC. I also linked to many of his SQL Server myths posts.
    2. Glenn Berry - Glenn writes very interesting posts on how hardware affects SQL Server. I especially like his posts on the various CPU platforms. These aren't necessarily topics that I'm searching for, but I really enjoy reading them.
    3. The SQLCAT Team - This Microsoft team focuses on the largest and most interesting SQL Server installations. They regularly publish white papers and best practices.
    4. SQL Server CSS Team - These are the top engineers from the Microsoft Customer Service and Support group. These are the folks you finally talk to after your case has been escalated about 20 times. They write about the interesting problems they find.
    5. Brent Ozar - The posts I linked to mostly focused on the relational engine: CPU, NUMA, SSD drives, performance monitoring, etc. But Brent writes about a real variety of topics including blogging, social networking, speaking, the MCM, SQL Azure and anything else that seems to strike his fancy. His posts are always well written and thought provoking.
    6. Jeremiah Peschka - A number of Jeremiah's posts weren't about SQL Server. He's very active in the "NoSQL" area and I linked to a number of those posts. I think it's important for people to know what other technologies are out there.
    7. Brad McGehee - Brad writes about being a DBA, including maintenance plans, DBA checklists, compression and audit.
    8. Thomas LaRock - I linked to a variety of posts from PBM to networking to 24 Hours of PASS to TDE. Just a real variety of topics. Tom always writes with an interesting style, usually mixing in a movie theme and/or bacon.
    9. Aaron Bertrand - Many of my links this year were Denali features. He also had a great series on bad habits to kick.
    10. Michael J. Swart - This last one surprised me. There are some well-known SQL Server bloggers below Michael on this list. I linked to posts on indexes, hierarchies, transactions, I/O performance and a variety of other engine-related posts. All are interesting and well thought out. Many of his non-SQL posts are also very good. He seems to have an interest in puzzles and other brain teasers. Michael, I won't be surprised again!

    Read the article

  • About the K computer

    - by nospam(at)example.com (Joerg Moellenkamp)
    Okay, after getting yet another mail because of the new #1 on the Top500 list, I want to add some comments from my side:
    - Yes, the system is using SPARC processors. And that is great news for a SPARC fan like me. It is using the SPARC VIIIfx processor from Fujitsu clocked at 2 GHz.
    - No, it isn't the only one. Most people are saying there are two systems on the Top500 list using SPARC (#77 JAXA and #1 K), but in fact there are three. The Tianhe-1 (#2 on the Top500 list) supercomputer contains 2048 Galaxy "FT-1000" 1 GHz 8-core processors. Don't know it? The FeiTeng-1000: this proc is an 8-core, 8-threads-per-core, 1 GHz processor made in China. And it's SPARC based. By the way, this sounds really familiar to me; perhaps the people just took the open-sourced UltraSPARC-T2 design, because some of the parameters sound just too similar. However, it looks like Tianhe-1 is using the SPARCs as input nodes and not as compute nodes.
    - No, I don't see it as the next M-series processor. Simple reason: you can't create SMP systems out of these chips; they simply don't have the functionality to do so. Even when there are multiple CPUs on a single board, they are not connected like an SMP/NUMA shared-memory machine; they are connected with the cluster interconnect (in this case the Tofu interconnect) and work like a large cluster.
    - Yes, it has a lot of oomph in Linpack; however, I assume a lot came from the extensions to the SPARCv9 standard.
    - No, Linpack has no relevance for any commercial workload. Linpack is such a special load that even some HPC people argue it isn't really a good benchmark for HPC. It's embarrassingly parallel, and it can work with relatively small interconnects compared to the interconnects in SMP systems (however, we are getting into spheres where SMP interconnects were a few years ago). Amdahl isn't hitting that hard when running Linpack.
    - Yes, it's a good move to use SPARC. At some time in the last 10 years, there was an interesting twist in perception: SPARC was considered a proprietary architecture and x86 was the open architecture. However, it's vice versa: try to create an x86 clone and you have a lot of intellectual property problems; create a SPARC clone and you have to spend 100 bucks or so to get the specification from the SPARC Foundation and develop your own SPARC processor. Fujitsu has been doing this for a long time now, so they had their own processor and their own know-how. So why was SPARC a good choice? Well, essentially Fujitsu can do what they want with their core, as it is their core, for example adding the extensions to the SPARCv9 chipset; getting Intel to create extensions to x86 to help you with your product is a little bit harder. So Fujitsu could do what they needed to do with their processor in order to create such a supercomputer.
    - No, the K is really using no FPGAs or GPUs as accelerators. The K is really using the CPU for this job. Yes, it has a significantly enhanced FPU capable of executing 8 instructions in parallel.
    - No, it doesn't run Solaris. Yes, it uses Linux. No, it doesn't hurt me... as my colleague Roland Rambau (he knows a lot about HPC) once said to me: it doesn't matter which OS is staying out of the way of the workload in HPC.

    Read the article

  • Linux RAID-0 performance doesn't scale up over 1 GB/s

    - by wazoox
    I have trouble getting the max throughput out of my setup. The hardware is as follows:
    - dual Quad-Core AMD Opteron(tm) Processor 2376
    - 16 GB DDR2 ECC RAM
    - dual Adaptec 52245 RAID controllers
    - 48 1 TB SATA drives set up as 2 RAID-6 arrays (256KB stripe) + spares
    Software: plain vanilla 2.6.32.25 kernel, compiled for AMD-64, optimized for NUMA; Debian Lenny userland. Benchmarks run: disktest, bonnie++, dd, etc. All give the same results. No discrepancy here. I/O scheduler used: noop. Yeah, no trick here.
    Up until now I basically assumed that striping (RAID 0) several physical devices should increase performance roughly linearly. However, this is not the case here:
    - each RAID array achieves about 780 MB/s write, sustained, and 1 GB/s read, sustained;
    - writing to both RAID arrays simultaneously with two different processes gives 750 + 750 MB/s, and reading from both gives 1 + 1 GB/s;
    - however, when I stripe both arrays together, using either mdadm or lvm, the performance is about 850 MB/s writing and 1.4 GB/s reading: at least 30% less than expected!
    - running two parallel writer or reader processes against the striped arrays doesn't improve the figures; in fact, it degrades performance even further.
    So what's happening here? Basically I ruled out bus or memory contention, because when I run dd on both drives simultaneously, aggregate write speed actually reaches 1.5 GB/s and reading speed tops 2 GB/s. So it's not the PCIe bus. I suppose it's not the RAM. It's not the filesystem, because I get exactly the same numbers benchmarking against the raw device or using XFS. And I also get exactly the same performance using either LVM striping or md striping. What's wrong? What's preventing a process from going up to the max possible throughput? Is Linux striping defective? What other tests could I run?
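    One more experiment that could help isolate the md/LVM layer: read the striped device sequentially with O_DIRECT, so the page cache and filesystem are out of the picture, and compare single-process against multi-process throughput. A minimal C probe is sketched below; the device path is a placeholder, and this is dd-style measurement, not a substitute for the existing benchmarks.

        #define _GNU_SOURCE          /* for O_DIRECT */
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>
        #include <unistd.h>

        int main(void)
        {
            const size_t bufsz = 1 << 20;              /* 1 MiB per read */
            void *buf;
            if (posix_memalign(&buf, 4096, bufsz))     /* O_DIRECT needs alignment */
                return 1;

            int fd = open("/dev/md0", O_RDONLY | O_DIRECT);  /* placeholder device */
            if (fd < 0) { perror("open"); return 1; }

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            long long total = 0;
            for (int i = 0; i < 4096; i++) {           /* read 4 GiB in total */
                ssize_t n = read(fd, buf, bufsz);
                if (n <= 0) break;
                total += n;
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.0f MB/s\n", total / 1e6 / secs);
            close(fd);
            free(buf);
            return 0;
        }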

    Read the article

  • How to get the best LINPACK result and conquer the Top500?

    - by knweiss
    Given a large Linux HPC cluster with hundreds/thousands of nodes, what are your best practices to get the best possible LINPACK benchmark (HPL) result to submit for the Top500 supercomputer list? To give you an idea what kind of answers I would appreciate, here are some sub-questions (with links):
    - How do you tune the parameters (N, NB, P, Q, memory alignment, etc.) for the HPL.dat file (without spending too much time trying each possible permutation, especially with large problem sizes N)?
    - Are there any Top500 submission rules to be aware of? What is allowed, what isn't?
    - Which MPI product, which version? Does it make a difference?
    - Any special host order in your MPI machine file? Do you use CPU pinning?
    - How do you configure your interconnect? Which interconnect?
    - Which BLAS package do you use for which CPU model? (Intel MKL, AMD ACML, GotoBLAS2, etc.)
    - How do you prepare for the big run (on all nodes)? Start with small runs on a subset of nodes and then scale up? Is it really necessary to run LINPACK with a big run on all of the nodes (or is extrapolation allowed)?
    - How do you optimize for the latest Intel/AMD CPUs? Hyperthreading? NUMA?
    - Is it worth it to recompile the software stack, or do you use precompiled binaries? Which settings? Which compiler optimizations, which compiler? (What about profile-based compilation?)
    - How do you get the best result given only a limited amount of time to do the benchmark run? (You can block a huge cluster forever.)
    - How do you prepare the individual nodes (stopping system daemons, freeing memory, etc.)?
    - How do you deal with hardware faults (ruining a huge run)?
    - Are there any must-read documents or websites about this topic? E.g. I would love to hear some background stories of some of the current Top500 systems and how they did their LINPACK benchmark.
    I deliberately don't want to mention concrete hardware details or discuss hardware recommendations because I don't want to limit the answers. However, feel free to mention hints e.g. for specific CPU models.
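    On the first sub-question, a common starting point before any permutation sweeps is to size N so the matrix fills a large fraction of aggregate RAM and then align it to NB. A small C sketch of that rule of thumb follows; the 80% fill factor, the node/RAM figures and NB = 192 are assumptions to illustrate the arithmetic, not recommendations.

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double total_ram_gib = 1024 * 16.0;   /* e.g. 1024 nodes x 16 GiB (assumed) */
            int nb = 192;                          /* example block size; depends on BLAS */

            /* N*N doubles (8 bytes each) should consume roughly 80% of RAM. */
            double bytes = total_ram_gib * 1024 * 1024 * 1024 * 0.80;
            long n = (long)sqrt(bytes / 8.0);
            n -= n % nb;                           /* align N to the block size */

            printf("suggested N ~= %ld (NB = %d)\n", n, nb);
            return 0;
        }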

    Read the article

  • 2xAMD Opteron 6128 with libvirt, Physical CPU 13 doesn't exist

    - by yak
    I need help with a libvirt(?) problem.
    Server specs: ProLiant DL165 G7, 2x AMD Opteron(tm) Processor 6128.
    System: Debian GNU/Linux testing (wheezy), kernel 3.2.0-3-amd64, libvirt 0.9.12-5, kvm 1:1.1.2+dfsg-2
    $ grep processor /proc/cpuinfo | wc -l
    16
    $ virsh nodeinfo
    setlocale: No such file or directory
    CPU model: x86_64
    CPU(s): 16
    CPU frequency: 800 MHz
    CPU socket(s): 2
    Core(s) per socket: 4
    Thread(s) per core: 1
    NUMA cell(s): 1
    Memory size: 66114200 KiB
    $ virsh capabilities
    ..
    <topology>
      <cells num='4'>
        <cell id='0'>
          <cpus num='4'>
            <cpu id='0'/> <cpu id='1'/> <cpu id='2'/> <cpu id='3'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='4'>
            <cpu id='4'/> <cpu id='5'/> <cpu id='6'/> <cpu id='7'/>
          </cpus>
        </cell>
        <cell id='2'>
          <cpus num='4'>
            <cpu id='12'/> <cpu id='13'/> <cpu id='14'/> <cpu id='15'/>
          </cpus>
        </cell>
        <cell id='3'>
          <cpus num='4'>
            <cpu id='8'/> <cpu id='9'/> <cpu id='10'/> <cpu id='11'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    ..
    $ virsh vcpupin vm 0 13,12,11,10,9,8,7,6,5
    error: Physical CPU 13 doesn't exist.
    error: cpulist: Invalid format.
    Question: why do my VM guests use only the first 8 CPUs while the next 8 are idling?
    $ for host in $(virsh list | awk '{print $2}'); do virsh vcpuinfo $host; done | grep ^CPU: | sort | uniq
    CPU: 0
    CPU: 1
    CPU: 2
    CPU: 3
    CPU: 4
    CPU: 5
    CPU: 6
    CPU: 7
    Any ideas how to change it?

    Read the article

  • CLR via C# 3rd Edition is out

    - by Abhijeet Patel
    Time for some book news update. CLR via C#, 3rd Edition seems to have been out for a little while now. The book was released in early Feb this year, and needless to say my copy is on it’s way. I can barely wait to dig in and chew on the goodies that one of the best technical authors and software professionals I respect has in store. The 2nd edition of the book was an absolute treat and this edition promises to be no less. Here is a brief description of what’s new and updated from the 2nd edition. Part I – CLR Basics Chapter 1-The CLR’s Execution Model Added about discussion about C#’s /optimize and /debug switches and how they relate to each other. Chapter 2-Building, Packaging, Deploying, and Administering Applications and Types Improved discussion about Win32 manifest information and version resource information. Chapter 3-Shared Assemblies and Strongly Named Assemblies Added discussion of TypeForwardedToAttribute and TypeForwardedFromAttribute. Part II – Designing Types Chapter 4-Type Fundamentals No new topics. Chapter 5-Primitive, Reference, and Value Types Enhanced discussion of checked and unchecked code and added discussion of new BigInteger type. Also added discussion of C# 4.0’s dynamic primitive type. Chapter 6-Type and Member Basics No new topics. Chapter 7-Constants and Fields No new topics. Chapter 8-Methods Added discussion of extension methods and partial methods. Chapter 9-Parameters Added discussion of optional/named parameters and implicitly-typed local variables. Chapter 10-Properties Added discussion of automatically-implemented properties, properties and the Visual Studio debugger, object and collection initializers, anonymous types, the System.Tuple type and the ExpandoObject type. Chapter 11-Events Added discussion of events and thread-safety as well as showing a cool extension method to simplify the raising of an event. Chapter 12-Generics Added discussion of delegate and interface generic type argument variance. Chapter 13-Interfaces No new topics. Part III – Essential Types Chapter 14-Chars, Strings, and Working with Text No new topics. Chapter 15-Enums Added coverage of new Enum and Type methods to access enumerated type instances. Chapter 16-Arrays Added new section on initializing array elements. Chapter 17-Delegates Added discussion of using generic delegates to avoid defining new delegate types. Also added discussion of lambda expressions. Chapter 18-Attributes No new topics. Chapter 19-Nullable Value Types Added discussion on performance. Part IV – CLR Facilities Chapter 20-Exception Handling and State Management This chapter has been completely rewritten. It is now about exception handling and state management. It includes discussions of code contracts and constrained execution regions (CERs). It also includes a new section on trade-offs between writing productive code and reliable code. Chapter 21-Automatic Memory Management Added discussion of C#’s fixed state and how it works to pin objects in the heap. Rewrote the code for weak delegates so you can use them with any class that exposes an event (the class doesn’t have to support weak delegates itself). Added discussion on the new ConditionalWeakTable class, GC Collection modes, Full GC notifications, garbage collection modes and latency modes. I also include a new sample showing how your application can receive notifications whenever Generation 0 or 2 collections occur. 
Chapter 22-CLR Hosting and AppDomains Added discussion of side-by-side support allowing multiple CLRs to be loaded in a single process. Added section on the performance of using MarshalByRefObject-derived types. Substantially rewrote the section on cross-AppDomain communication. Added section on AppDomain Monitoring and first chance exception notifications. Updated the section on the AppDomainManager class. Chapter 23-Assembly Loading and Reflection Added section on how to deploy a single file with dependent assemblies embedded inside it. Added section comparing reflection invoke vs bind/invoke vs bind/create delegate/invoke vs C#’s dynamic type. Chapter 24-Runtime Serialization This is a whole new chapter that was not in the 2nd Edition. Part V – Threading Chapter 25-Threading Basics Whole new chapter motivating why Windows supports threads, thread overhead, CPU trends, NUMA Architectures, the relationship between CLR threads and Windows threads, the Thread class, reasons to use threads, thread scheduling and priorities, foreground threads vs background threads. Chapter 26-Performing Compute-Bound Asynchronous Operations Whole new chapter explaining the CLR’s thread pool. This chapter covers all the new .NET 4.0 constructs including cooperative cancellation, Tasks, the Parallel class, parallel language integrated query, timers, how the thread pool manages its threads, cache lines and false sharing. Chapter 27-Performing I/O-Bound Asynchronous Operations Whole new chapter explaining how Windows performs synchronous and asynchronous I/O operations. Then, I go into the CLR’s Asynchronous Programming Model, my AsyncEnumerator class, the APM and exceptions, applications and their threading models, implementing a service asynchronously, the APM and compute-bound operations, APM considerations, I/O request priorities, converting the APM to a Task, the event-based Asynchronous Pattern, programming model soup. Chapter 28-Primitive Thread Synchronization Constructs Whole new chapter discussing class libraries and thread safety, primitive user-mode and kernel-mode constructs, and data alignment. Chapter 29-Hybrid Thread Synchronization Constructs Whole new chapter discussing various hybrid constructs such as ManualResetEventSlim, SemaphoreSlim, CountdownEvent, Barrier, ReaderWriterLock(Slim), OneManyResourceLock, Monitor, 3 ways to solve the double-check locking technique, .NET 4.0’s Lazy and LazyInitializer classes, the condition variable pattern, .NET 4.0’s concurrent collection classes, the ReaderWriterGate and SyncGate classes.

    Read the article

  • DBCC CHECKDB on VVLDB and latches (Or: My Pain is Your Gain)

    - by Argenis
      Does your CHECKDB hurt, Argenis? There is a classic blog series by Paul Randal [blog|twitter] called “CHECKDB From Every Angle” which is pretty much mandatory reading for anybody who’s even remotely considering going for the MCM certification, or its replacement (the Microsoft Certified Solutions Master: Data Platform – makes my fingers hurt just from typing it). Of particular interest is the post “Consistency Options for a VLDB” – on it, Paul provides solid, timeless advice (I use the word “timeless” because it was written in 2007, and it all applies today!) on how to perform checks on very large databases. Well, here I was trying to figure out how to make CHECKDB run faster on a restored copy of one of our databases, which happens to exceed 7TB in size. The whole thing was taking several days on multiple systems, regardless of the storage used – SAS, SATA or even SSD…and I actually didn’t pay much attention to how long it was taking, or even bother to look at the reasons why - as long as it was finishing okay and found no consistency errors. Yes – I know. That was a huge mistake, as corruption found in a database several days after taking place could only allow for further spread of the corruption – and potentially large data loss. In the last two weeks I increased my attention towards this problem, as we noticed that CHECKDB was taking EVEN LONGER on brand new all-flash storage in the SAN! I couldn’t really explain it, and we were almost ready to blame the storage vendor. The vendor told us that they could initially see the server driving decent I/O – around 450Mb/sec, and then it would settle at a very slow rate of 10Mb/sec or so. “Hum”, I thought – “CHECKDB is just not pushing the I/O subsystem hard enough”. Perfmon confirmed the vendor’s observations. Dreaded @BlobEater What was CHECKDB doing all the time while doing so little I/O? Eating Blobs. It turns out that CHECKDB was taking an extremely long time on one of our frankentables, which happens to have 35 billion rows (yup, with a b) and sucks up several terabytes of space in the database. We do have a project ongoing to purge/split/partition this table, so it’s just a matter of time before we deal with it. But the reality today is that CHECKDB is coming to a screeching halt in performance when dealing with this particular table. Checking sys.dm_os_waiting_tasks and sys.dm_os_latch_stats showed that LATCH_EX (DBCC_OBJECT_METADATA) was by far the top wait type. I remembered hearing recently about that wait from another post that Paul Randal made, but that was related to computed-column indexes, and in fact, Paul himself reminded me of his article via twitter. But alas, our pathological table had no non-clustered indexes on computed columns. I knew that latches are used by the database engine to do internal synchronization – but how could I help speed this up? After all, this is stuff that doesn’t have a lot of knobs to tweak. (There’s a fantastic level 500 talk by Bob Ward from Microsoft CSS [blog|twitter] called “Inside SQL Server Latches” given at PASS 2010 – and you can check it out here. DISCLAIMER: I assume no responsibility for any brain melting that might ensue from watching Bob’s talk!) 
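      For reference, a minimal sketch of the kind of latch check described above – the DMV and its columns (sys.dm_os_latch_stats) are real, but the TOP/ordering choice is my own assumption about what to look at first:

      SELECT TOP (10)
             latch_class,              -- e.g. DBCC_OBJECT_METADATA
             waiting_requests_count,
             wait_time_ms,             -- cumulative wait time since the stats were last cleared
             max_wait_time_ms
      FROM sys.dm_os_latch_stats
      ORDER BY wait_time_ms DESC;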
Failed Hypotheses Earlier this week I flew down to Palo Alto, CA, to visit our Headquarters – and after having a great time with my Monkey peers, I was relaxing on the plane back to Seattle watching a great talk by SQL Server MVP and fellow MCM Maciej Pilecki [twitter] called “Masterclass: A Day in the Life of a Database Transaction” where he discusses many different topics related to transaction management inside SQL Server. Very good stuff, and when I got home it was a little late – that slow DBCC CHECKDB that I had been dealing with was way in the back of my head. As I was looking at the problem at hand earlier this week, I thought “How about I set the database to read-only?” I remembered one of the things Maciej had (jokingly) said in his talk: “if you don’t want locking and blocking, set the database to read-only” (or something to that effect, pardon my loose memory). I immediately killed the CHECKDB which had been running painfully for days, and set the database to read-only mode. Then I ran DBCC CHECKDB against it. It started going really fast (even a bit faster than before), and then throttled down again to around 10Mb/sec. All sorts of expletives went through my head at the time. Sure enough, the same latching scenario was present. Oh well. I even spent some time trying to figure out if NUMA was hurting performance. Folks on Twitter made suggestions in this regard (thanks, Lonny! [twitter]) …Eureka? This past Friday I was still scratching my head about the whole thing; I was ready to start profiling with XPERF to see if I could figure out which part of the engine was to blame and then get Microsoft to look at the evidence. After getting a bunch of good news I’ll blog about separately, I sat down for a figurative smack down with CHECKDB before the weekend. And then the light bulb went on. A sparse column. I thought that I couldn’t possibly be experiencing the same scenario that Paul blogged about back in March showing extreme latching with non-clustered indexes on computed columns. Did I even have a non-clustered index on my sparse column? As it turns out, I did. I had one filtered non-clustered index – with the sparse column as the index key (and only column). To prove that this was the problem, I went and set up a test. Yup, that'll do it The repro is very simple for this issue: I tested it on the latest public builds of SQL Server 2008 R2 SP2 (CU6) and SQL Server 2012 SP1 (CU4). First, create a test database and a test table, which only needs to contain a sparse column:

CREATE DATABASE SparseColTest;
GO
USE SparseColTest;
GO
CREATE TABLE testTable (testCol smalldatetime SPARSE NULL);
GO
INSERT INTO testTable (testCol) VALUES (NULL);
GO 1000000

That’s 1 million rows, and even though you’re inserting NULLs, that’s going to take a while. On my laptop, it took 3 minutes and 31 seconds. Next, we run DBCC CHECKDB against the database:

DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

This runs extremely fast, at least on my test rig – 198 milliseconds. Now let’s create a filtered non-clustered index on the sparse column:

CREATE NONCLUSTERED INDEX [badBadIndex] ON testTable (testCol) WHERE testCol IS NOT NULL;

With the index in place now, let’s run DBCC CHECKDB one more time:

DBCC CHECKDB('SparseColTest') WITH NO_INFOMSGS, ALL_ERRORMSGS;

In my test system this statement completed in 11433 milliseconds. 11.43 full seconds. Quite the jump from 198 milliseconds. 
I went ahead and dropped the filtered non-clustered indexes on the restored copy of our production database, and ran CHECKDB against that. We went down from 7+ days to 19 hours and 20 minutes. Cue the “Argenis is not impressed” meme, please, Mr. LaRock. My pain is your gain, folks. Go check whether you have any such indexes – they’re likely causing your consistency checks to run very, very slow. Happy CHECKDBing, -Argenis ps: I plan to file a Connect item for this issue – I consider it a pretty serious bug in the engine. After all, filtered indexes were invented BECAUSE of the sparse column feature – and it makes a lot of sense to use them together. Watch this space and my twitter timeline for a link.
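As a starting point for that check, here is a hedged sketch – the query shape is an illustration of mine, not from the original post – that lists filtered non-clustered indexes whose key includes a sparse column:

SELECT o.name AS table_name,
       i.name AS index_name,
       c.name AS column_name,
       i.filter_definition
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
JOIN sys.objects AS o
  ON o.object_id = i.object_id
WHERE i.has_filter = 1     -- filtered index
  AND c.is_sparse = 1;     -- sparse column participates in the index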

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms: our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
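As a hedged illustration – the session name and event are placeholders, not a recommendation – lowering the dispatch latency looks like this:

CREATE EVENT SESSION [impatient_dba] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer              -- an asynchronous target, so latency applies
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);     -- process buffers at least every 5 seconds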
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, the engine will drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time check the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.) 
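A minimal sketch of the DMV checks described in this section (the column names are real; which sessions you care about is up to you):

SELECT name,
       dropped_event_count,          -- events lost because all buffers were full
       largest_event_dropped_size    -- compare against your buffer size (max_memory / 3 by default)
FROM sys.dm_xe_sessions;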
Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:

None: 3 buffers (fixed)
Node: 3 * number_of_nodes
CPU: 2.5 * number_of_cpus

Here are some examples of what this means for different Node/CPU counts:

Configuration       None        Node        CPU
2 CPUs, 1 Node      3 buffers   3 buffers   5 buffers
6 CPUs, 2 Nodes     3 buffers   6 buffers   15 buffers
40 CPUs, 5 Nodes    3 buffers   15 buffers  100 buffers

Aside: Buffer size on multi-processor computers: As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace. 
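In DDL terms, turning on per-CPU partitioning is a single option setting. A hedged sketch follows – the session name, event and memory number are illustrative, and package0.event_file is the SQL Server 2012 name for the file target (on 2008 it was asynchronous_file_target):

CREATE EVENT SESSION [busy_server] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file (SET filename = N'busy_server.xel')
WITH (MAX_MEMORY = 16384 KB,            -- more total memory, since it is split across more buffers
      MEMORY_PARTITION_MODE = PERCPU);  -- one buffer set per CPU (PERNODE and NONE are the alternatives)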
You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike

    Read the article

< Previous Page | 1 2 3  | Next Page >