Search Results

Search found 56 results on 3 pages for 'bol'.

Page 2 of 3

  • SQL Server 2008 R2 Installation and the Phantom of SQL Server 2005 Express

    - by Davide Mauri
    Today I happily started to install SQL Server 2008 R2 on my development machine, which has this software installed: Windows Server 2008 R2 Standard, SQL Server 2008 SP1 CU5, Visual Studio 2008 SP1, BOL October 2009, AdventureWorks2008 Databases SR4, Visual Studio 2010 RTM. So, all the basic standard stuff. The SQL Server 2008 R2 installation went smoothly until somewhere in the middle, where the rule engine checks that software prerequisites are satisfied before starting to copy files. Here I had this @][@@[?!?! error: “The SQL Server 2005 Express Tools are installed. To continue, remove the SQL Server 2005 Express Tools.” Funnily enough, I don’t have, and have never had, SQL Server 2005 Express on my machine. Armed with patience, I analyzed the install log at C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\yyyymmdd_hhmmss\Detail.txt and found that the rule “Sql2005SsmsExpressFacet” is the one in charge of this check: it looks for the existence of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x86) or HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x64). In my registry that key did exist, due to the installation of the uber-cool Red Gate SQL Search. I removed the registry key and here it is! SQL Server 2008 R2 is installing while I’m writing this post. A note to Microsoft: can you please add more detailed information to the setup when such an error happens? Just saying “you have SQL Server 2005 Express installed” is not enough. Please show us what the rule looks for and why it failed directly in the Detailed Report, so that we don’t have to hunt for the needle in the logs. Thanks! :) PS: I did a side-by-side installation with the existing SQL Server 2008 instance.
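
    If you want to check for the phantom key before (or instead of) reaching for regedit, here is a sketch using xp_cmdshell and reg.exe; it assumes xp_cmdshell is enabled, and the same reg.exe commands can be run directly from cmd.exe. The backup path is illustrative:

        -- Inspect the key the Sql2005SsmsExpressFacet rule checks
        EXEC master.dbo.xp_cmdshell
            'reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM"';          -- x86
        EXEC master.dbo.xp_cmdshell
            'reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM"'; -- x64
        -- Back the key up first, so it can be restored for tools (e.g. SQL Search) that need it
        EXEC master.dbo.xp_cmdshell
            'reg export "HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM" C:\ShellSEM-backup.reg';
        -- The actual removal, left commented out on purpose:
        -- EXEC master.dbo.xp_cmdshell 'reg delete "HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM" /f';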

    Read the article

  • Making a Statement: How to retrieve the T-SQL statement that caused an event

    - by extended_events
    If you’ve done any troubleshooting of T-SQL, you know that sooner or later, probably sooner, you’re going to want to take a look at the actual statements you’re dealing with. In extended events we offer an action (see the BOL topic that covers Extended Events Objects for a description of actions) named sql_text that seems like it is just the ticket. Well… not always – sounds like a good reason for a blog post. When is a statement not THE statement? The sql_text action returns the same information that is returned from DBCC INPUTBUFFER, which may or may not be what you want. For example, if you execute a stored procedure, the sql_text action will return something along the lines of “EXEC sp_notwhatiwanted”, assuming that is the statement you sent from the client. Oftentimes folks would like something more specific, like the actual statements that are being run from within the stored procedure or batch. Enter the stack. Extended events offers another action, this one with the descriptive name of tsql_stack, that includes the sql_handle and offset information about the statements being run when an event occurs. With the sql_handle and offset values you can retrieve the specific statement you seek using dm_exec_sql_text. The BOL topic for dm_exec_sql_text provides an example for how to extract this information, so I’ll cover the gymnastics required to get the sql_handle and offset values out of the tsql_stack data collected by the action. I’m the first to admit that this isn’t pretty, but this is what we have in SQL Server 2008 and 2008 R2. We will be making it easier to get statement-level information in the next major release of SQL Server.

    The sample code. For this example I have a stored procedure that includes multiple statements and I have a need to differentiate between those two statements in my tracing. I’m going to track two events: module_end tracks the completion of the stored procedure execution and sp_statement_completed tracks the execution of each statement within a stored procedure. I’m adding the tsql_stack action (since that’s the topic of this post) and the sql_text action for comparison’s sake. (If you have questions about creating event sessions, check out Pedro’s post Introduction to Extended Events.)

        USE AdventureWorks2008
        GO

        -- Test SP
        CREATE PROCEDURE sp_multiple_statements
        AS
        SELECT 'This is the first statement'
        SELECT 'this is the second statement'
        GO

        -- Create a session to look at the sp
        CREATE EVENT SESSION track_sprocs ON SERVER
        ADD EVENT sqlserver.module_end
            (ACTION (sqlserver.tsql_stack, sqlserver.sql_text)),
        ADD EVENT sqlserver.sp_statement_completed
            (ACTION (sqlserver.tsql_stack, sqlserver.sql_text))
        ADD TARGET package0.ring_buffer
        WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
        GO

        -- Start the session
        ALTER EVENT SESSION track_sprocs ON SERVER
        STATE = START
        GO

        -- Run the test procedure
        EXEC sp_multiple_statements
        GO

        -- Stop collection of events but maintain ring buffer
        ALTER EVENT SESSION track_sprocs ON SERVER
        DROP EVENT sqlserver.module_end,
        DROP EVENT sqlserver.sp_statement_completed
        GO

    Aside: Altering the session to drop the events is a neat little trick that allows me to stop collection of events while keeping in-memory targets such as the ring buffer available for use. If you stop the session, the in-memory target data is lost. Now that we’ve collected some events related to running the stored procedure, we need to do some processing of the data.
    I’m going to do this in multiple steps using temporary tables so you can see what’s going on; kind of like having to “show your work” on a math test. The first step is to just cast the target data into XML so I can work with it. After that you can pull out the interesting columns; for our purposes I’m going to limit the output to just the event name, object name, stack and sql text. You can see that I’ve done a second CAST, this time of the tsql_stack column, so that I can further process this data.

        -- Store the XML data to a temp table
        SELECT CAST(t.target_data AS XML) xml_data
        INTO #xml_event_data
        FROM sys.dm_xe_sessions s
            INNER JOIN sys.dm_xe_session_targets t
                ON s.address = t.event_session_address
        WHERE s.name = 'track_sprocs'

        SELECT * FROM #xml_event_data

        -- Parse the column data out of the XML block
        SELECT
            event_xml.value('(./@name)', 'varchar(100)') as [event_name],
            event_xml.value('(./data[@name="object_name"]/value)[1]', 'varchar(255)') as [object_name],
            CAST(event_xml.value('(./action[@name="tsql_stack"]/value)[1]', 'varchar(MAX)') as XML) as [stack_xml],
            event_xml.value('(./action[@name="sql_text"]/value)[1]', 'varchar(max)') as [sql_text]
        INTO #event_data
        FROM #xml_event_data
            CROSS APPLY xml_data.nodes('//event') n (event_xml)

        SELECT * FROM #event_data

        event_name  object_name  stack_xml  sql_text
        sp_statement_completed  NULL  <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="4" offsetStart="94" offsetEnd="172" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" />  EXEC sp_multiple_statements
        sp_statement_completed  NULL  <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="6" offsetStart="174" offsetEnd="-1" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" />  EXEC sp_multiple_statements
        module_end  sp_multiple_statements  <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="0" offsetStart="0" offsetEnd="0" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" />  EXEC sp_multiple_statements

    After parsing the columns it’s easier to see what is recorded. You can see that I got back two sp_statement_completed events, which makes sense given the test procedure I’m running, and I got back a single module_end for the entire statement. As described, the sql_text isn’t telling me what I really want to know for the first two events, so a little extra effort is required.

        -- Parse the tsql stack information into columns
        SELECT
            event_name,
            object_name,
            frame_xml.value('(./@level)', 'int') as [frame_level],
            frame_xml.value('(./@handle)', 'varchar(MAX)') as [sql_handle],
            frame_xml.value('(./@offsetStart)', 'int') as [offset_start],
            frame_xml.value('(./@offsetEnd)', 'int') as [offset_end]
        INTO #stack_data
        FROM #event_data
            CROSS APPLY stack_xml.nodes('//frame') n (frame_xml)

        SELECT * from #stack_data

        event_name  object_name  frame_level  sql_handle  offset_start  offset_end
        sp_statement_completed  NULL  1  0x03000500D0057C1403B79600669D00000100000000000000  94  172
        sp_statement_completed  NULL  2  0x01000500CF3F0331B05EC084000000000000000000000000  0  -1
        sp_statement_completed  NULL  1  0x03000500D0057C1403B79600669D00000100000000000000  174  -1
        sp_statement_completed  NULL  2  0x01000500CF3F0331B05EC084000000000000000000000000  0  -1
        module_end  sp_multiple_statements  1  0x03000500D0057C1403B79600669D00000100000000000000  0  0
        module_end  sp_multiple_statements  2  0x01000500CF3F0331B05EC084000000000000000000000000  0  -1

    Parsing out the stack information doubles the fun and I get two rows for each event. If you examine the stack from the previous table, you can see that each stack has two frames and my query is parsing each event into frames, so this is expected. There is nothing magic about the two frames; that’s just how many I get for this example – it could be fewer or more depending on your statements. The key point here is that I now have a sql_handle and the offset values for those handles, so I can use dm_exec_sql_text to get the actual statement. Just a reminder: this DMF can only return what is in the cache – if you have old data it’s possible your statements have been ejected from the cache. “Old” is a relative term when talking about caches and can be impacted by server load and how often your statement is actually used. As with most things in life, your mileage may vary.

        SELECT
            qs.*,
            SUBSTRING(st.text, (qs.offset_start/2)+1,
                ((CASE qs.offset_end
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.offset_end
                  END - qs.offset_start)/2) + 1) AS statement_text
        FROM #stack_data AS qs
        CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max), sql_handle, 1)) AS st

        event_name  object_name  frame_level  sql_handle  offset_start  offset_end  statement_text
        sp_statement_completed  NULL  1  0x03000500D0057C1403B79600669D00000100000000000000  94  172  SELECT 'This is the first statement'
        sp_statement_completed  NULL  1  0x03000500D0057C1403B79600669D00000100000000000000  174  -1  SELECT 'this is the second statement'
        module_end  sp_multiple_statements  1  0x03000500D0057C1403B79600669D00000100000000000000  0  0  C

    Now that looks more like what we were after: the statement_text field is showing the actual statement being run when the sp_statement_completed event occurs. You’ll notice that it’s back down to one row per event – what happened to frame 2? The short answer is, “I don’t know.” In SQL Server 2008 nothing is returned from dm_exec_sql_text for the second frame and I believe this to be a bug; this behavior has changed in the next major release and I see the actual statement run from the client in frame 2. (In other words, I see the same statement that is returned by the sql_text action or DBCC INPUTBUFFER.) There is also something odd going on with frame 1 returned from the module_end event: you can see that the offset values are both 0 and only the first letter of the statement is returned.
    It seems like the offset_end should actually be –1 in this case and I’m not sure why it’s not being returned correctly. This behavior is being investigated and will hopefully be corrected in the next major version. You can work around this final oddity by ignoring the offsets and just returning the entire cached statement.

        SELECT
            event_name,
            sql_handle,
            ts.text
        FROM #stack_data
            CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max), sql_handle, 1)) as ts

        event_name  sql_handle  text
        sp_statement_completed  0x0300070025999F11776BAF006F9D00000100000000000000  CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
        sp_statement_completed  0x0300070025999F11776BAF006F9D00000100000000000000  CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
        module_end  0x0300070025999F11776BAF006F9D00000100000000000000  CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'

    Obviously this gives more than you want for the sp_statement_completed events, but it’s the right information for module_end. I leave it to you to determine when this information is needed and to use the workaround when appropriate. Aside: You might think it’s odd that I’m showing apparent bugs in my samples, but you’re going to see this behavior if you use this method, so you need to know about it. I’m all about transparency. Happy Eventing – Mike

    Read the article

  • The free new Oracle Data Miner 11gR2, with graphical workflow, is available to try

    - by Fekete Zoltán
    Oracle Data Mining technology information page. Oracle Data Miner 11g Release 2 – Early Adopter page. Oracle Data Miner 11gR2, the new graphical interface for Oracle Data Mining (Oracle's in-database data mining), has been released and is available to download and try. To get Oracle Data Miner you simply download SQL Developer, since the data mining interface is launched from there. Oracle Data Mining is the data mining engine embedded in the Oracle database server, an option of Oracle Database Enterprise Edition. Data mining is a sophisticated tool and process for analyzing data warehouses. Oracle Data Mining delivers the advantages of in-database mining: there is no unnecessary data movement, as the data stays in the database throughout the entire data mining process; the data mining models themselves live in the Oracle database; and the data mining results, cluster data, decisions, probabilities, etc. are likewise produced in the database, where they can be analyzed directly. The new free Data Miner interface has been greatly enriched compared to the previous version: graphical editing and execution of data mining workflows has arrived; it is still free; the interface has been expanded; new analysis capabilities have been added; and it can be launched from SQL Developer 3.0, which also makes it easy to invoke the data mining functions from the database when you do not want to use the graphical interface. The free Data Miner interface is available as an extension of Oracle SQL Developer, so analysts can work directly with the data in the database and with the Data Miner graphical interface as well: they can build, evaluate and run models, make and analyze predictions, with support for implementing the data mining methodology. The previous Oracle Data Miner interface now goes by the name Data Miner Classic and can still be downloaded from OTN. (A screenshot of the new Data Miner GUI appears in the original post.) What kinds of tasks does Oracle Data Mining address: predicting customer behavior; effectively targeting the "best" customers; customer retention and churn management; finding and examining customer segments, clusters and profiles; detecting anomalies and fraud; etc.

    Read the article

  • First birthday – new content

    - by Lajos Sárecz
    I started writing this blog about a year ago, at the beginning of June. Over the past year the blog has had 3,395 unique visitors, and visitors spent an average of 59 seconds here per visit. It must also be admitted that the bounce rate is quite high (83.5%), i.e. fewer than a fifth of those who wander in find the content useful. One important reason for this may be that searches on IT terms bring my blog up in search engines for foreigners too, and only after opening the blog do they notice that they do not understand Hungarian :-). The map in the original post shows well that people find their way here from many parts of the world (some, by the way, use a web translation tool to overcome the language barrier), but it is also clear that the target audience is being reached, i.e. the majority of visits come from Hungary (3,505 visits out of the total 5,082). Interestingly, most visitors found their way here searching for "vb tippjáték" (a World Cup prediction game, 110 visitors), followed by "oracle junior képzés" (Oracle junior training, 42), then "kyte" (40) (they were not necessarily thinking of Tom Kyte :-)). The first real IT search is only in fourth place: "oracle workflow" (37), then "oracle enterprise manager" (33). The blog also owes its daily visitor record (76) to those searching for the prediction game... For the first birthday I tried to make some improvements. In the new menu bar visible under the header, I am trying to collect useful pages related to the blog's topic that may not be so easy to find on the web. I plan to expand this further, perhaps even creating a separate page behind the blog for this purpose. I hope this helps the blog readers in their work.

    Read the article

  • AdvanceTimePolicy and Point Event Streams In StreamInsight.

    There are a number of ways to issue CTIs (Current Time Increments) into your StreamInsight streams, but quite a useful way is to do it declaratively on your source factory like this:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(InputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)),
                AdvanceTimePolicy.Adjust);
        }

    This will issue a CTI after every event and allows no delay (for delayed events) by stamping the CTI with the timestamp of the last event minus 1 tick. The very last argument, "AdvanceTimePolicy.Adjust", tells the adapter what to do with events that violate the policy (arrive late). From BOL: "Events that violate the inserted CTI are moved in time if their lifetime overlaps with the CTI timestamp. That is, the start timestamp of the events is set to the most recent CTI timestamp, which renders those events valid. If both start and end time of an event fall before the CTI timestamp, then the event is dropped." This means that if you are using this method of inserting CTIs for a Point event stream and have specified "AdvanceTimePolicy.Adjust" for the violation policy, this setting will be ignored and "AdvanceTimePolicy.Drop" will be used instead, because a Point event can never straddle a CTI.

    Read the article

  • Is this table replicated?

    - by fatherjack
    Another in the potentially quite sporadic series of "I need to do ... but I can't find it on the internet". I have a table that I think might be involved in replication but I don't know which publication it's in... We know the table name – 'MyTable'. We have replication running on our server and it's replicating our database, or part of it – 'MyDatabase'. We need to know if the table is replicated and, if so, which publication is going to need to be reviewed if we make changes to the table. How?

        USE MyDatabase
        GO

        /* Lots of info about our table but not much that's relevant to our current requirements */
        SELECT * FROM sysobjects WHERE NAME = 'MyTable'

        -- mmmm, getting there
        /* To quote BOL: "Contains one row for each merge article defined in the local database.
           This table is stored in the publication database." The interesting column is [pubid] */
        SELECT * FROM dbo.sysmergearticles AS s WHERE NAME = 'MyTable'

        -- really close now
        /* The sysmergepublications table: "Contains one row for each merge publication defined in
           the database. This table is stored in the publication and subscription databases."
           So this would be where we get the publication details */
        SELECT * FROM dbo.sysmergepublications AS s WHERE s.pubid = '2876BBD8-3D4E-4ED8-88F3-581A659E8144'

        -- DONE IT.
        /* Combine the two tables above and we get the information we need */
        SELECT s.[name] AS [Publication name]
        FROM dbo.sysmergepublications AS s
            INNER JOIN dbo.sysmergearticles AS s2
                ON s.pubid = s2.pubid
        WHERE s2.NAME = 'MyTable'

    So I now know which

    Read the article

  • CodePlex Daily Summary for Tuesday, March 16, 2010

    New Projects
    - AAPL-MySQL (a MySql implementation of the Agile ADO.Net Persistence Layer): Using the code conventions and code of the Agile ADO.Net Persistence Layer to build a MySql implementation
    - Address Book: Address Book is a simple and easy to use application for storing and editing contacts. It has many features like search and grouping. It's developed ...
    - Airplanes: Airplanes Game
    - AJAX Fax document viewer: The AJAX Fax document viewer is an ASP.net 4.0 project which uses a Seadragon control to display Tiff/Tif files inside the browser. Since it's base...
    - AxeFrog Core: This project contains the foundational code used in several other projects.
    - Bolão: The goal of this project is to stimulate the exchange of information and experience about software architecture and development in practice, through the d...
    - Caramel Engine: This is designed to be a logic engine for developing a wide variety of games, to include card logic, dice logic, territories, players, and even man...
    - DotNetNuke® Skin BrandWorks: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by M Perakakis & M Siganou (e-bilab). A very minimal look sk...
    - DotNetNuke® Skin Reasonable: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Ralph Williams of Architech Solutions. A clean, classy, professi...
    - DotNetNuke® Skin Seasons: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Atul Handa and Kevin McCusker of Mercer. This skin is a generic ...
    - File Eraser: File Eraser makes it easier for IT administrators or advanced users to administer files and eliminate long addresses, even UNC. It's developed in VB...
    - GreenEyes: My project for IT personal
    - Hulu Launcher: Hulu Launcher is a simple Windows Media Center add-in that attempts to launch Hulu Desktop and manage the windows as seamlessly as possible.
    - Imenik: Imenik makes it easier for users to organise their contacts!
    - KharaPOS: KharaPOS is an exemplar application for Silverlight 4, WCF RIA Services and .NET Framework 4.
    - Mantas Cryptography: A small cryptography library with support for the DES, RC2, Rexor and TripleDES algorithms. Generates HMAC-MD5, HMAC-RIPEMD160, HMAC-SHA (SHA1, S...
    - MapWindow3D: A C# DirectX library that extends the MapWindow 6.0 geospatial software library by adding an external map. The map supports rotation and tilting i...
    - Microsoft Silverlight Analytics Framework: Extensible Web Analytics Framework for Microsoft Silverlight Applications.
    - Moq Examples: Unit tests demonstrating Moq features.
    - Ncqrs: Framework that helps you to create Command-Query Responsibility Segregation based applications easily.
    - NoLedge - Notetaking and knowledge database: NoLedge is an easy knowledge gathering and notetaking freeware with a simple interface. You can link a note with different titles and can retrieve ...
    - Numina Application Framework: The framework is a set of tools that help facilitate authentication, authorization, and access control. It is much more than a SSO. It is a central...
    - OData SDK for PHP: OData SDK for PHP is a library to facilitate the connection with OData Services.
    - patterns & practices - Windows Azure Guidance: p&p site for Windows Azure related guidance.
    - projecteuler.net: Exploring projecteuler.net using F#
    - ResumeTracker: Small and easy to use tool that helps to track your job applications. To report bugs or for suggestions please email kirunchik@gmail.com.
    - Selection Maker: Have you ever created a collection of music files? Imagine you just want to pick the best of them by your choice. You should go to several folders, play...
    - ShureValidation: Multilingual model validation library. Supports fluent validations, attribute validations and own custom validations.
    - Simple Phonebook: Simple Phonebook allows you to store contacts. It also allows you to export the contacts to .txt or .csv. This application is written in C#, but it...
    - SoftUpd: A useful library which provides an Update feature to any .Net software.
    - sTASKedit: This program can modify and export Perfect World tasks.data files...
    - Tigie: Tigie is a simple CMS system for basic websites. It's simple, easy to customize. You'll have a very basic CMS to start with and expand on. All c...
    - toapp: ap hw
    - UltiLogger: UltiLogger is a fast, lightweight logging library programmed in C#. It is meant as a fast, easy, and efficient way for developers to access a relia...
    - Unnme: Unnme
    - Visual Studio 2008 NUnit snippets: A simple set of useful NUnit snippets, for Visual Studio 2008.
    - webdama: Italian checkers game in C#
    - XBrowser - Headless Browser for .Net: XBrowser is a "headless" web browser written for .Net applications using C#. It is designed to allow automated, remote controlled and test-based br...
    - XML Integrator: XML integration, collaborative tool

    New Releases
    - Address Book: Address Book: Address Book
    - Address Book: Address Book - Source: Address Book source code.
    - AJAX Fax document viewer: AJAXTiff_Source 1.0: Source project for the AJAX Tiff viewer v1.0. Written in Visual Studio 2010 RC using ASP.net 4.0
    - ASP.Net Routing Configuration: mal.Web.Routing v1.0.0.0: mal.Web.Routing v1.0.0.0
    - ASP.NET Wiki Control: Release 1.0: Includes VS2010 Solution and Project files but targets the 3.5 framework so can still be used with VS2008 if new project files are created.
    - BingPaper: Beta: This Beta release contains quite a few improvements, including a complete overhaul of the replacement tok...
    - Bolão: teste: teste
    - DotNetNuke® Skin Reasonable: Reasonable Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Ralph Williams of Architech Solutions. A clean, classy, professi...
    - DotNetNuke® Skin Seasons: Seasons Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Atul Handa and Kevin McCusker of Mercer. This skin is a generic ...
    - dylan.NET: dylan.NET v. 9.2: In TFS Server
    - dylan.NET Apps: dylan.NET Apps v. 1.1: First version of dnu.dll together with v.9.2 of dylan.NET
    - Family Tree Analyzer: Version 1.1.0.0: Census report now shows in bold those individuals you KNOW to be alive at the date of the census. Direct Ancestors on census repor...
    - GLB Virtual Player Builder: 0.4.1: Minor change to reset non-major/minor attr to 8.
    - Hulu Launcher: HuluLauncher Release 1.0.1.1: HuluLauncher Release 1.0.1.1 is the initial, barely-tested release of this Windows Media Center add-in. It should work in Vista Media Center and 7 ...
    - Imenik: Imenik: Imenik is now available!
    - jQuery Library for SharePoint Web Services: SPServices 0.5.3: NOTE: While I work on new releases, I post alpha versions. Usually the alpha versions are here to address a particular need. I DO NOT recommend usi...
    - KeelKit: KeelKit 1.0.3800: Updates are as follows: optimized some DBHelper mechanisms; fixed some bugs; added support for a MySQL PHP proxy, so the database server can be accessed remotely through a web proxy; added Model instantiation methods supporting parameter instantiation of all non-computed fields and of all non-null fields; added constants to Model through which the table name can be obtained; added...
    - Managed Extensibility Framework (MEF) Contrib: MefContrib 0.9.0.0: Updated to MEF Preview 9. Moved MefContrib.Extensions.Generics to MefContrib.Hosting.Generics. Moved MefContrib.Extensions.Generics.Tests to MefC...
    - MooiNooi MVC2LINQ2SQL Web Databinder: MooiNooi MVC2LINQ2SQL Web Databinder v0.1.1: Repaired a problem with collections... only index numbers under 10 were allowed... Please send me your comments and rate the project. Sorry.
    - Mouse Gestures for .NET: Mouse Gestures 1.0: Improved version of the component + sample application. Version 1.0 is not backward compatible.
    - MoviesForMyBlog: MoviesForMyBlog V1.0: This is version 1.0
    - Nito.KitchenSink: Version 1: The first release of Nito.KitchenSink, which uses Nito.Linq 0.1. Please report any issues via the Issue Tracker.
    - Nito.LINQ: Beta (v0.1): This is the first official public release of Nito.Linq. This release only supports .NET 3.5 SP1 with the Microsoft Rx libraries. The documentation...
    - Nito.LINQ: Beta (v0.2): Added ListSource.Generate overloads that take a delegate for counting the list elements.
    - OData SDK for PHP: OData SDK for PHP: This is an updated version of the ADO.NET Data Services toolkit for PHP. It includes full support for the OData Protocol V2.0 specification, better pro...
    - Orchard Project: Orchard Latest Build (0.1.2010.0312): This is a technical preview release of Orchard. Our intent with this release is to demonstrate representative experiences for our audiences (end-us...
    - patterns & practices – Enterprise Library: Enterprise Library 5.0 - Beta2: This is a preliminary release of the code and documentation that may potentially be incomplete, and may change prior to the final release of Enterp...
    - patterns & practices - Unity: Unity 2.0 - Beta2: This is a preliminary release of the code and documentation that may potentially be incomplete, and may change prior to the final release of Unity ...
    - patterns & practices - Windows Azure Guidance: Code drop - 1: This initial version is from before the cloud baseline application, so you won't find anything related to Windows Azure here. Next iteration, we'l...
    - Rawr: Rawr 2.3.12: First, a note about Rawr3. Rawr3 has been in development for quite a while now, and we know that everyone's eager to get it. It's been held back ...
    - ResumeTracker: Resume Tracker v1.0: First release.
    - Selection Maker: Selection Maker 1.0: This is just the first release of this program
    - SevenZipSharp: SevenZipSharp 0.61: Added: Windows Mobile support; a bool Check() method for Extractor to test archive integrity; FileExtractionFinished now returns FileInfoEventArgs...
    - Silverlight 3.0 Advanced ToolTipService: Advanced ToolTipService v2.0.1: This release is compiled against the Silverlight 3.0 runtime. A demonstration on how to set the ToolTip content to a property of the DataContext o...
    - Silverlight Flow Layouts library: SL and WPF Flow Layouts library March 2010: This release introduces some bug fixes, performance improvements, and Silverlight 4 RC and WPF 4.0 RC support. Flow Layouts Library is a control libra...
    - Simple Phonebook: SimplePhonebook Visual Studio 2010 Solution: This is the complete project containing all the source files I used in building this application. Visual Studio 2010 is required to run it....
    - Simple Phonebook: SimplePhonebook.rar: In this .rar file you can find the .exe file needed for executing the application.
    - SLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit 1.0.1.0: Updated to the Silverlight 4 Release Candidate. Introduces the new GenericMarkerDetector which uses the IXrgbReader interface. See the Marker Detecto...
    - sPWadmin: pwAdmin v1.0: Fixed: Templates can now be saved persistent across server restarts (wait at least 60 seconds between saving and restarting)
    - SQL Director for Dependencies & Indexes: SDD CTP 1.0: SQL Director for Dependencies allows you to view dependencies between tables, views, functions, stored procedures and jobs. Newest testing build, f...
    - SqlCeViewer: SeasonStar Database Management 0.7.0.2: Updates the user interface to help users understand clearly how to use ...
    - UltiLogger: Initial alpha release: Important! This is not a feature-complete release! It contains the logging priorities, and an interface for building logging systems from. THERE IS...
    - Visual Studio 2008 NUnit snippets: Version 1.0: First stable release.
    - Zeta Resource Editor: Source Code Release 2010-03-16: New source code. Binary setup is also available.

    Most Popular Projects
    - MetaSharp
    - WBFS Manager
    - Rawr
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - ASP.NET Ajax Library
    - Windows Presentation Foundation (WPF)
    - ASP.NET
    - LiveUpload to Facebook

    Most Active Projects
    - LINQ to Twitter
    - OData SDK for PHP
    - Rawr
    - N2 CMS
    - patterns & practices – Enterprise Library
    - DirectQ
    - BlogEngine.NET
    - MapWindow6
    - SharePoint Team-Mailer
    - NB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

  • SSRS 2008 Installation Guidelines/Best Practices?

    - by Brad Bowman
    I know I have seen recommendations for installing SSRS 2005 stating that you should separate SSRS from the DB engine that hosts the data sources for your reports – that you should not install them on the same server. Is there any documentation for SSRS 2008 that provides guidelines/best practices for installation? I am assuming that the same holds true for SSRS 2008 as it did for SSRS 2005, but I have not seen it specifically stated anywhere. We have a project that will utilize SSRS 2008 and there is a difference of opinion on where to install it: on its own dedicated server, or on the same SQL Server as the report data sources. If a link to any such documentation could be provided, it would be greatly appreciated; all I am finding, besides what is in BOL, refers to 2005. Thanks in advance!

    Read the article

  • How to export more than 1MB in XML format using sqlcmd and without an input file?

    - by jon
    Hello, in SQL Server 2008 I want to export the result of a stored procedure to a file using the sqlcmd utility. The end of my stored procedure is a SELECT statement with a "FOR XML PATH..." clause. I read in BOL that if I don't want my output truncated on reaching a 1MB file size, I have to use the :XML ON command, but it should be placed on its own line, before calling the stored procedure. Does any of you experts know if it is possible to do that without specifying an input file for sqlcmd? (I'm calling sqlcmd like this: exec master..xp_cmdshell 'sqlcmd -Q"exec storedProcedureName @param1=value1, @param2=value2" -o c:\exportResults.xml -h-1 -E', but "storedProcedureName" and its parameters can change, which would mean one input file per set of parameters passed to sqlcmd.) Also, it seems that I can't use bcp instead of sqlcmd because my stored procedure creates a temporary table and performs DML statements on it? Thanks a lot
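
    One possible workaround, as an untested sketch: generate the input script at call time from the same xp_cmdshell session, so :XML ON still reaches sqlcmd via -i but no static input file has to be maintained. The paths and the procedure name here are illustrative only:

        -- Build a tiny script with :XML ON on its own line, then hand it to sqlcmd with -i
        EXEC master..xp_cmdshell 'echo :XML ON > c:\temp\run_export.sql';
        EXEC master..xp_cmdshell 'echo EXEC storedProcedureName @param1=value1, @param2=value2 >> c:\temp\run_export.sql';
        EXEC master..xp_cmdshell 'sqlcmd -i c:\temp\run_export.sql -o c:\exportResults.xml -h-1 -E';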

    Read the article

  • mongoid with rails - Database should be a Mongo::DB, not NilClass

    - by Adam T
    Greetings, I am trying to get Mongoid to work with my Rails app and I am getting an error: "Mongoid::Errors::InvalidDatabase in 'Shipment bol should be unique' Database should be a Mongo::DB, not NilClass". I have created the mongoid.yml file in my config directory and have mongodb running as a daemon. The config file is like so:

        defaults: &defaults
          host: localhost

        development:
          <<: *defaults
          database: ship-it-development

        test:
          <<: *defaults
          database: ship-it-test

        production:
          <<: *defaults
          host: <%= ENV['MONGOID_HOST'] %>
          port: <%= ENV['MONGOID_PORT'] %>
          database: <%= ENV['MONGOID_DATABASE'] %>

    All of my specs fail with the above error. I am using rails 2.3.8. Anyone have ideas?

    Read the article

  • Interpreting type codes in sys.objects in SQL Server

    - by fatcat1111
    On SQL Server, the sys.objects table includes "Type" and "Type_Desc" attributes. For example:

        SELECT DISTINCT [Type], Type_Desc FROM Sys.Objects ORDER BY [Type]

    Returns:

        C   CHECK_CONSTRAINT
        D   DEFAULT_CONSTRAINT
        F   FOREIGN_KEY_CONSTRAINT
        FN  SQL_SCALAR_FUNCTION
        FS  CLR_SCALAR_FUNCTION
        IT  INTERNAL_TABLE
        P   SQL_STORED_PROCEDURE
        PK  PRIMARY_KEY_CONSTRAINT
        S   SYSTEM_TABLE
        SQ  SERVICE_QUEUE
        TR  SQL_TRIGGER
        U   USER_TABLE
        UQ  UNIQUE_CONSTRAINT
        V   VIEW

    Is there a comprehensive list of these types somewhere? There isn't a constraint on sys.objects that points me to a table of these, and sys.types contains data types. I've searched SQL BOL but haven't found it. Any help would be appreciated. EDIT: Some DBs use only a subset of these types. For example, if I have a database with no views, then when I query Sys.Objects as above, there are no "V" rows in the results. I am looking for a list of all possible types and descriptions used by SQL Server.
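
    Absent an official catalog table, one workable pattern is to keep the documented list as a derived table and join it to sys.objects. A sketch with a hand-maintained subset of the codes documented in the sys.objects BOL topic (extend the list from BOL as needed; requires SQL Server 2008+ for the VALUES constructor):

        SELECT o.name, o.[type], ISNULL(ref.description, o.type_desc) AS description
        FROM sys.objects AS o
        LEFT JOIN (VALUES
            ('AF', 'Aggregate function (CLR)'),
            ('C',  'CHECK constraint'),
            ('D',  'DEFAULT (constraint or stand-alone)'),
            ('F',  'FOREIGN KEY constraint'),
            ('FN', 'SQL scalar function'),
            ('FS', 'Assembly (CLR) scalar function'),
            ('FT', 'Assembly (CLR) table-valued function'),
            ('IF', 'SQL inline table-valued function'),
            ('IT', 'Internal table'),
            ('P',  'SQL stored procedure'),
            ('PC', 'Assembly (CLR) stored procedure'),
            ('PK', 'PRIMARY KEY constraint'),
            ('R',  'Rule (old-style, stand-alone)'),
            ('RF', 'Replication-filter procedure'),
            ('S',  'System base table'),
            ('SN', 'Synonym'),
            ('SQ', 'Service queue'),
            ('TA', 'Assembly (CLR) DML trigger'),
            ('TF', 'SQL table-valued function'),
            ('TR', 'SQL DML trigger'),
            ('TT', 'Table type'),
            ('U',  'User table'),
            ('UQ', 'UNIQUE constraint'),
            ('V',  'View'),
            ('X',  'Extended stored procedure')
        ) AS ref([type], description)
            ON o.[type] = ref.[type]
        ORDER BY o.[type];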

    Read the article

  • SQL SERVER – FT_IFTS_SCHEDULER_IDLE_WAIT – Full Text – Wait Type – Day 13 of 28

    - by pinaldave
    In the last few days during this series, I have received many questions about this wait type. It would be great if you read my original wait stats query in the first post, because I have filtered this type out in the WHERE clause. However, I still get questions about it being one of the most common wait types people encounter. The truth is, this is background task processing; it really does not matter and it should be filtered out. There are many new wait types related to Full Text Search that were introduced in SQL Server 2008. If you run the following query, you will be able to find them in the list. Currently there is not enough information available for all of them in BOL or anywhere else. But don't worry; I will write an in-depth article when I learn more about them.

        SELECT *
        FROM sys.dm_os_wait_stats
        WHERE wait_type LIKE 'FT_%'

    The result set will contain the following rows:

        FT_RESTART_CRAWL
        FT_METADATA_MUTEX
        FT_IFTSHC_MUTEX
        FT_IFTSISM_MUTEX
        FT_IFTS_RWLOCK
        FT_COMPROWSET_RWLOCK
        FT_MASTER_MERGE
        FT_IFTS_SCHEDULER_IDLE_WAIT

    We have understood so far that there is not much information available. But the problem is, when you have this wait type, what should you do? The answer is to filter it out for the moment (i.e., do not pay attention to it) and focus on other pressing issues in wait stats or performance tuning. Here are two of my informal suggestions, which are totally independent of wait stats: 1) Turn off the Full Text Search service on your system if you are not using it on your server. 2) Learn proper Full Text Search methodology; you can get Michael Coles' book: Pro Full-Text Search in SQL Server 2008. Now I invite you to share your suggestions or any input regarding Full Text-related best practices and wait stats issues. Please leave a comment. Note: The information presented here is from my experience and I make no claim that it is accurate. I suggest reading Books Online for further clarification. All the discussions of wait stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
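
    If you want to eyeball the remaining waits with the full-text background types excluded, here is a minimal sketch (the original post's query carries a much longer exclusion list than this):

        -- Exclude the FT_* housekeeping waits and rank the rest by total wait time
        SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT LIKE 'FT[_]%'
        ORDER BY wait_time_ms DESC;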

    Read the article

  • SQL SERVER – Dedicated Access Control for SQL Server Express Edition – An error occurred while obtaining the dedicated administrator connection (DAC) port.

    - by pinaldave
    Recently I faced a very interesting situation. For some reason we were not able to log in to the production server of one of my clients. The server was very busy, and we had to get into the system and bring the server back to a normal state. When all the attempts failed, I decided to log in using the Dedicated Administrator Connection (DAC). However, when I attempted to connect using DAC it threw the following error:

        C:\Users\pinald>sqlcmd -A -d master -S .\SQLEXPRESS
        Sqlcmd: Error: Microsoft SQL Server Native Client 11.0 : SQL Server Network Interfaces: An error occurred while obtaining the dedicated administrator connection (DAC) port. Make sure that SQL Browser is running, or check the error log for the port number [xFFFFFFFF].
        Sqlcmd: Error: Microsoft SQL Server Native Client 11.0 : Login timeout expired.
        Sqlcmd: Error: Microsoft SQL Server Native Client 11.0 : A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.

    I was a bit taken aback, as I knew that my commands were correct for logging in, and if DAC does not work there should be some serious reason for it. When I inquired further about the SQL Server version, I learned that it was the SQL Server Express edition that was deployed. To conserve resources, SQL Server Express does not listen on the DAC port. There is an additional step to be done if SQL Server Express is to be used with DAC: enabling a trace flag on SQL Server Express makes the DAC connection possible. Here is a quick method to enable DAC on SQL Server Express. Go to Start >> All Programs >> Microsoft SQL Server (your version) >> Configuration Tools >> SQL Server Configuration Manager. Click on SQL Server Services >> select your SQL Server Express instance >> right-click Properties >> select Startup Parameters. Once on the Startup Parameters tab, add the startup parameter -T7806. Click OK and RESTART the SQL Server Express edition. Now once again try to connect to SQL Server Express and it will work just fine. This is an absolutely documented method in BOL, and SQL Server Express needs to be restarted. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Error Messages, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Server Express
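
    Once the instance is back up, a quick sanity check that the DAC endpoint is available, sketched with the sys.endpoints catalog view (SQL Server 2005 and later):

        -- The DAC endpoint should be listed and in the STARTED state
        SELECT name, protocol_desc, type_desc, state_desc
        FROM sys.endpoints
        WHERE name = N'Dedicated Admin Connection';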

    Read the article

  • Oracle Enterprise Manager 11g has been released

    - by Lajos Sárecz
    Oracle Enterprise Manager 11gR1 has been released and is available for download. I hope many readers have already seen the announcement video; anyone who missed it can catch up here. The new version contains numerous innovations, briefly summarized as follows. First, the goal of the new version is to operate applications from a business perspective (Business Driven Application Management). The traditional component-based approach to systems management simply does not hold up in today's IT world. In contrast, Business Driven Application Management helps operations teams serve the business as efficiently as possible, so that business results can develop optimally. To achieve Business Driven Application Management, 11gR1 builds on the end-user experience monitoring already available in 10g. With this capability, IT understands much better how users use the applications and what they experience while doing so. Complementing this, the new version has more advanced business transaction management capabilities, making it easier and faster to fix transactions that cause problems for users. Second, the new version supports operations from the application down to the disk layer (Integrated Application-to-Disk Management). Since every layer can influence the user experience, as soon as a problem affecting users is detected, detailed diagnostics and analysis of the affected layer are needed. Oracle Enterprise Manager 11gR1 provides native support for Oracle Database 11gR2, Exadata V2 and Fusion Middleware 11g. JVM Diagnostics and the Composite Application Monitor and Modeler, which support the management of composite applications, are now an integral part of the new version. Moreover, the first stage of integration between Enterprise Manager Grid Control and Enterprise Manager Ops Center has been completed, so hardware-level events can now be monitored centrally from the Grid Control interface. Finally, the new version has Integrated Systems Management and Support capabilities. This means that Oracle Enterprise Manager 11g integrates the workflow of problem diagnosis and resolution by making the services of My Oracle Support available directly from Grid Control, such as downloading the necessary patches, opening a service request, and so on. Oracle support staff, thanks to the configuration management capabilities of Enterprise Manager, can immediately gather information about the environment, which speeds up problem resolution. This tight integration between Oracle Enterprise Manager and My Oracle Support can help our customers achieve the most efficient Oracle operations. In the coming weeks I will try to provide further details as I get to know the capabilities of the new version myself. The new version is currently available for 32- and 64-bit Linux. Further ports are expected in the coming weeks.

    Read the article

  • Increasing deadlocks with NoLock

    - by Dave Ballantyne
    One of my personal pet issues is the inappropriate use of the NOLOCK hint (and read uncommitted). Don't get me wrong, I have used it in exceptional circumstances, but as a general statement it is a bad thing. Mostly, when NOLOCK is used, the discussion is around a single statement: "it runs faster with nolock for XYZ reason". However, IMO, this is quite a short-sighted view. What about the transaction? What about other concurrent users? What is good for one statement in isolation is not necessarily good for the system as a whole. I have seen on a number of occasions deadlocks happen when tasks that would have (and should have) been blocked continue to execute, only for a deadlock to occur at a later data-writing (INSERT, UPDATE, DELETE) statement. Writers will block writers regardless of isolation level. By way of a (fairly contrived) example, let's generate some dummy tables and populate them with some data:

        drop table a
        go
        drop table b
        go
        Create Table a ( col1 integer )
        go
        insert into a values(1)
        insert into a values(2)
        go
        Create Table b ( col1 integer )
        go
        insert into b values(1)
        insert into b values(2)
        go

    Now make two connections. In connection one execute:

        set transaction isolation level read committed
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from a

    In connection two execute:

        set transaction isolation level read committed
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from b

    Right now the 'select from a' in connection two is being blocked by the 'delete from a' in connection one. This is, IMO, quite a healthy and natural thing to be happening; some see this as a 'slow down', a drop in performance. So, let's reach for our 'NOLOCK' magic pill. Cancel the blocked query and ROLLBACK both transactions, then in connection one execute:

        set transaction isolation level read uncommitted
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from b

    and then in connection two execute:

        set transaction isolation level read uncommitted
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from a

    We have now solved our performance problem: no more blocking. Let's finish the work required by the transaction. In connection one, execute:

        delete from a

    Oh, 'performance problem' again – it's now being blocked. Still, let's complete the work in connection two...

        delete from b

    DEADLOCK!! It is important to be clear about the role of the select statements. They do not participate within the deadlock, but they are preventing code from executing that otherwise would have. Additionally, without the select readers to block, a deadlock would occur on the deletes with READ COMMITTED. Naturally, other isolation levels will exhibit different behaviour as to where and when they will and won't block, and I would encourage you to read BOL and satisfy yourself that you really do NEED to NOLOCK.
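
    If you want to confirm the deadlock from the example above, one option on SQL Server 2008 and later is to read the xml_deadlock_report events captured by the built-in system_health Extended Events session. A sketch (the ring buffer only keeps recent history, so treat it as best-effort):

        SELECT xed.event_data.value('@timestamp', 'datetime') AS event_time,
               xed.event_data.query('.') AS deadlock_graph
        FROM (
            SELECT CAST(st.target_data AS XML) AS target_data
            FROM sys.dm_xe_sessions s
                INNER JOIN sys.dm_xe_session_targets st
                    ON s.address = st.event_session_address
            WHERE s.name = N'system_health' AND st.target_name = N'ring_buffer'
        ) AS src
        CROSS APPLY src.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS xed(event_data);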

    Read the article

  • SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database

    - by pinaldave
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET waits. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merit or not. Before we continue to the resolution, let us understand what the CXPACKET wait is. The official definition suggests that a CXPACKET wait occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if a conflict concerning this wait type develops into a problem. (from BOL) In simpler words, when a parallel plan is created for a SQL query, there are multiple threads for a single query, each dealing with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait: threads which finish first have to wait for the slower thread to finish, and the wait registered by a completed thread is the CXPACKET wait. Note that the CXPACKET wait is accrued by the completed threads and not the ones which are unfinished. "Note that not all CXPACKET waits are bad. You might experience a case when it totally makes sense. There might also be cases when it is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query." Now let us see what the best practices to reduce CXPACKET waits are. The suggestions you will find if you search online would mostly tell you to set 'maximum degree of parallelism' to 1. I do agree with this suggestion; however, I do not think it is the final resolution. As soon as you force every query to run on a single CPU, you will get very bad performance from the queries which were actually performing fine when using parallelism. The best suggestion is to set 'maximum degree of parallelism' to a lower number or 1 (be very careful with this – it can create more problems), but tune the queries which can benefit from multiple CPUs. You can use the query hint OPTION (MAXDOP 0) to let a query use parallelism. Here are two quick scripts which help to resolve these issues. Change MAXDOP at the server level:

        EXEC sys.sp_configure N'max degree of parallelism', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Run a query with all the CPUs (using parallelism):

        USE AdventureWorks
        GO
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY ProductID
        OPTION (MAXDOP 0)
        GO

    Below is the blog post which will help you to find all the parallel queries on your server: SQL SERVER – Find Queries using Parallelism from Cached Plan. Please note that running queries on a single CPU may worsen your performance and is not recommended at all; in fact, this can be very bad advice. I strongly suggest that you identify the queries which are offending and tune them instead of following any other suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
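
    For reference, the linked post works from the plan cache to spot parallel plans. A sketch of that style of query, inspecting cached plan XML for operators marked Parallel="1" (it can be expensive on a busy server, so run it with care):

        WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
        SELECT st.text, qp.query_plan
        FROM sys.dm_exec_cached_plans cp
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
        WHERE qp.query_plan.exist('//RelOp[@Parallel="1"]') = 1;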

    Read the article

  • SQL SERVER – Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5

    - by pinaldave
    In August 2011 we ran a contest where every day, for an entire month, we gave away one book. The contest was an extreme success: lots of people participated and lots of books were given away. I have received lots of questions about whether we are doing something similar this month. Absolutely – instead of running a month-long contest, we are doing something more interesting. We are giving away USD 198 worth of gifts every day this week: the Joes 2 Pros 5-volume (BOOK) SQL 2008 Development Certification Training Kit, one copy in India and one in the USA, two giveaways in total (worth USD 198). All the gifts are sponsored by Koenig Training Solution and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza. How to Win: Read the question. Read the hints. Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open to USA and India residents only). Two winners will be randomly selected and announced on August 20th. Question of the Day: Which of the following keywords will force the query to use indexes created on views? a) ENCRYPTION b) SCHEMABINDING c) NOEXPAND d) CHECK OPTION. Query Hints: BIG HINT POST. Usually, the assumption is that a query on the table will use the index created on the table and a query on the view will use the index created on the view. However, that is a misconception; it does not happen this way. In fact, if you look at the image, you will find that both of them (table and view) use the index created on the table. The index created on the view is not used. The reason, as listed in BOL: the cost of using the indexed view may exceed the cost of getting the data from the base tables, or the query is so simple that a query against the base tables is fast and easy to find. This often happens when the indexed view is defined on small tables. You can use the NOEXPAND hint if you want to force the query processor to use the indexed view. This may require you to rewrite your query if you don't initially reference the view explicitly. You can get the actual cost of the query with NOEXPAND and compare it to the actual cost of the query plan that doesn't reference the view. If they are close, this may give you the confidence that the decision of whether or not to use the indexed view doesn't matter. (A short sketch of the hint follows this post.) Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 4: SQL Joes 2 Pros Development Series – Structured Error Handling; SQL Joes 2 Pros Development Series – SQL Server Error Messages; SQL Joes 2 Pros Development Series – Table-Valued Functions; SQL Joes 2 Pros Development Series – Table-Valued Store Procedure Parameters; SQL Joes 2 Pros Development Series – Easy Introduction to CHECK Options; SQL Joes 2 Pros Development Series – Introduction to Views; SQL Joes 2 Pros Development Series – All about SQL Constraints. Next Step: Answer the quiz in the contact form in the following format: Question Answer, Name of the country (the contest is open for USA and India). Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift. There is no country restriction for this bonus contest. Do mention why you liked that particular blog post and I will announce the winner along with the main contest. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
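
    For clarity, this is what the NOEXPAND table hint looks like in use – a minimal sketch, with dbo.MyIndexedView standing in for any indexed view (the name is hypothetical):

        -- Without the hint the optimizer may expand the view and read the base tables instead
        SELECT *
        FROM dbo.MyIndexedView WITH (NOEXPAND);  -- forces use of the index on the view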

    Read the article

  • Can I select 0 columns in SQL Server?

    - by Woody Zenfell III
    I am hoping this question fares a little better than the similar Create a table without columns. Yes, I am asking about something that will strike most as pointlessly academic. It is easy to produce a SELECT result with 0 rows (but with columns), e.g. SELECT a = 1 WHERE 1 = 0. Is it possible to produce a SELECT result with 0 columns (but with rows)? e.g. something like SELECT NO COLUMNS FROM Foo. (This is not valid T-SQL.) I came across this because I wanted to insert several rows without specifying any column data for any of them. e.g. (SQL Server 2005):

        CREATE TABLE Bar (id INT NOT NULL IDENTITY PRIMARY KEY)

        INSERT INTO Bar
        SELECT NO COLUMNS FROM Foo
        -- Invalid column name 'NO'.
        -- An explicit value for the identity column in table 'Bar' can only be specified when a column list is used and IDENTITY_INSERT is ON.

    One can insert a single row without specifying any column data, e.g. INSERT INTO Foo DEFAULT VALUES. One can query for a count of rows (without retrieving actual column data from the table), e.g. SELECT COUNT(*) FROM Foo. (But that result set, of course, has a column.) I tried things like:

        INSERT INTO Bar ()
        SELECT * FROM Foo
        -- Parameters supplied for object 'Bar' which is not a function.
        -- If the parameters are intended as a table hint, a WITH keyword is required.

    and

        INSERT INTO Bar DEFAULT VALUES
        SELECT * FROM Foo

    -- which is a standalone INSERT statement followed by a standalone SELECT statement. I can do what I need to do a different way, but the apparent lack of consistency in support for degenerate cases surprises me. I read through the relevant sections of BOL and didn't see anything. I was surprised to come up with nothing via Google either.
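
    A sketch of the "different way" alluded to above, assuming the Foo/Bar definitions from the question: one DEFAULT VALUES insert per row of Foo, written with SQL Server 2005-compatible syntax (untested):

        DECLARE @i INT;
        SELECT @i = COUNT(*) FROM Foo;
        WHILE @i > 0
        BEGIN
            INSERT INTO Bar DEFAULT VALUES;  -- the identity column fills itself in
            SET @i = @i - 1;
        END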

    Read the article

  • Why is the function Ellipse(...) needed twice here to draw an ellipse?

    - by John Son
    MFC: I read this code, which is to draw an ellipse (not solid interior), but I cannot understand why the function "pDC->Ellipse(...)" is needed twice here:

        CDC *pDC=GetDC();
        CPen pen;
        CBrush brush;
        getpen(pen,pDC,col,bol);
        if(do_what>=DRAW_LINE&&do_what<=DRAW_RRECT){
            p->p[0]=start;
            p->p[1]=end;
            if(sol==1){
                getbrush(brush,pDC,col);
            }
            if(do_what==DRAW_LINE){
                pDC->MoveTo(start);
                pDC->LineTo(end);
            }
            else if(do_what==DRAW_ELLIPSE||do_what==DRAW_CIRCLE){
                pDC->SetROP2(R2_NOT);
                assist=start;
                if(do_what==DRAW_CIRCLE){
                    assist.y=end.y-end.x+start.x;
                }
                pDC->Ellipse(start.x,assist.y,end.x,end.y);
                pDC->SetROP2(R2_COPYPEN);
                if(sol==0){
                    pDC->SelectStockObject(NULL_BRUSH);
                }
                if(do_what==DRAW_CIRCLE){
                    assist.y=point.y-point.x+start.x;
                }
                pDC->Ellipse(start.x,assist.y,point.x,point.y);
                end=point;
            }
        }

    If I remove the first one, the ellipse will be solid black inside. If I remove the second one, the ellipse is never drawn and disappears when the left mouse button is released. [Screenshots in the original: the dialog, the ellipse while the mouse moves, and the result after the mouse button is released.] Besides, what color does CBrush have if I use "CBrush brush; pDC->Ellipse(start.x,assist.y,end.x,end.y);"?

    Read the article

  • PHP: how to get an array variable value?

    - by NovaYear
    $lang['profil_basic_medeni'] = array( 1 => 'Bekâr', 2 => 'Evli', 3 => 'Nisanli', 4 => 'Iliskide', 5 => 'Ayrilmis', 6 => 'Bosanmis' ); $lang['profil_basic_sac'] = array( 1 => 'Normal', 2 => 'Kisa', 3 => 'Orta', 4 => 'Uzun', 5 => 'Fönlü', 6 => 'Saçsiz (Dazlak)', 7 => 'Karisik/Daginik', 8 => 'Her Zaman Bol Jöleli :)' ); function sGetVAL($item,$valno) { $sonuc = $lang[$item][$valno]; return $sonuc; } $tempVAL1 = sGetVAL('profil_basic_medeni','3'); // return null //or $tempVAL2 = sGetVAL('profil_basic_sac','7'); // return null $tempVAL1 and $tempVAL2 always return null. Why? How do I fix the function sGetVAL?

    Read the article

  • SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2

    - by pinaldave
    Social media has created an always-connected world for us. Recently I enrolled myself as a student to learn new technologies. I had decided to focus on learning and not to stay connected to the internet while in the learning sessions. On the second day of the event, after the learning was over, I noticed lots of notifications from my friend on my various social media handles. He had contacted me on Twitter, Facebook, Google+, LinkedIn, and YouTube, as well as by SMS and WhatsApp on the phone, Skype messages, and, not to forget, a few emails. I called him up right away. The problem was very unique; let us hear it in his own words. “Pinal, we are in big trouble and we are not able to figure out what is going on. Our product details table is continuously losing rows. Lots of rows have disappeared since morning and we are unable to find out why the rows are getting deleted. We have made sure that no DELETE command is executed on the table. As a matter of fact, we have removed the code referencing the table from every single place. We have done so many crazy things out of desperation but no luck. The rows are continuously deleted in a random pattern. Do you think we have a problem with intrusion or a virus?” After describing the problem he pasted a few rants about why I was not available during the day. I think it would not be smart to post those exact words here (for many reasons). Well, my immediate reaction was to get online with him. His problem was unique to him, and his team had been all out to fix the issue since morning. As he said, he had done quite a lot out of desperation. I started asking questions about auditing, policy management, and profiling the data. Very soon I realized that this problem was not as advanced as it looked. There was no intrusion, SQL Injection, or virus issue. Well, long story short: it was a very simple issue of a foreign key created with ON UPDATE CASCADE and ON DELETE CASCADE. CASCADE allows deletions or updates of key values to cascade through the tables defined to have foreign key relationships that can be traced back to the table on which the modification is performed. ON DELETE CASCADE specifies that if an attempt is made to delete a row with a key referenced by foreign keys in existing rows in other tables, all rows containing those foreign keys are also deleted. ON UPDATE CASCADE specifies that if an attempt is made to update a key value in a row, where the key value is referenced by foreign keys in existing rows in other tables, all of the foreign key values are also updated to the new value specified for the key. (Reference: BOL) In simple words: when ON DELETE CASCADE is specified and data from Table A is deleted, any data referencing it in another table via a foreign key is deleted as well. In my friend's case, they had two tables, Products and ProductDetails, and they had created foreign key referential integrity on the product id between the tables. As the fall season arrived they were updating their catalogue, and while updating it they were deleting products which were no longer available. As the changes cascaded, the corresponding rows were also deleted from the other table. This is CORRECT. As a matter of fact, there is no error or anything; SQL Server is behaving exactly how it should behave. The problem was in the understanding and an inappropriate implementation of the business logic.
What they needed were a Product Master table, a Current Product Catalogue table, and a Product Order Details History table. However, they were using only two tables, and without a proper understanding the relation between them was built using foreign keys. If there were to be only two tables, they should have used a soft delete, which does not actually delete the record but just hides it in the original product table. This workaround could have saved them from the cascading delete issue. I will write a detailed post on the design implications in the future, as I cannot cover every issue related to design in the three lines above, and it is also outside the scope of this blog post. More about designing in future blog posts. Once they learned their mistake, they were relieved that there was no intrusion; but trust me, sometimes we are our own enemy, and this is a great example of it. In tomorrow's blog post we will go over their code and workarounds. Feel free to share your opinions, experiences and comments. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
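To see the cascading delete in action, here is a minimal sketch using simplified tables (the names echo the Products/ProductDetails scenario above, but the schema is made up for illustration):

    CREATE TABLE Products (ProductID INT PRIMARY KEY);
    CREATE TABLE ProductDetails (
        DetailID INT IDENTITY PRIMARY KEY,
        ProductID INT NOT NULL
            REFERENCES Products (ProductID)
            ON UPDATE CASCADE
            ON DELETE CASCADE
    );
    INSERT INTO Products VALUES (1), (2);
    INSERT INTO ProductDetails (ProductID) VALUES (1), (1), (2);
    -- Removing a discontinued product silently removes its detail rows too
    DELETE FROM Products WHERE ProductID = 1;
    SELECT * FROM ProductDetails; -- only the ProductID = 2 row remains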

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server: moving from old to new hardware, or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team, so I'm not wholly sure of the details, but we are in a situation where there are currently 2 Exchange email servers – the old one and the new one. Users' mailboxes are being transferred in a planned process, but as we approach the old server being turned off we also have to make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages etc. My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help, in my opinion. It's a nightmare of back-and-forth settings changes and it stinks. I wasn't looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I could fix it another way. I took a guess that we were going to be working in MSDB, and Books OnLine was remarkably helpful; amongst a lot of information it told me about a couple of procedures that can be used to interrogate DBMail settings. USE [msdb] -- It's where all the good stuff is kept GO EXEC dbo.sysmail_help_profile_sp; EXEC dbo.sysmail_help_account_sp; Both of these procedures take optional parameters with the same names – ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account. Otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see (click for a bigger image), the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. Now, it appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the results the stored procedure provides if you run the code below. SELECT [account_id] , [name] , [description] , [email_address] , [display_name] , [replyto_address] , [last_mod_datetime] , [last_mod_user] FROM dbo.sysmail_account AS sa; SELECT [account_id] , [servertype] , [servername] , [port] , [username] , [credential_id] , [use_default_credentials] , [enable_ssl] , [flags] , [last_mod_datetime] , [last_mod_user] , [timeout] FROM dbo.sysmail_server AS sms Now, we have no real idea how these tables are linked, or whether making an update directly to one or other of them is going to do what we want or will entirely cripple our ability to send email from SQL Server, so we won't touch those tables with any UPDATE TSQL. So, back to Books OnLine then, and we find sysmail_update_account_sp. It's exactly what we need. The examples in BOL take the form (as below) of having every parameter explicitly defined.
Not wanting to totally obliterate the existing values by failing to pass values in all of the parameters, I set about writing some code to gather the existing data from the tables, rewrite the SMTP server name, and then execute the resulting TSQL. IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL DROP TABLE #sysmailprofiles GO CREATE TABLE #sysmailprofiles ( account_id INT , [name] VARCHAR(50) , [description] VARCHAR(500) , email_address VARCHAR(500) , display_name VARCHAR(500) , replyto_address VARCHAR(500) , servertype VARCHAR(10) , servername VARCHAR(100) , port INT , username VARCHAR(100) , use_default_credentials VARCHAR(1) , ENABLE_ssl VARCHAR(1) ) INSERT [#sysmailprofiles] ( [account_id] , [name] , [description] , [email_address] , [display_name] , [replyto_address] , [servertype] , [servername] , [port] , [username] , [use_default_credentials] , [ENABLE_ssl] ) EXEC [dbo].[sysmail_help_account_sp] DECLARE @TSQL NVARCHAR(1000) SELECT TOP 1 @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = ' + CAST([s].[account_id] AS VARCHAR(20)) + ', @account_name = ''' + [s].[name] + '''' + ', @email_address = N''' + [s].[email_address] + '''' + ', @display_name = N''' + [s].[display_name] + '''' + ', @replyto_address = N''' + s.replyto_address + '''' + ', @description = N''' + [s].[description] + '''' + ', @mailserver_name = ''NEWSMTP.contoso.com''' + ', @mailserver_type = ' + [s].[servertype] + ', @port = ' + CAST([s].[port] AS VARCHAR(20)) + ', @username = ' + COALESCE([s].[username], '''''') + ', @use_default_credentials =' + CAST(s.[use_default_credentials] AS VARCHAR(1)) + ', @enable_ssl =' + [s].[ENABLE_ssl] FROM [#sysmailprofiles] AS s WHERE [s].[servername] = 'SMTP.Contoso.com' SELECT @TSQL EXEC [sys].[sp_executesql] @TSQL This worked well for me, and testing the email function with EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later, in writing this blog, that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books OnLine might intimate, you can do this, and only the values for the parameters specified get changed. If a parameter is not specified in the execution of the procedure then its value remains unchanged. This renders most of the above script unnecessary, as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update. EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com' This wasn't meant to be the main point of this post; it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL. Instead, here we are, (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server, but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on examples to fully understand the code they are working with. I think the author(s) of this part of Books OnLine missed an opportunity to include a third example with fewer than all parameters specified, to give a lead that this method exists.
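For the final verification step, a test mail sketch along these lines works (the profile name and recipient address here are made up; substitute one of your own DBMail profiles):

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'DBA Notifications',   -- hypothetical profile
        @recipients = 'dba@contoso.com',       -- hypothetical recipient
        @subject = 'DBMail SMTP test',
        @body = 'Sent via the new Exchange server';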

    Read the article

  • T-SQL Tuesday #34: Help! I Need Somebody!

    - by Most Valuable Yak (Rob Volk)
    Welcome everyone to T-SQL Tuesday Episode 34!  When last we tuned in, Mike Fal (b|t) hosted Trick Shots.  These highlighted techniques or tricks that you figured out on your own which helped you understand SQL Server better. This month, I'm asking you to look back this past week, year, century, or hour...to a time when you COULDN'T figure it out.  When you were stuck on a SQL Server problem and you had to seek help. In the beginning... SQL Server has changed a lot since I started with it.  <Cranky Old Guy> Back in my day, Books Online was neither.  There were no blogs. Google was the third-place search site. There were perhaps two or three community forums where you could ask questions.  (Besides the Microsoft newsgroups...which you had to access with Usenet.  And endure the wrath of...Celko.)  Your "training" was reading a book, made from real dead trees, that you bought from your choice of brick-and-mortar bookstore. And except for your local user groups, there were no conferences, seminars, SQL Saturdays, or any online video hookups where you could interact with a person. You'd have to call Microsoft Support...on the phone...a LANDLINE phone.  And none of this "SQL Family" business!</Cranky Old Guy> Even now, with all these excellent resources available, it's still daunting for a beginner to seek help for SQL Server.  The product is roughly 1247.4523 times larger than it was 15 years ago, and it's simply impossible to know everything about it.*  So whether you are a beginner, or a seasoned pro of over a decade's experience, what do you do when you need help on SQL Server? That's so meta... In the spirit of offering help, here are some suggestions for your topic: Tell us about a person or SQL Server community who have been helpful to you.  It can be about a technical problem, or not, e.g. someone who volunteered for your local SQL Saturday.  Sing their praises!  Let the world know who they are! Do you have any tricks for using Books Online?  Do you use the locally installed product, or are you completely online with BOL/MSDN/Technet, and why? If you've been using SQL Server for over 10 years, how has your help-seeking changed? Are you using Twitter, StackOverflow, MSDN Forums, or another resource that didn't exist when you started? What made you switch? Do you spend more time helping others than seeking help? What motivates you to help, and how do you contribute? Structure your post along the lyrics to The Beatles song Help! Audio or video renditions are particularly welcome! Lyrics must include reference to SQL Server terminology or community, and performances must be in your voice or include you playing an instrument. These are just suggestions, you are free to write whatever you like.  Bonus points if you can incorporate ALL of these into a single post.  (Or you can do multiple posts, we're flexible like that.)  Help us help others by showing how others helped you! Legalese, Your Rights, Yada yada... If you would like to participate in T-SQL Tuesday please be sure to follow the rules below: Your blog post must be published between Tuesday, September 11, 2012 00:00:00 GMT and Wednesday, September 12, 2012 00:00:00 GMT. Include the T-SQL Tuesday logo (above) and hyperlink it back to this post. If you don’t see your post in trackbacks, add the link to the comments below. If you are on Twitter please tweet your blog using the #TSQL2sDay hashtag.  I can be contacted there as @sql_r, in case you have questions or problems with comments/trackback.  
I'll have a follow-up post listing all the contributions as soon as I can. Thank you all for participating, and special thanks to Adam Machanic (b|t) for all his help and for continuing this series!

    Read the article

  • form inside tabview doesn't work

    - by user3536737
    I am working with JSF and PrimeFaces, and here is what I've tried. I want to create a tabview that gets its data from an ArrayList in my bean; I get, for example, 4 tabs, and inside each one I've created a hidden panel where I have a form with 2 input texts to update information. I display the panel when I click on the second button, Update; after that my panel is not hidden anymore, and I set the new values and click on the second button to update the information. The problem is that the updating and the execution work only for the first tab: when I try to update with the new information it works for the first tab, but for the other tabs it doesn't. Here is the code <p:tab title="#{rr.nom_ressource}"> <h:panelGrid> <h:graphicImage value="Ressources/images/emp.jpg" style="vertical-align:middle" /> <span style="font-size:15px; width:170px; display:inline-block;"> Nom : #{rr.nom_ressource} Type: #{rr.type_ressource} Specification: #{rr.experience} </span> <h:commandButton image="Ressources/images/delete.jpg" actionListener="#{SelectBean.act}" update=":form" style="vertical-align:middle" > Update </h:commandButton> <h:commandButton update=":outPanel" actionListener="#{SelectBean.mod1()}" image="Ressources/images/update.png" style="vertical-align:middle" > Modifier </h:commandButton> <h:form id="form111"> <p:growl id="growl" showDetail="true" sticky="true" /> <p:panel rendered ="#{SelectBean.bol}" closable="true" toggleable="true" id="outPanel" styleClass="outPanel" widgetVar="outpanel"> <h:outputLabel value="Nom " /> <h:inputText value="#{SelectBean.nom}" /> <br/> <h:outputLabel value="Experience " /> <h:inputText value="#{SelectBean.exp}" /> <br/> <h:commandButton value="Update" action="#{SelectBean.done}"/> </p:panel> </h:form> </h:panelGrid> </p:tab> The code in my managed bean is correct, I think; the problem is in the markup above.

    Read the article

  • Try a sample: Using the counter predicate for event sampling

    - by extended_events
    Extended Events offers a rich filtering mechanism, called predicates, that allows you to reduce the number of events you collect by specifying criteria that will be applied during event collection. (You can find more information about predicates in Using SQL Server 2008 Extended Events (by Jonathan Kehayias).) By evaluating predicates early in the event firing sequence we can reduce the performance impact of collecting events by stopping event collection when the criteria are not met. You can specify predicates on both event fields and on a special object called a predicate source. Predicate sources are similar to actions in that they typically relate to some type of global information available from the server. You will find that many of the actions available in Extended Events have equivalent predicate sources, but actions and predicate sources are not the same thing. Applying predicates, whether on a field or a predicate source, is very similar to what you are used to in T-SQL in terms of how they work; you pick some field/source and compare it to a value, for example, session_id = 52. There is one predicate source that merits special attention though, not just for its special use, but for how the order of predicate evaluation impacts the behavior you see. I'm referring to the counter predicate source. The counter predicate source gives you a way to sample a subset of events that otherwise meet the criteria of the predicate; for example you could collect every other event, or only every tenth event.

Simple Counting
The counter predicate source works by creating an in-memory counter that increments every time the predicate statement is evaluated. Here is a simple example with my favorite event, sql_statement_completed, that only collects the second statement that is run. (OK, that's not much of a sample, but this is for demonstration purposes.) Here is the session definition:

    CREATE EVENT SESSION counter_test ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text)
        WHERE package0.counter = 2)
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)

You can find general information about the session DDL syntax in BOL and from Pedro's post Introduction to Extended Events. The important part here is the WHERE statement, which specifies that I only want the event where package0.counter = 2; in other words, only the second instance of the event. Notice that I need to provide the package name along with the predicate source. You don't need to provide the package name if you're using event fields, only for predicate sources. Let's say I run the following test queries:

    -- Run three statements to test the session
    SELECT 'This is the first statement'
    GO
    SELECT 'This is the second statement'
    GO
    SELECT 'This is the third statement';
    GO

Once you return the event data from the ring buffer and parse the XML (see my earlier post on reading event data) you should see something like this:

    event_name                 sql_text
    sql_statement_completed    SELECT 'This is the second statement'

You can see that only the second statement from the test was actually collected. (Feel free to try this yourself. Check out what happens if you remove the WHERE statement from your session. Go ahead, I'll wait.)

Percentage Sampling
OK, so that wasn't particularly interesting, but you can probably see that this could be; for example, let's say I need a 25% sample of the statements executed on my server for some type of QA analysis. That might be more interesting than just the second statement.
All comparisons of predicates are handled using an object called a predicate comparator; the simple comparisons such as equals, greater than, etc. are mapped to the common mathematical symbols you know and love (e.g. = and >), but to do the less common comparisons you will need to use the predicate comparators directly. You would probably look to the MOD operation to do this type of sampling; we would too, but we don't call it MOD, we call it divides_by_uint64. This comparator evaluates whether one number is divisible by another with no remainder. The general syntax for using a predicate comparator is pred_comp(field, value); field is always first and value is always second. So let's take a look at how the session changes to answer our new question of 25% sampling:

    CREATE EVENT SESSION counter_test_25 ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text)
        WHERE package0.divides_by_uint64(package0.counter,4))
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
    GO

Here I've replaced the simple equivalency check with the divides_by_uint64 comparator to check whether the counter is evenly divisible by 4, which gives us back every fourth record. I'll leave it as an exercise for the reader to test this session.

Why order matters
I indicated at the start of this post that order matters when it comes to the counter predicate, and it does. Like most other predicate systems, Extended Events evaluates the predicate statement from left to right; as soon as the predicate statement is proven false we abandon evaluation of the remainder of the statement. The counter predicate source is only incremented when it is evaluated, so whether or not the counter is incremented will depend on where it is in the predicate statement and whether a previous criterion made the predicate false or not. Here is a generic example:

    Pred1: (WHERE statement_1 AND package0.counter = 2)
    Pred2: (WHERE package0.counter = 2 AND statement_1)

Let's say I cause a number of events as follows and examine what happens to the counter predicate source.

    Iteration    Statement          Pred1 Counter    Pred2 Counter
    A            Not statement_1    0                1
    B            statement_1        1                2
    C            Not statement_1    1                3
    D            statement_1        2                4

As you can see, in the case of Pred1, statement_1 is evaluated first; when it fails (A & C), predicate evaluation is stopped and the counter is not incremented. With Pred2 the counter is evaluated first, so it is incremented on every iteration of the event and the remaining parts of the predicate are then evaluated. In this example, Pred1 would return an event for D while Pred2 would return an event for B. But wait, there is an interesting side effect here; consider Pred2 if I had run my statements in the following order:

    Not statement_1
    Not statement_1
    statement_1
    statement_1

In this case I would never get an event back from the system, because at the point where counter = 2 the rest of the predicate evaluates as false, so the event is not returned. If you're using the counter predicate for sampling and you're not getting the expected events, or any events, check the order of the predicate criteria. As a general rule I'd suggest that the counter criterion should be the last element of your predicate statement, since that will assure that your sampling rate applies to the set of event records defined by the rest of your predicate. Aside: I'm interested in hearing about uses for putting the counter predicate criteria earlier in the predicate statement. If you have one, post it in a comment to share with the class.
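If you want to inspect the results of either session yourself, a simple query sketch like the following (using the Extended Events DMVs; the session name is the one defined above) returns the raw event XML from the ring buffer target, which you can then parse as described in my earlier post on reading event data:

    SELECT CAST(t.target_data AS XML) AS event_data
    FROM sys.dm_xe_session_targets AS t
    JOIN sys.dm_xe_sessions AS s
        ON s.address = t.event_session_address
    WHERE s.name = 'counter_test'
      AND t.target_name = 'ring_buffer';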
- Mike

    Read the article

< Previous Page | 1 2 3  | Next Page >