Search Results

Search found 293 results on 12 pages for 'ddl'.

Page 11/12

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world actually behaves, and that is why we have this series, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a DELETED database. You may think I am exaggerating, but in reality these kinds of problems are what keep a DBA’s life interesting, and in this blog post we have an amazing story from Brian Kelley on that very subject. In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words. Sometimes, one of the hardest questions to answer is, “What changed?” A similar question is, “Did anything change other than what we expected to change?” The First Place to Check – Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DROP), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off. But the Database Has Been Deleted! Pinal cited another issue, and that’s the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it’s a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: Copy the files to a working directory. Otherwise, you may occasionally receive a file-in-use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as for any other database. Testing with a Deleted Database: Here’s a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can’t run a standard Schema Changes History report. CREATE DATABASE DeleteMe; GO USE DeleteMe; GO CREATE SCHEMA Test AUTHORIZATION dbo; GO CREATE TABLE Test.Foo (FooID INT); GO USE MASTER; GO DROP DATABASE DeleteMe; GO This sets up the perfect situation where we can’t retrieve the information using the Schema Changes History report but where it’s still available. Finding the Information: I’ve sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise, I’m just looking at the trace files using SQL Profiler. As you can see, the information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when.
You can even determine who dropped the database (loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL
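    For reference, here is a minimal sketch (not part of Brian’s article) of reading the default trace files directly with fn_trace_gettable instead of SQL Profiler. The trace path is looked up from sys.traces, the event and column names are standard trace metadata, and the DeleteMe filter simply matches the test script above – adjust or remove it for your own case:

    DECLARE @path NVARCHAR(260);
    -- Locate the active default trace file for this instance
    SELECT @path = [path] FROM sys.traces WHERE is_default = 1;

    SELECT te.name AS event_name,
           t.StartTime,
           t.DatabaseName,
           t.ObjectName,
           t.LoginName
    FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
    JOIN sys.trace_events AS te
        ON t.EventClass = te.trace_event_id
    WHERE te.name IN (N'Object:Created', N'Object:Altered', N'Object:Deleted')
        AND t.DatabaseName = N'DeleteMe'
    ORDER BY t.StartTime;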

    Read the article

  • Oracle Database security: passwords, access control, encryption, and auditing

    - by Aya Sensui
    This article covers the basics of hardening an Oracle Database: account and password management, access control, data protection, and auditing. Password policy is enforced through profile parameters:
     - FAILED_LOGIN_ATTEMPTS    failed logins allowed before the account is locked
     - PASSWORD_LIFE_TIME       lifetime of a password, in days
     - PASSWORD_REUSE_TIME      days before a previous password may be reused
     - PASSWORD_REUSE_MAX       number of password changes required before reuse
     - PASSWORD_LOCK_TIME       days an account stays locked after repeated failed logins
     - PASSWORD_GRACE_TIME      grace period after a password expires
     - PASSWORD_VERIFY_FUNCTION password complexity verification function (*1)
    (*1) A sample verification function is provided as $ORACLE_HOME/rdbms/admin/utlpwdmg.sql and can be customized; it checks items such as minimum length and differences from the user name and from the previous password. For access control, the article recommends restricting which tools (such as SQL*Plus) and which clients may connect and, on Enterprise Edition, using Virtual Private Database (VPD) to limit access at the row level so that each user sees only the data relevant to them. For data protection, it covers encrypting data both on the network and at rest: Oracle Advanced Security provides this for Oracle Database, while the DBMS_OBFUSCATION_TOOLKIT and DBMS_CRYPTO PL/SQL packages allow encryption and decryption from application code. Finally, the article outlines the four auditing mechanisms available to the DBA (some require Enterprise Edition): mandatory auditing (instance startup and shutdown and SYSOPER/SYSDBA connections), administrator auditing (enabled with AUDIT_SYS_OPERATIONS = TRUE, with audit records written to operating system files), standard auditing (configured with the AUDIT_TRAIL initialization parameter, including DDL statement auditing), and fine-grained auditing, an Enterprise Edition feature. Oracle Database ships with these controls built in; the point of the article is that a DBA should configure them deliberately rather than rely on the defaults.
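    As a companion to the summary above, here is a minimal sketch (not part of the original article) of applying those password parameters through a profile; the profile name, the user name, and the limit values are illustrative, and the verify function name depends on the Oracle version and on what utlpwdmg.sql creates in your environment:

    -- Create a password policy profile using the parameters listed above
    CREATE PROFILE secure_users LIMIT
        FAILED_LOGIN_ATTEMPTS    10
        PASSWORD_LIFE_TIME       90
        PASSWORD_REUSE_TIME      365
        PASSWORD_REUSE_MAX       10
        PASSWORD_LOCK_TIME       1
        PASSWORD_GRACE_TIME      7
        PASSWORD_VERIFY_FUNCTION verify_function_11g;  -- created by utlpwdmg.sql

    -- Assign the profile to an existing account
    ALTER USER app_user PROFILE secure_users;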

    Read the article

  • JPA exception: Object: ... is not a known entity type.

    - by Toto
    I'm new to JPA and I'm having problems with the autogeneration of primary key values. I have the following entity: package jpatest.entities; import java.io.Serializable; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; @Entity public class MyEntity implements Serializable { private static final long serialVersionUID = 1L; @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; public Long getId() { return id; } public void setId(Long id) { this.id = id; } private String someProperty; public String getSomeProperty() { return someProperty; } public void setSomeProperty(String someProperty) { this.someProperty = someProperty; } public MyEntity() { } public MyEntity(String someProperty) { this.someProperty = someProperty; } @Override public String toString() { return "jpatest.entities.MyEntity[id=" + id + "]"; } } and the following main method in other class: public static void main(String[] args) { EntityManagerFactory emf = Persistence.createEntityManagerFactory("JPATestPU"); EntityManager em = emf.createEntityManager(); em.getTransaction().begin(); MyEntity e = new MyEntity("some value"); em.persist(e); /* (exception thrown here) */ em.getTransaction().commit(); em.close(); emf.close(); } This is my persistence unit: <?xml version="1.0" encoding="UTF-8"?> <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"> <persistence-unit name="JPATestPU" transaction-type="RESOURCE_LOCAL"> <provider>oracle.toplink.essentials.PersistenceProvider</provider> <class>jpatest.entities.MyEntity</class> <properties> <property name="toplink.jdbc.user" value="..."/> <property name="toplink.jdbc.password" value="..."/> <property name="toplink.jdbc.url" value="jdbc:mysql://localhost:3306/jpatest"/> <property name="toplink.jdbc.driver" value="com.mysql.jdbc.Driver"/> <property name="toplink.ddl-generation" value="create-tables"/> </properties> </persistence-unit> </persistence> When I execute the program I get the following exception in the line marked with the proper comment: Exception in thread "main" java.lang.IllegalArgumentException: Object: jpatest.entities.MyEntity[id=null] is not a known entity type. at oracle.toplink.essentials.internal.sessions.UnitOfWorkImpl.registerNewObjectForPersist(UnitOfWorkImpl.java:3212) at oracle.toplink.essentials.internal.ejb.cmp3.base.EntityManagerImpl.persist(EntityManagerImpl.java:205) at jpatest.Main.main(Main.java:...) What am I missing?

    Read the article

  • MySQL create stored procedure fails but all internal queries succeed alone?

    - by Mark
    Hi all, I just created a simple database in MySQL, and I am learning how to write stored proc's. I'm familiar with M$SQL and as far as I can see the following should work: use mydb; -- -------------------------------------------------------------------------------- -- Routine DDL -- -------------------------------------------------------------------------------- DELIMITER // CREATE PROCEDURE mydb.doStats () BEGIN CREATE TABLE IF NOT EXISTS resultprobability ( ballNumber INT NOT NULL , probability FLOAT NULL, PRIMARY KEY (ballNumber) ); CREATE TABLE IF NOT EXISTS drawProbability ( drawDate DATE NOT NULL , ball1 INT NULL , ball2 INT NULL , ball3 INT NULL , ball4 INT NULL , ball5 INT NULL , ball6 INT NULL , ball7 INT NULL , score FLOAT NULL , PRIMARY KEY (drawDate) ); TRUNCATE TABLE resultprobability; TRUNCATE TABLE drawprobability; INSERT INTO resultprobability (ballNumber, probability) (select resultset.ballNumber ballNumber,(count(0)/(select count(0) from resultset)) probability from resultset group by resultset.ballNumber); INSERT INTO drawProbability (drawDate, ball1, ball2, ball3, ball4, ball5, ball6, ball7, score) (select distinct r.drawDate, a.ballnumber ball1, b.ballnumber ball2, c.ballnumber ball3, d.ballnumber ball4, e.ballnumber ball5, f.ballnumber ball6,g.ballnumber ball7, ((a.probability + b.probability + c.probability + d.probability + e.probability + f.probability + g.probability)/7) score from resultset r inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 1) a on a.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 2) b on b.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 3) c on c.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 4) d on d.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 5) e on e.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 6) f on f.drawdate = r.drawDate inner join (select r.drawDate, r.ballNumber, p.probability from resultset r inner join resultprobability p on p.ballNumber = r.ballNumber where r.appearence = 7) g on g.drawdate = r.drawDate order by score desc); END // DELIMITER ; instead i get the following Executed successfully in 0.002 s, 0 rows affected. 
Line 1, column 1 Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 26 Line 6, column 1 Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')) probability from resultset group by resultset.ballNumber); INSERT INTO d' at line 1 Line 31, column 51 Error code 1064, SQL state 42000: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ') score from resultset r inner join (select r.drawDate, r.ballNumber, p.probabi' at line 1 Line 39, column 114 Execution finished after 0.002 s, 3 error(s) occurred. What am I doing wrong? I seem to have exhausted my limited mental abilities!
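    One common cause of this pattern of 1064 errors is a client tool that does not honor the DELIMITER // directive and therefore sends each semicolon-terminated fragment to the server on its own, cutting the procedure body apart. As a sanity check (a minimal sketch, not the poster’s code – mydb.demoProc and demoResult are invented names), the following should create cleanly when the whole block is submitted as one script from a DELIMITER-aware client such as the mysql command line:

    DELIMITER //
    CREATE PROCEDURE mydb.demoProc ()
    BEGIN
        -- Any number of ;-terminated statements are fine inside the body
        CREATE TABLE IF NOT EXISTS demoResult (
            id INT NOT NULL,
            PRIMARY KEY (id)
        );
        TRUNCATE TABLE demoResult;
        INSERT INTO demoResult (id) SELECT 1;
    END //
    DELIMITER ;

    If that also fails, the problem is more likely in the procedure body itself than in how the script is being split.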

    Read the article

  • Dropdownlist and Datareader

    - by salvationishere
    After trying many solutions listed on the internet I am very confused now. I have a C#/SQL web application for which I am simply trying to bind an ExecuteReader command to a Dropdownlist so the user can select a value. This is a VS2008 project on an XP OS. How it works is after the user selects a table, I use this selection as an input parameter to a method from my Datamatch.aspx.cs file. Then this Datamatch.aspx.cs file calls a method from my ADONET.cs class file. Finally this method executes a SQL procedure to return the list of columns from that table. (These are all tables in Adventureworks DB). I know that this method returns successfully the list of columns if I execute this SP in SSMS. However, I'm not sure how to tell if it works in VS or not. This should be simple. How can I do this? Here is some of my code. The T-sQL stored proc: CREATE PROCEDURE [dbo].[getColumnNames] @TableName VarChar(50) AS BEGIN SET NOCOUNT ON; SELECT col.name 'COLUMN_NAME' FROM sysobjects obj INNER JOIN syscolumns col ON obj.id = col.id WHERE obj.name = @TableName END It gives me desired output when I execute following from SSMS: exec getColumnNames 'AddressType' And the code from Datamatch.aspx.cs file currently is: // Add DropDownList Control to Placeholder private void CreateDropDownLists() { SqlDataReader dr2 = ADONET_methods.DisplayTableColumns(targettable); int NumControls = targettable.Length; DropDownList ddl = new DropDownList(); } Where ADONET_methods.DisplayTableColumns(targettable) is: public static SqlDataReader DisplayTableColumns(string tt) { SqlDataReader dr = null; string TableName = tt; string connString = "Server=(local);Database=AdventureWorks;Integrated Security = SSPI"; string errorMsg; SqlConnection conn2 = new SqlConnection(connString); SqlCommand cmd = new SqlCommand("getColumnNames"); //conn2.CreateCommand(); try { cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; SqlParameter parm = new SqlParameter("@TableName", SqlDbType.VarChar); parm.Value = "Person." + TableName.Trim(); parm.Direction = ParameterDirection.Input; cmd.Parameters.Add(parm); conn2.Open(); dr = cmd.ExecuteReader(); } catch (Exception ex) { errorMsg = ex.Message; } return dr; }

    Read the article

  • PL/SQL pre-compile and Code Quality checks in an automated build environment?

    - by Lars Corneliussen
    We build software using Hudson and Maven. We have C#, Java and, last but not least, PL/SQL sources (sprocs, packages, DDL, CRUD). For C# and Java we do unit tests and code analysis, but we don't really know the health of our PL/SQL sources before we actually publish them to the target database. Requirements There are a couple of things we want to test, in the following priority: Are the sources valid, hence "compilable"? For packages, with respect to a certain database, would they compile? Code quality: Do we have code flaws like duplicates, too-complex methods or other violations of a defined set of rules? Also, the tool must run headless (command line, Ant, ...), and we want to be able to run the analysis on a partial code base (changed sources only). Tools We did a little research and found the following tools that could potentially help: Cast Application Intelligence Platform (AIP): Seems to be a server that grasps information about "anything". Couldn't find a console version that would export in a readable format. Toad for Oracle: The Professional version is said to include something called Xpert that validates a set of rules against a code base. Sonar + PL/SQL-Plugin: Uses Toad for Oracle to display code health the Sonar way. This is for browsing the current state of the code base. Semantic Designs DMSToolkit: Quite general analysis of the source code base. Command line available? Semantic Designs Clones Detector: Detects clones. But also via command line? Fortify Source Code Analyzer: Seems to be focused on security issues. But maybe it is extensible? more... So far, Toad for Oracle together with Sonar seems to be an elegant solution. But maybe we are missing something here? Any ideas? Other products? Experiences? Related Questions on SO: http://stackoverflow.com/questions/531430/any-static-code-analysis-tools-for-stored-procedures http://stackoverflow.com/questions/839707/any-code-quality-tool-for-pl-sql http://stackoverflow.com/questions/956104/is-there-a-static-analysis-tool-for-python-ruby-sql-cobol-perl-and-pl-sql
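    For the first requirement – are the sources valid, hence compilable against a given database – one low-tech, command-line-friendly check (a minimal sketch, not one of the tools listed above) is to recompile the schema and then read the data dictionary; DBMS_UTILITY.COMPILE_SCHEMA, USER_OBJECTS and USER_ERRORS are standard Oracle features, though the exact invocation below is an assumption to verify against your version:

    -- Recompile the current schema (only invalid objects when compile_all => FALSE)
    EXEC DBMS_UTILITY.COMPILE_SCHEMA(schema => USER, compile_all => FALSE);

    -- Anything still invalid after the recompile
    SELECT object_name, object_type, status
      FROM user_objects
     WHERE status = 'INVALID';

    -- The corresponding compilation errors, suitable for failing a build
    SELECT name, type, line, position, text
      FROM user_errors
     ORDER BY name, type, sequence;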

    Read the article

  • hibernate3-maven-plugin: entiries in different maven projects, hbm2ddl fails

    - by Mike
    I'm trying to put an entity in a different maven project. In the current project I have: @Entity public class User { ... private FacebookUser facebookUser; ... public FacebookUser getFacebookUser() { return facebookUser; } ... public void setFacebookUser(FacebookUser facebookUser) { this.facebookUser = facebookUser; } Then FacebookUser (in a different maven project, that's a dependency of a current project) is defined as: @Entity public class FacebookUser { ... @Id @GeneratedValue(strategy = GenerationType.AUTO) public Long getId() { return id; } Here is my maven hibernate3-maven-plugin configuration: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>hibernate3-maven-plugin</artifactId> <version>2.2</version> <executions> <execution> <phase>process-classes</phase> <goals> <goal>hbm2ddl</goal> </goals> </execution> </executions> <configuration> <components> <component> <name>hbm2ddl</name> <implementation>jpaconfiguration</implementation> </component> </components> <componentProperties> <ejb3>false</ejb3> <persistenceunit>Default</persistenceunit> <outputfilename>schema.ddl</outputfilename> <drop>false</drop> <create>true</create> <export>false</export> <format>true</format> </componentProperties> </configuration> </plugin> Here is the error I'm getting: org.hibernate.MappingException: Could not determine type for: com.xxx.facebook.model.FacebookUser, at table: user, for columns: [org.hibernate.mapping.Column(facebook_user)] I know that FacebookUser is on the classpath because if I make facebook user transient, project compiles fine: @Transient public FacebookUser getFacebookUser() { return facebookUser; }

    Read the article

  • Foreign Key Relationships and "belongs to many"

    - by jan
    I have the following model: S belongs to T T has many S A,B,C,D,E (etc) have 1 T each, so the T should belong to each of A,B,C,D,E (etc) At first I set up my foreign keys so that in A, fk_a_t would be the foreign key on A.t to T(id), in B it'd be fk_b_t, etc. Everything looks fine in my UML (using MySQLWorkBench), but generating the yii models results in it thinking that T has many A,B,C,D (etc) which to me is the reverse. It sounds to me like either I need to have A_T, B_T, C_T (etc) tables, but this would be a pain as there are a lot of tables that have this relationship. I've also googled that the better way to do this would be some sort of behavior, such that A,B,C,D (etc) can behave as a T, but I'm not clear on exactly how to do this (I will continue to google more on this) What do you think is the better solution? UML: Here's the DDL (auto generated). Just pretend that there is more than 3 tables referencing T. -- ----------------------------------------------------- -- Table `mydb`.`T` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `mydb`.`T` ( `id` INT NOT NULL AUTO_INCREMENT , PRIMARY KEY (`id`) ) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `mydb`.`S` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `mydb`.`S` ( `id` INT NOT NULL AUTO_INCREMENT , `thing` VARCHAR(45) NULL , `t` INT NOT NULL , PRIMARY KEY (`id`) , INDEX `fk_S_T` (`id` ASC) , CONSTRAINT `fk_S_T` FOREIGN KEY (`id` ) REFERENCES `mydb`.`T` (`id` ) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `mydb`.`A` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `mydb`.`A` ( `id` INT NOT NULL AUTO_INCREMENT , `T` INT NOT NULL , `stuff` VARCHAR(45) NULL , `bar` VARCHAR(45) NULL , `foo` VARCHAR(45) NULL , PRIMARY KEY (`id`) , INDEX `fk_A_T` (`T` ASC) , CONSTRAINT `fk_A_T` FOREIGN KEY (`T` ) REFERENCES `mydb`.`T` (`id` ) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `mydb`.`B` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `mydb`.`B` ( `id` INT NOT NULL AUTO_INCREMENT , `T` INT NOT NULL , `stuff2` VARCHAR(45) NULL , `foobar` VARCHAR(45) NULL , `other` VARCHAR(45) NULL , PRIMARY KEY (`id`) , INDEX `fk_A_T` (`T` ASC) , CONSTRAINT `fk_A_T` FOREIGN KEY (`T` ) REFERENCES `mydb`.`T` (`id` ) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB; -- ----------------------------------------------------- -- Table `mydb`.`C` -- ----------------------------------------------------- CREATE TABLE IF NOT EXISTS `mydb`.`C` ( `id` INT NOT NULL AUTO_INCREMENT , `T` INT NOT NULL , `stuff3` VARCHAR(45) NULL , `foobar2` VARCHAR(45) NULL , `other4` VARCHAR(45) NULL , PRIMARY KEY (`id`) , INDEX `fk_A_T` (`T` ASC) , CONSTRAINT `fk_A_T` FOREIGN KEY (`T` ) REFERENCES `mydb`.`T` (`id` ) ON DELETE NO ACTION ON UPDATE NO ACTION) ENGINE = InnoDB;
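    For comparison, the explicit link-table variant mentioned above would look roughly like this (a sketch only; the A_T table and its column names are invented here, and the same pattern would be repeated for B_T, C_T, and so on):

    -- -----------------------------------------------------
    -- Table `mydb`.`A_T` (link table between A and T)
    -- -----------------------------------------------------
    CREATE TABLE IF NOT EXISTS `mydb`.`A_T` (
      `a_id` INT NOT NULL ,
      `t_id` INT NOT NULL ,
      PRIMARY KEY (`a_id`, `t_id`) ,
      CONSTRAINT `fk_AT_A` FOREIGN KEY (`a_id` ) REFERENCES `mydb`.`A` (`id` ) ON DELETE NO ACTION ON UPDATE NO ACTION ,
      CONSTRAINT `fk_AT_T` FOREIGN KEY (`t_id` ) REFERENCES `mydb`.`T` (`id` ) ON DELETE NO ACTION ON UPDATE NO ACTION )
    ENGINE = InnoDB;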

    Read the article

  • Self referencing a table

    - by mue
    Hello, so I'm new to NHibernate and have a problem. Perhaps somebody can help me here. Given a User class with many, many properties: public class User { public virtual Int64 Id { get; private set; } public virtual string Firstname { get; set; } public virtual string Lastname { get; set; } public virtual string Username { get; set; } public virtual string Email { get; set; } ... public virtual string Comment { get; set; } public virtual UserInfo LastModifiedBy { get; set; } } Here is some DDL for the table: CREATE TABLE USERS ( "ID" BIGINT NOT NULL , "FIRSTNAME" VARCHAR(50) NOT NULL , "LASTNAME" VARCHAR(50) NOT NULL , "USERNAME" VARCHAR(128) NOT NULL , "EMAIL" VARCHAR(128) NOT NULL , ... "LASTMODIFIEDBY" BIGINT NOT NULL , ) IN "USERSPACE1" ; The database table field 'LASTMODIFIEDBY' holds, for auditing purposes, the Id of the User who performed the insert or update. This would normally be an admin. Because the UI should display not this Int64 but the admin's name (a pattern like 'Lastname, Firstname'), I need to retrieve these values by self-referencing the USERS table. Also, a whole object of type User would be overkill because of the number of unwanted fields, so there is a UserInfo class with a much smaller footprint. public class UserInfo { public Int64 Id { get; set; } public string Firstname { get; set; } public string Lastname { get; set; } public string FullnameReverse { get { return string.Format("{0}, {1}", Lastname ?? string.Empty, Firstname ?? string.Empty); } } } So here the problem starts. Actually I have no clue how to accomplish this task. I'm not sure whether I also have to provide a mapping for the UserInfo class and not only for the User class. I'd like to integrate UserInfo as a composite-element within the mapping for the User class, but I don't know how to define the mapping between the USERS.ID and USERS.LASTMODIFIEDBY table fields. Hopefully I have described my problem clearly enough to get some hints. Thanks a lot!

    Read the article

  • JQuery not copying state dropdown list 1 to DDL2 on checkbox change

    - by user1001411
    I've looked at similar posts on the subject but none of the recommended solutions have worked for me so I'm not sure where I'm going wrong. I have a billing and shipping address form with a dropdown list for the state. Upon checking/unchecking the "billing address same as shipping" everything copies over except the state dropdown list. The state dropdown list is populated from a SQLDataSource state table. Here's my code: <script type="text/javascript" language="javascript" src="../Scripts/jquery-1.4.1.min.js"></script> <script type="text/javascript"> $(document).ready(function () { $('input:checkbox[id*=chkCopy]').change(function () { if ($(this).is(':checked')) { $('input:text[id*=TextBox5]').val($('input:text[id*=TextBox1]').val()); $('input:text[id*=TextBox7]').val($('input:text[id*=TextBox2]').val()); $('input:text[id*=TextBox9]').val($('input:text[id*=TextBox3]').val()); $('input:text[id*=TextBox12]').val($('input:text[id*=TextBox4]').val()); $('select#DropDownList6').val($("select#DropDownList1").val()); $('input:text[id*=TextBox14]').val($('input:text[id*=TextBox6]').val()); } else { $('input:text[id*=TextBox5]').val(''); $('input:text[id*=TextBox7]').val(''); $('input:text[id*=TextBox9]').val(''); $('input:text[id*=TextBox12]').val(''); $('select#DropDownList6').val(''); $('input:text[id*=TextBox14]').val(''); } }); }); and here is my SQL for the DDL: <asp:DropDownList ID="DropDownList1" runat="server" DataTextField="Descr" DataValueField="ID" DataSourceID="SqlDataSource1" Width="254"> </asp:DropDownList> <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:ASPNETDBConnectionString1 %>" SelectCommand="SELECT * FROM [xrefState]"> </asp:SqlDataSource> the other one: <asp:DropDownList ID="DropDownList6" runat="server" DataTextField="Descr" DataValueField="ID" DataSourceID="SqlDataSource5" Width="254"> </asp:DropDownList> <asp:SqlDataSource ID="SqlDataSource5" runat="server" ConnectionString="<%$ ConnectionStrings:ASPNETDBConnectionString1 %>" SelectCommand="SELECT * FROM [xrefState]"> </asp:SqlDataSource>

    Read the article

  • PHP crashing (seg-fault) under mod_fcgi, apache

    - by Andras Gyomrey
    I've been programming a site using: Zend Framework 1.11.5 (complete MVC) PHP 5.3.6 Apache 2.2.19 CentOS 5.6 i686 virtuozzo on vps cPanel WHM 11.30.1 (build 4) Mysql 5.1.56-log Mysqli API 5.1.56 The issue started here http://stackoverflow.com/questions/6769515/php-programming-seg-fault. In brief, php is giving me random segmentation-faults. [Wed Jul 20 17:45:34 2011] [error] mod_fcgid: process /usr/local/cpanel/cgi-sys/php5(11562) exit(communication error), get unexpected signal 11 [Wed Jul 20 17:45:34 2011] [warn] [client 190.78.208.30] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server [Wed Jul 20 17:45:34 2011] [error] [client 190.78.208.30] Premature end of script headers: index.php About extensions. When i compile php with "--enable-debug" flag, i have to disable this line: zend_extension="/usr/local/IonCube/ioncube_loader_lin_5.3.so" Otherwise, the server doesn't accept requests and i get a "The connection with the server was reset". It is possible that i have to disable eaccelerator too because of the same reason. I still don't get why apache gets running it some times and some others not: extension="eaccelerator.so" Anyway, after i get httpd running, seg-faults can occurr randomly. If i don't compile php with "--enable-debug" flag, i can get DETERMINISTICALLY a php crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $row = $db->fetchRow("SHOW CREATE TABLE 222AFI"); } } ?> BUT if i compile php with "--enable-debug" flag, it's really hard to get this error. I must add some complexity to make it crash. I have to be doing many paralell requests for a few seconds to get a crash: <?php class Admin_DbController extends Controller_BaseController { public function updateSqlDefinitionsAction() { $db = Zend_Registry::get('db'); $tableList = $db->listTables(); foreach ($tableList as $tableName){ $row = $db->fetchRow("SHOW CREATE TABLE " . $db->quoteIdentifier($tableName)); file_put_contents( DB_DEFINITIONS_PATH . '/' . $tableName . '.sql', $row['Create Table'] . ';' ); } } } ?> Please notice this is the same script, but creating DDL for all tables in database rather than for one. It seems that if php is heavy loaded (with extensions and me doing many paralell requests) it's when i get php to crash. About starting httpd with "-X": i've tried. The thing is, it is already hard to make php crash with --enable-debug. With "-X" option (which only enables one child process) i can't do parallel requests. So i haven't been able to create to proper debug backtrace: https://bugs.php.net/bugs-generating-backtrace.php My concrete question is, what do i do to get a coredump? root@GWT4 [~]# httpd -V Server version: Apache/2.2.19 (Unix) Server built: Jul 20 2011 19:18:58 Cpanel::Easy::Apache v3.4.2 rev9999 Server's Module Magic Number: 20051115:28 Server loaded: APR 1.4.5, APR-Util 1.3.12 Compiled using: APR 1.4.5, APR-Util 1.3.12 Architecture: 32-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... 
-D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/usr/local/apache" -D SUEXEC_BIN="/usr/local/apache/bin/suexec" -D DEFAULT_PIDLOG="logs/httpd.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="logs/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="conf/mime.types" -D SERVER_CONFIG_FILE="conf/httpd.conf"

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine. In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

    max memory / # of buffers = buffer size

    If it were that simple I wouldn’t be writing this post.

    I’ll take “boundary” for 64K, Alex

    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs. Note: This test was run on a 2-core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)

    input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
    637              5                      130867               654335
    638              5                      130867               654335
    639              5                      130867               654335
    640              5                      196403               982015
    641              5                      196403               982015
    642              5                      196403               982015

    This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

    196403 – 130867 = 65536 bytes
    65536 / 1024 = 64 KB

    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next will change based on the number of buffers being created. The number of buffers depends on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example.
    start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
    1                      191                  NULL                   NULL                 NULL
    192                    383                  3                      130867               392601
    384                    575                  3                      196403               589209
    576                    767                  3                      261939               785817
    768                    959                  3                      327475               982425
    960                    1151                 3                      393011               1179033
    1152                   1343                 3                      458547               1375641
    1344                   1535                 3                      524083               1572249
    1536                   1727                 3                      589619               1768857
    1728                   1919                 3                      655155               1965465
    1920                   2111                 3                      720691               2162073
    2112                   2303                 3                      786227               2358681
    2304                   2495                 3                      851763               2555289
    2496                   2687                 3                      917299               2751897
    2688                   2879                 3                      982835               2948505
    2880                   3071                 3                      1048371              3145113
    3072                   3263                 3                      1113907              3341721
    3264                   3455                 3                      1179443              3538329
    3456                   3647                 3                      1244979              3734937
    3648                   3839                 3                      1310515              3931545
    3840                   4031                 3                      1376051              4128153
    4032                   4096                 3                      1441587              4324761

    As you can see, there are 21 “steps” within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really refers only to the event session buffer memory) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (the 576–767 row in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory, which isn’t likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and been “stepped up” to the next buffer size, you’ll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you’re getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions, you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode.
    DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
    DECLARE @buf_size int, @part_mode varchar(8)
    SET @buf_size = 1 -- Set to the beginning of your max_memory range (KB)
    SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate
    WHILE @buf_size <= 4096 -- Set to the end of your max_memory range (KB)
    BEGIN
        BEGIN TRY
            IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')
                DROP EVENT SESSION buffer_size_test ON SERVER
            DECLARE @session nvarchar(max)
            SET @session = 'create event session buffer_size_test on server
                            add event sql_statement_completed
                            add target ring_buffer
                            with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
            EXEC sp_executesql @session
            SET @session = 'alter event session buffer_size_test on server
                            state = start'
            EXEC sp_executesql @session
            INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
        END TRY
        BEGIN CATCH
            INSERT @buf_size_output (input_memory_kb)
                SELECT @buf_size
        END CATCH
        SET @buf_size = @buf_size + 1
    END
    DROP EVENT SESSION buffer_size_test ON SERVER
    SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb, total_regular_buffers, regular_buffer_size, total_buffer_size
    FROM @buf_size_output
    GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike
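    As a quick spot check outside the loop above (a minimal sketch using the same DMV columns the script relies on), you can inspect the buffer layout of any session that is currently started:

    SELECT name,
           total_regular_buffers,
           regular_buffer_size,
           total_buffer_size
    FROM sys.dm_xe_sessions;  -- one row per started event session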

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is written based on the SafePeak, which is available for free download. Today, I’d like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. Safepeak’s software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I’d like to look more closely at some of these options, as some of these capabilities could help you address lagging database and performance on your systems. To better understand the available options, it is best to start by understanding the difference between the usual “Basic Caching” vs. SafePeak’s “Dynamic Caching”. Basic Caching Basic Caching (or the stale and static cache) is an ability to put the results from a query into cache for a certain period of time. It is based on TTL, or Time-to-live, and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified due to DML commands (update/insert/delete), the cache will still hold the same obsolete query data. Meaning that with the Basic Caching is really static / stale cache.  As you can tell, this approach has its limitations. Dynamic Caching Dynamic Caching (or the non-stale cache) is an ability to put the results from a query into cache while maintaining the cache transaction awareness looking for possible data modifications. The modifications can come as a result of: DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands complex cases of stored procedures with multiple levels of internal stored procedures logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) become the actual data updates commands. Now that we have a basic understanding of the differences between “basic” and “dynamic” caching, let’s dive in deeper. SafePeak: A comprehensive and versatile caching platform SafePeak comes with a wide range of caching options. Some of SafePeak’s caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and Data managers to reach excellent performance acceleration and application scalability for  a wide range of business cases and applications. Automated caching of SQL Queries: Fully/semi-automated caching of all “read” SQL queries, containing any types of data, including Blobs, XMLs, Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries, categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures; Automated caching of Stored Procedures: Fully or semi-automated caching of all read” stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. 
All procedures are analyzed in advance by SafePeak’s  Metadata-Learning process, their SQL schemas are parsed – resulting with a full understanding of the underlying code, objects dependencies (tables, views, functions, sub-procedures) enabling automated or semi-automated (manually review and activate by a mouse-click) cache activation, with full understanding of the transaction logic for cache real-time invalidation; Transaction aware cache: Automated cache awareness for SQL transactions (SQL and in-procs); Dynamic SQL Caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server load for parsing time and delivering high response time value even in most complicated use-cases; Fully Automated Caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as “read and deterministic” are automatically activated for caching; Semi-Automated Caching: SQL Patterns categorized as “Read and Non deterministic” are patterns of SQL queries and stored procedures that contain reference to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator and in usually most of them are activated manually for caching (point and click activation); Fully Dynamic Caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of “write” commands (a DML or a stored procedure) to one of relevant tables. A default setting; Semi Dynamic Caching: A manual cache configuration option enabling reducing the sensitivity of specific SQL Patterns to “write” commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archive order details), or when the application sensitivity to fresh data is not critical and can be stale for short period of time (gaining better performance and reduced load); Scheduled Cache Eviction: A manual cache configuration option enabling scheduling SQL Pattern cache eviction based on certain time(s) during a day. A very useful optimization technique when (for example) certain SQL Patterns can be cached but are time sensitive. Example: “select customers that today is their birthday”, an SQL with getdate() function, which can and should be cached, but the data stays relevant only until 00:00 (midnight); Parsing Exceptions Management: Stored procedures that were not fully parsed by SafePeak (due to too complex dynamic SQL or unfamiliar syntax), are signed as “Dynamic Objects” with highest transaction safety settings (such as: Full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness, provides easy configuration interface, allowing you to improve cache hits and reduce cache global evictions. Usually this is the first configuration in a deployment; Overriding Settings of Stored Procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, in case a stored procedure SP1 has an “insert” into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a “logging or instrumentation” table left by developers. 
By overriding the settings a user can allow caching of the problematic stored procedure; Advanced Cache Warm-Up: Creating an XML-based list of queries and stored procedure (with lists of parameters) for periodically automated pre-fetching and caching. An advanced tool allowing you to handle more rare but very performance sensitive queries pre-fetch them into cache allowing high performance for users’ data access; Configuration Driven by Deep SQL Analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL Analytics and Performance Monitoring. Reduce troubleshooting from days to minutes with database objects and SQL Patterns heat-map. The performance driven configuration helps you to focus on the most important settings that bring you the highest performance gains. Use of SafePeak SQL Analytics allows continuous performance monitoring and analysis, easy identification of bottlenecks of both real-time and historical data; Cloud Ready: Available for instant deployment on Amazon Web Services (AWS). As you can see, there are many options to configure SafePeak’s SQL Server database and application acceleration caching technology to best fit a lot of situations. If you’re not familiar with their technology, they offer free-trial software you can download that comes with a free “help session” to help get you started. You can access the free trial here. Also, SafePeak is available to use on Amazon Cloud. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Try a sample: Using the counter predicate for event sampling

    - by extended_events
    Extended Events offers a rich filtering mechanism, called predicates, that allows you to reduce the number of events you collect by specifying criteria that will be applied during event collection. (You can find more information about predicates in Using SQL Server 2008 Extended Events (by Jonathan Kehayias).) By evaluating predicates early in the event firing sequence we can reduce the performance impact of collecting events by stopping event collection when the criteria are not met. You can specify predicates on both event fields and on a special object called a predicate source. Predicate sources are similar to actions in that they are typically related to some type of global information available from the server. You will find that many of the actions available in Extended Events have equivalent predicate sources, but actions and predicate sources are not the same thing. Applying predicates, whether on a field or a predicate source, is very similar to what you are used to in T-SQL in terms of how they work; you pick some field/source and compare it to a value, for example, session_id = 52. There is one predicate source that merits special attention though, not just for its special use, but for how the order of predicate evaluation impacts the behavior you see. I’m referring to the counter predicate source. The counter predicate source gives you a way to sample a subset of events that otherwise meet the criteria of the predicate; for example, you could collect every other event, or only every tenth event.

    Simple Counting

    The counter predicate source works by creating an in-memory counter that increments every time the predicate statement is evaluated. Here is a simple example with my favorite event, sql_statement_completed, that only collects the second statement that is run. (OK, that’s not much of a sample, but this is for demonstration purposes.) Here is the session definition:

    CREATE EVENT SESSION counter_test ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text)
        WHERE package0.counter = 2)
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)

    You can find general information about the session DDL syntax in BOL and from Pedro’s post Introduction to Extended Events. The important part here is the WHERE statement that says I only want the event where package0.counter = 2; in other words, only the second instance of the event. Notice that I need to provide the package name along with the predicate source. You don’t need to provide the package name if you’re using event fields, only for predicate sources. Let’s say I run the following test queries:

    -- Run three statements to test the session
    SELECT 'This is the first statement'
    GO
    SELECT 'This is the second statement'
    GO
    SELECT 'This is the third statement';
    GO

    Once you return the event data from the ring buffer and parse the XML (see my earlier post on reading event data) you should see something like this:

    event_name               sql_text
    sql_statement_completed  SELECT ‘This is the second statement’

    You can see that only the second statement from the test was actually collected. (Feel free to try this yourself. Check out what happens if you remove the WHERE statement from your session. Go ahead, I’ll wait.)

    Percentage Sampling

    OK, so that wasn’t particularly interesting, but you can probably see how this could be. For example, let’s say I need a 25% sample of the statements executed on my server for some type of QA analysis; that might be more interesting than just the second statement.
    All comparisons of predicates are handled using an object called a predicate comparator; the simple comparisons such as equals, greater than, etc. are mapped to the common mathematical symbols you know and love (e.g. = and >), but to do the less common comparisons you will need to use the predicate comparators directly. You would probably look to the MOD operation to do this type of sampling; we would too, but we don’t call it MOD, we call it divides_by_uint64. This comparator evaluates whether one number is divisible by another with no remainder. The general syntax for using a predicate comparator is pred_comp(field, value); field is always first and value is always second. So let’s take a look at how the session changes to answer our new question of 25% sampling:

    CREATE EVENT SESSION counter_test_25 ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text)
        WHERE package0.divides_by_uint64(package0.counter, 4))
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
    GO

    Here I’ve replaced the simple equivalency check with the divides_by_uint64 comparator to check whether the counter is evenly divisible by 4, which gives us back every fourth record. I’ll leave it as an exercise for the reader to test this session.

    Why order matters

    I indicated at the start of this post that order matters when it comes to the counter predicate – it does. Like most other predicate systems, Extended Events evaluates the predicate statement from left to right; as soon as the predicate statement is proven false we abandon evaluation of the remainder of the statement. The counter predicate source is only incremented when it is evaluated, so whether or not the counter is incremented will depend on where it is in the predicate statement and whether a previous criterion made the predicate false or not. Here is a generic example:

    Pred1: (WHERE statement_1 AND package0.counter = 2)
    Pred2: (WHERE package0.counter = 2 AND statement_1)

    Let’s say I cause a number of events as follows and examine what happens to the counter predicate source.

    Iteration  Statement        Pred1 Counter  Pred2 Counter
    A          Not statement_1  0              1
    B          statement_1      1              2
    C          Not statement_1  1              3
    D          statement_1      2              4

    As you can see, in the case of Pred1, statement_1 is evaluated first; when it fails (A and C), predicate evaluation is stopped and the counter is not incremented. With Pred2 the counter is evaluated first, so it is incremented on every iteration of the event and the remaining parts of the predicate are then evaluated. In this example, Pred1 would return an event for D while Pred2 would return an event for B. But wait, there is an interesting side effect here; consider Pred2 if I had run my statements in the following order:

    Not statement_1
    Not statement_1
    statement_1
    statement_1

    In this case I would never get an event back from the system, because at the point at which counter = 2 the rest of the predicate evaluates as false, so the event is not returned. If you’re using the counter predicate source for sampling and you’re not getting the expected events, or any events, check the order of the predicate criteria. As a general rule I’d suggest that the counter criterion should be the last element of your predicate statement, since that will ensure that your sampling rate applies to the set of event records defined by the rest of your predicate. Aside: I’m interested in hearing about uses for putting the counter predicate criteria earlier in the predicate statement. If you have one, post it in a comment to share with the class.
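    To make the “counter criterion last” rule concrete, here is a minimal sketch (not from the original post) of a session that keeps roughly every tenth statement from a single database; the database_id value of 7 and the session name are placeholders you would substitute with your own:

    CREATE EVENT SESSION counter_last_sample ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text)
        -- database filter first, counter criterion last, so the 1-in-10
        -- sampling applies only to events that already passed the filter
        WHERE (sqlserver.database_id = 7
               AND package0.divides_by_uint64(package0.counter, 10)))
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
    GO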
    - Mike

    Read the article

  • CodePlex Daily Summary for Saturday, June 16, 2012

    CodePlex Daily Summary for Saturday, June 16, 2012Popular ReleasesCosmos (C# Open Source Managed Operating System): Release 92560: Prerequisites Visual Studio 2010 - Any version including Express. Express users must also install Visual Studio 2010 Integrated Shell runtime VMWare - Cosmos can run on real hardware as well as other virtualization environments but our default debug setup is configured for VMWare. VMWare Player (Free). or Workstation VMWare VIX API 1.11AutoUpdaterdotNET : Autoupdate for VB.NET and C# Developer: AutoUpdater.NET 1.1: Release Notes *New feature added that allows user to select remind later interval.Sumzlib: API document: API documentMicrosoft SQL Server Product Samples: Database: AdventureWorks 2008 OLTP Script: Install AdventureWorks2008 OLTP database from script The AdventureWorks database can be created by running the instawdb.sql DDL script contained in the AdventureWorks 2008 OLTP Script.zip file. The instawdb.sql script depends on two path environment variables: SqlSamplesDatabasePath and SqlSamplesSourceDataPath. The SqlSamplesDatabasePath environment variable is set to the default Microsoft ® SQL Server 2008 path. You will need to change the SqlSamplesSourceDataPath environment variable to th...HigLabo: HigLabo_20120613: Bug fix HigLabo.Mail Decode header encoded by CP1252Jasc (just another script compressor): 1.3.1: Updated Ajax Minifier to 4.55.WipeTouch, a jQuery touch plugin: 1.2.0: Changes since 1.1.0: New: wipeMove event, triggered while moving the mouse/finger. New: added "source" to the result object. Bug fix: sometimes vertical wipe events would not trigger correctly. Bug fix: improved tapToClick handler. General code refactoring. Windows Phone 7 is not supported, yet! Its behaviour is completely broken and would require some special tricks to make it work. Maybe in the future...Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3026 (June 2012): Fixes: round( 0.0 ) local TimeZone name TimeZone search compiling multi-script-assemblies PhpString serialization DocDocument::loadHTMLFile() token_get_all() parse_url()BlackJumboDog: Ver5.6.4: 2012.06.13 Ver5.6.4  (1) Web???????、???POST??????????????????Yahoo! UI Library: YUI Compressor for .Net: Version 2.0.0.0 - Ferret: - Merging both 3.5 and 2.0 codebases to a single .NET 2.0 assembly. - MSBuild Task. - NAnt Task.Bumblebee: Version 0.3.1: Changed default config values to decent ones. Restricted visibility of Hive.fs to internal. Added some XML documentation. Added Array.shuffle utility. The dll is also available on NuGet My apologies, the initial source code referenced was missing one file which prevented it from building The source code contains two examples, one in C#, one in F#, illustrating the usage of the framework on the Travelling Salesman Problem: Source CodeSharePoint XSL Templates: SPXSLT 0.0.9: Added new template FixAmpersands. Fixed the contents of the MultiSelectValueCheck.xsl file, which was missing the stylesheet wrapper.ExcelFileEditor: .CS File: nothingBizTalk Scheduled Task Adapter: Release 4.0: Works with BizTalk Server 2010. Compiled in .NET Framework 4.0. In this new version are available small improvements compared to the current version (3.0). We can highlight the following improvements or changes: 24 hours support in “start time” property. Previous versions had an issue with setting the start time, as it shown 12 hours watch but no AM/PM. Daily scheduler review. 
Solved a small bug on Daily Properties: unable to switch between “Every day” and “on these days” Installation e...Weapsy - ASP.NET MVC CMS: 1.0.0 RC: - Upgrade to Entity Framework 4.3.1 - Added AutoMapper custom version (by nopCommerce Team) - Added missed model properties and localization resources of Plugin Definitions - Minor changes - Fixed some bugsXenta Framework - extensible enterprise n-tier application framework: Xenta Framework 1.8.0 Beta: Catalog and Publication reviews and ratings Store language packs in data base Improve reporting system Improve Import/Export system A lot of WebAdmin app UI improvements Initial implementation of the WebForum app DB indexes Improve and simplify architecture Less abstractions Modernize architecture Improve, simplify and unify API Simplify and improve testing A lot of new unit tests Codebase refactoring and ReSharpering Utilize Castle Windsor Utilize NHibernate ORM ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.55: Properly handle IE extension to CSS3 grammar that allows for multiple parameters to functional pseudo-class selectors. add new switch -braces:(new|same) that affects where opening braces are placed in multi-line output. The default, "new" puts them on their own new line; "same" outputs them at the end of the previous line. add new optional values to the -inline switch: -inline:(force|noforce), which can be combined with the existing boolean value via comma-separators; value "force" (which...Microsoft Media Platform: Player Framework: MMP Player Framework 2.7 (Silverlight and WP7): Additional DownloadsSMFv2.7 Full Installer (MSI) - This will install everything you need in order to develop your own SMF player application, including the IIS Smooth Streaming Client. It only includes the assemblies. If you want the source code please follow the link above. Smooth Streaming Sample Player - This is a pre-built player that includes support for IIS Smooth Streaming. You can configure the player to playback your content by simplying editing a configuration file - no need to co...Liberty: v3.2.1.0 Release 10th June 2012: Change Log -Added -Liberty is now digitally signed! If the certificate on Liberty.exe is missing, invalid, or does not state that it was developed by "Xbox Chaos, Open Source Developer," your copy of Liberty may have been altered in some (possibly malicious) way. -Reach Mass biped max health and shield changer -Fixed -H3/ODST Fixed all of the glitches that users kept reporting (also reverted the changes made in 3.2.0.2) -Reach Made some tag names clearer and more consistent between m...Media Companion: Media Companion 3.503b: It has been a while, so it's about time we release another build! Major effort has been for fixing trailer downloads, plus a little bit of work for episode guide tag in TV show NFOs.New Projects.NinJa (dotNinja): An extensive JavaScript Framework revolving around principles found in .NET and aiming to integrate full Intellisense support. bab-rizg: solve unemployment problemBizTalk Multi-part Message Attachments Zipper Pipeline Component: This pipeline component replaces all attachments of a multi-part message, in a send pipeline, for its zipped equivalent.Boggle.Net: A basic implementation of Boggle for WPF.CFScript: CFScript is an ANT-like scripting system for Compact Framework. 
Tasks like copying files, setting registry values o install CAB files can be done with CFScript.Diablo3: Diablo3Dygraphs.NET: Dygraphs.NETDynamics CRM plugin for nopCommerce: This plugins is a bridge between nopCommerce and Dynamics CRM. nms.gaming: Place holderProject Bright Star: Project Bright Star. Deal with it.RDFSharp: RDFSharp is a library designed to ease the development of .NET applications based on the RDF and Semantic Web data model.SlamCMS: An application framework that allows you to build content managed sites leveraging SharePoint 2010 for publishing with tools to query and manifest your data.test02: no

    Read the article

  • How to reload jqGrid in ASP.NET MVC when I change a dropdownlist

    - by sandeep
    what is wrong in this code? when i change drop down list,the grid takes old value of ddl only, not taken newely selected values why? <%-- $(function() { $("#StateId").change(function() { $('#TheForm').submit(); }); }); $(function() { $("#CityId").change(function() { $('#TheForm').submit(); }); }); $(function() { $("#HospitalName").change(function() { $('#TheForm').submit(); }); }); --% var gridimgpath = '/scripts/themes/coffee/images'; var gridDataUrl = '/Claim/DynamicGridData/'; jQuery(document).ready(function() { // $("#btnSearch").click(function() { var StateId = document.getElementById('StateId').value; var CityId = document.getElementById('CityId').value; var HName = document.getElementById('HospitalName').value; // alert(CityId); // alert(StateId); // alert(HName); if (StateId 0 && CityId == '' && HName == '') { CityId = 0; HName = 'Default'.toString(); // alert("elseif0" + HName.toString()); } else if (CityId 0 && StateId == '') { // alert("elseif1"); alert("Please Select State..") } else if (CityId 0 && StateId 0 && HName == '') { // alert("elseif2"); alert(CityId); alert(StateId); HName = "Default"; } else { // alert("else"); StateId = 0; CityId = 0; HName = "Default"; } jQuery("#list").jqGrid({ url: gridDataUrl + '?StateId=' + StateId + '&CityId=' + CityId + '&hospname=' + HName, datatype: 'json', mtype: 'GET', colNames: ['Id', 'HospitalName', 'Address', 'City', 'District', 'FaxNumber', 'PhoneNumber'], colModel: [{ name: 'HospitalId', index: 'HospitalId', width: 40, align: 'left' }, { name: 'HospitalName', index: 'HospitalName', width: 40, align: 'left' }, { name: 'Address1', Address: 'Address1', width: 300 }, { name: 'CityName', index: 'CityName', width: 100 }, { name: 'DistName', index: 'DistName', width: 100 }, { name: 'FaxNo', index: 'FaxNo', width: 100 }, { name: 'ContactNo1', index: 'PhoneNumber', width: 100 } ], pager: jQuery('#pager'), rowNum: 10, rowList: [5, 10, 20, 50], // sortname: 'Id,', sortname: '1', sortorder: "asc", viewrecords: true, //multiselect: true, //multikey: "ctrlKey", // imgpath: '/scripts/themes/coffee/images', imgpath: gridimgpath, caption: 'Hospital Search', width: 700, height: 250 }); $(function() { // $("#btnSearch").click(function() { $('#CityId').change(function() { alert("kjasd"); // Set the vars whenever the date range changes and then filter the results StateId = document.getElementById('StateId').value; CityId = document.getElementById('CityId').value; HName = 'default'; setGridUrl(); }); // Set the date range textbox values $('#StateId').val(StateId.toString()); $('#CityId').val(CityId.toString()); // Set the grid json url to get the data to display setGridUrl(); }); function setGridUrl() { alert(StateId); alert(CityId); alert("hi"); var newGridDataUrl = gridDataUrl + '?StateId=' + StateId + '&CityId=' + CityId + '&hospname=' + HName; jQuery('#list').jqGrid('setGridParam', { url: newGridDataUrl }).trigger("reloadGrid"); } // }); }); <%--<%using (Html.BeginForm("HospitalSearch", "Claim", FormMethod.Post, new { id = "TheForm" })) --% Hospital Search   State :   City :   Hospital Name :       <div id="jqGridContainer"> <table id="list" class="scroll" cellpadding="0" cellspacing="0"></table> </div>

    Read the article

  • How to query a table returned by a stored procedure within a procedure

    - by Shantanu Gupta
    I have a stored procedure that is performing some ddl dml operations. It retrieves a data after processing data from CTE and cross apply and other such complex things. Now this returns me a 4 tables which gets binded to various sources at frontend. Now I want to use one of the table to further processing so as to get more usefull information from it. eg. This table would be containing approx 2000 records at most of which i want to get records that belongs to lodging only. PK_CATEGORY_ID DESCRIPTION FK_CATEGORY_ID IMMEDIATE_PARENT Department_ID Department_Name DESCRIPTION_HIERARCHY DEPTH IS_ACTIVE ID_PATH DESC_PATH -------------------- -------------------------------------------------- -------------------- -------------------------------------------------- -------------------- -------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------- ----------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1 Food NULL NULL 1 Food (Food) Food 0 1 0 Food 5 Chinese 1 Food 1 Food (Food) ----Chinese 1 1 1 Food->Chinese 14 X 5 Chinese 1 Food (Food) --------X 2 1 1->5 Food->Chinese->X 15 Y 5 Chinese 1 Food (Food) --------Y 2 1 1->5 Food->Chinese->Y 65 asdasd 5 Chinese 1 Food (Food) --------asdasd 2 1 1->5 Food->Chinese->asdasd 66 asdas 5 Chinese 1 Food (Food) --------asdas 2 1 1->5 Food->Chinese->asdas 8 Italian 1 Food 1 Food (Food) ----Italian 1 1 1 Food->Italian 48 hfghfgh 1 Food 1 Food (Food) ----hfghfgh 1 1 1 Food->hfghfgh 55 Asd 1 Food 1 Food (Food) ----Asd 1 1 1 Food->Asd 2 Lodging NULL NULL 2 Lodging (Lodging) Lodging 0 1 0 Lodging 3 Room 2 Lodging 2 Lodging (Lodging) ----Room 1 1 2 Lodging->Room 4 Floor 3 Room 2 Lodging (Lodging) --------Floor 2 1 2->3 Lodging->Room->Floor 9 First 4 Floor 2 Lodging (Lodging) ------------First 3 1 2->3->4 Lodging->Room->Floor->First 10 Second 4 Floor 2 Lodging (Lodging) ------------Second 3 1 2->3->4 Lodging->Room->Floor->Second 11 Third 4 Floor 2 Lodging (Lodging) ------------Third 3 1 2->3->4 Lodging->Room->Floor->Third 29 Fourth 4 Floor 2 Lodging (Lodging) ------------Fourth 3 1 2->3->4 Lodging->Room->Floor->Fourth 12 Air Conditioned 3 Room 2 Lodging (Lodging) --------Air Conditioned 2 1 2->3 Lodging->Room->Air Conditioned 20 With Balcony 12 Air Conditioned 2 Lodging (Lodging) ------------With Balcony 3 1 2->3->12 Lodging->Room->Air Conditioned->With Balcony 24 Mountain View 20 With Balcony 2 Lodging (Lodging) ----------------Mountain View 4 1 2->3->12->20 Lodging->Room->Air Conditioned->With Balcony->Mountain View 25 Ocean View 20 With Balcony 2 Lodging (Lodging) ----------------Ocean View 4 1 2->3->12->20 Lodging->Room->Air Conditioned->With Balcony->Ocean View 26 Garden View 20 With Balcony 2 Lodging (Lodging) ----------------Garden View 4 1 2->3->12->20 Lodging->Room->Air Conditioned->With Balcony->Garden View 52 Smoking 20 With Balcony 2 Lodging 
(Lodging) ----------------Smoking 4 1 2->3->12->20 Lodging->Room->Air Conditioned->With Balcony->Smoking 21 Without Balcony 12 Air Conditioned 2 Lodging (Lodging) ------------Without Balcony 3 1 2->3->12 Lodging->Room->Air Conditioned->Without Balcony 13 Non Air Conditioned 3 Room 2 Lodging (Lodging) --------Non Air Conditioned 2 1 2->3 Lodging->Room->Non Air Conditioned 22 With Balcony 13 Non Air Conditioned 2 Lodging (Lodging) ------------With Balcony 3 1 2->3->13 Lodging->Room->Non Air Conditioned->With Balcony 71 EA 3 Room 2 Lodging (Lodging) --------EA 2 1 2->3 Lodging->Room->EA 50 Casabellas 2 Lodging 2 Lodging (Lodging) ----Casabellas 1 1 2 Lodging->Casabellas 51 North Beach 50 Casabellas 2 Lodging (Lodging) --------North Beach 2 1 2->50 Lodging->Casabellas->North Beach 40 Fooding NULL NULL 40 Fooding (Fooding) Fooding 0 1 0 Fooding 41 Pizza 40 Fooding 40 Fooding (Fooding) ----Pizza 1 1 40 Fooding->Pizza 45 Onion 41 Pizza 40 Fooding (Fooding) --------Onion 2 1 40->41 Fooding->Pizza->Onion 47 Extra Cheeze 41 Pizza 40 Fooding (Fooding) --------Extra Cheeze 2 1 40->41 Fooding->Pizza->Extra Cheeze 77 Burger 40 Fooding 40 Fooding (Fooding) ----Burger 1 1 40 Fooding->Burger This result is being obtained to me using some stored procedure which contains some DML operations as well. i want something like this select description from exec spName where fk_category_id=5 Remember that this spName is returning me 4 tables of which i want to perform some query on one of the table whose index will be known to me. I dont have to send it to UI before querying further. I am using Sql Server 2008 but would like a compatible solution for 2005 also.
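Since the goal is effectively “select description from exec spName where fk_category_id = 5”, a rough sketch of the usual workaround may help: capture the procedure’s output in a temporary table with INSERT ... EXEC and filter that. This works on both SQL Server 2005 and 2008, but only if every result set returned by spName has the same shape as the temp table; with four differently shaped result sets the procedure itself has to be refactored (for example, exposing the wanted result set through its own procedure). The column list and data types below are guesses based on the sample output.
    -- Temp table shaped like the result set we care about (column types are assumptions).
    CREATE TABLE #CategoryTree
    (
        PK_CATEGORY_ID        INT,
        DESCRIPTION           NVARCHAR(100),
        FK_CATEGORY_ID        INT,
        IMMEDIATE_PARENT      NVARCHAR(100),
        Department_ID         INT,
        Department_Name       NVARCHAR(100),
        DESCRIPTION_HIERARCHY NVARCHAR(512),
        DEPTH                 INT,
        IS_ACTIVE             BIT,
        ID_PATH               NVARCHAR(512),
        DESC_PATH             NVARCHAR(512)
    );

    -- Capture the procedure's output, then query it like any other table.
    INSERT INTO #CategoryTree
    EXEC dbo.spName;   -- spName is the procedure from the question

    SELECT DESCRIPTION
    FROM #CategoryTree
    WHERE FK_CATEGORY_ID = 5;

    DROP TABLE #CategoryTree;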

    Read the article

  • Input string was not in the correct format using int.Parse

    - by JDWebs
    I have recently been making a login 'representation' which is not secure. So before answering, please note I am aware of security risks etc., and this will not be on a live site. Also note I am a beginner :P. For my login representation, I am using LINQ to compare values of a DDL to select a username and a Textbox to enter a password, when a login button is clicked. However, an error is thrown 'Input string was not in the correct format', when using int.Parse. Front End: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Login_Test.aspx.cs" Inherits="Login_Login_Test" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Login Test</title> </head> <body> <form id="LoginTest" runat="server"> <div> <asp:DropDownList ID="DDL_Username" runat="server" Height="20px" DataTextField="txt"> </asp:DropDownList> <br /> <asp:TextBox ID="TB_Password" runat="server" TextMode="Password"></asp:TextBox> <br /> <asp:Button ID="B_Login" runat="server" onclick="B_Login_Click" Text="Login" /> <br /> <asp:Literal ID="LI_Result" runat="server"></asp:Literal> </div> </form> </body> </html> Back End: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; public partial class Login_Login_Test : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (!Page.IsPostBack) { Binder(); } } private void Binder() { using (DataClassesDataContext db = new DataClassesDataContext()) { DDL_Username.DataSource = from x in db.DT_Honeys select new { id = x.UsernameID, txt = x.Username }; DDL_Username.DataValueField = "id"; DDL_Username.DataTextField = "txt"; DDL_Username.DataBind(); } } protected void B_Login_Click(object sender, EventArgs e) { if (TB_Password.Text != "") { using (DataClassesDataContext db = new DataClassesDataContext()) { DT_Honey blah = new DT_Honey(); blah = db.DT_Honeys.SingleOrDefault(x => x.UsernameID == int.Parse(DDL_Username.SelectedValue.ToString())); if (blah == null) { LI_Result.Text = "Something went wrong :/"; } if (blah.Password == TB_Password.Text) { LI_Result.Text = "Credentials recognised :-)"; } else { LI_Result.Text = "Error with credentials :-("; } } } } } I am aware this problem is very common, but none of the help I have found online is useful/relevant. Any help/suggestions appreciated; thank you for your time :-).

    Read the article

  • Programmatically add UserControl with events

    - by schaermu
    Hi everybody I need to add multiple user controls to a panel for further editing of the contained data. My user control contains some panels, dropdown lists and input elements, which are populated in the user control's Page_Load event. protected void Page_Load(object sender, EventArgs e) { // populate comparer ddl from enum string[] enumNames = Enum.GetNames(typeof (SearchComparision)); var al = new ArrayList(); for (int i = 0; i < enumNames.Length; i++) al.Add(new {Value = i, Name = enumNames[i]}); scOperatorSelection.DataValueField = "Value"; scOperatorSelection.DataTextField = "Name"; ... The data to be displayed is added to the user control as a Field, defined above Page_Load. The signature of the events is the following: public delegate void ControlStateChanged(object sender, SearchCriteriaEventArgs eventArgs); public event ControlStateChanged ItemUpdated; public event ControlStateChanged ItemRemoved; public event ControlStateChanged ItemAdded; The update button on the user control triggers the following method: protected void UpdateCriteria(object sender, EventArgs e) { var searchCritCtl = (SearchCriteria) sender; var scEArgs = new SearchCriteriaEventArgs { TargetCriteria = searchCritCtl.CurrentCriteria.CriteriaId, SearchComparision = ParseCurrentComparer(searchCritCtl.scOperatorSelection.SelectedValue), SearchField = searchCritCtl.scFieldSelection.SelectedValue, SearchValue = searchCritCtl.scFilterValue.Text, ClickTarget = SearchCriteriaClickTarget.Update }; if (ItemUpdated != null) ItemUpdated(this, scEArgs); } The rendering page fetches the data objects from a storage backend and displays it in it's Page_Load event. This is the point where it starts getting tricky: i connect to the custom events! int idIt = 0; foreach (var item in _currentSearch.Items) { SearchCriteria sc = (SearchCriteria)LoadControl("~/content/controls/SearchCriteria.ascx"); sc.ID = "scDispCtl_" + idIt; sc.ControlMode = SearchCriteriaMode.Display; sc.CurrentCriteria = item; sc.ItemUpdated += CriteriaUpdated; sc.ItemRemoved += CriteriaRemoved; pnlDisplayCrit.Controls.Add(sc); idIt++; } When first rendering the page, everything is displayed fine, i get all my data. When i trigger an update event, the user control event is fired correctly, but all fields and controls of the user control are NULL. After a bit of research, i had to come to the conclusion that the event is fired before the controls are initialized... Is there any way to prevent such behavior / to override the page lifecycle somehow? I cannot initialize the user controls in the page's Init-event, because i have to access the Session-Store (not initialized in Page_Init). Any advice is welcome... EDIT: Since we hold all criteria informations in the storage backend (including the count of criteria) and that store uses the userid from the session, we cannot use Page_Init... just for clarification EDIT #2: I managed to get past some of the problems. Since i'm now using simple types, im able to bind all the data declaratively (using a repeater with a simple ItemTemplate). It is bound to the control, they are rendered in correct fashion. On Postback, all the data is rebound to the user control, data is available in the OnDataBinding and OnLoad events, everything looks fine. But as soon it enters the real event (bound to the button control of the user control), all field values are lost somehow... Does anybody know, how the page lifecycle continues to process the request after Databinding/Loading ? I'm going crazy about this issue...

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. To optimize for server performance and help ensure that Extended Events doesn’t block the server, events that can’t be published to a buffer because the buffer is full are dropped. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
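A quick way to run the two DMV checks just described (the session name here is a placeholder for your own event session):
    -- Are we dropping events, and were any of them too large for the buffer?
    SELECT s.name,
           s.dropped_event_count,          -- non-zero: buffers too small or draining too slowly
           s.largest_event_dropped_size,   -- compare against regular_buffer_size
           s.regular_buffer_size,
           s.total_regular_buffers
    FROM sys.dm_xe_sessions AS s
    WHERE s.name = N'MySession';           -- placeholder session name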
Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used: None: 3 buffers (fixed); Node: 3 * number_of_nodes; CPU: 2.5 * number_of_cpus. Here are some examples of what this means for different Node/CPU counts: with 2 CPUs on 1 Node – None: 3 buffers, Node: 3 buffers, CPU: 5 buffers; with 6 CPUs on 2 Nodes – None: 3 buffers, Node: 6 buffers, CPU: 15 buffers; with 40 CPUs on 5 Nodes – None: 3 buffers, Node: 15 buffers, CPU: 100 buffers. Aside: Buffer size on multi-processor computers As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (eg. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
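Putting the options discussed so far together, here is a rough, illustrative session definition; the event, predicate, target path, and option values are placeholders, not recommendations:
    CREATE EVENT SESSION [options_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (WHERE duration > 1000000)                      -- predicate to avoid over-collecting
    ADD TARGET package0.asynchronous_file_target        -- an asynchronous target (SQL Server 2008 name;
        (SET filename = N'C:\Temp\options_demo.xel')    -- later versions call it event_file)
    WITH
    (
        MAX_MEMORY = 8 MB,                              -- larger event buffers if events are dropped
        MAX_DISPATCH_LATENCY = 5 SECONDS,               -- drain buffers sooner than the 30-second default
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, -- keep the default loss behavior
        MAX_EVENT_SIZE = 16 MB,                         -- only if largest_event_dropped_size says you need it
                                                        -- (must be larger than MAX_MEMORY when set)
        MEMORY_PARTITION_MODE = NONE                    -- PER_NODE / PER_CPU only if contention shows up
    );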
You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike

    Read the article

  • CodePlex Daily Summary for Friday, June 04, 2010

    CodePlex Daily Summary for Friday, June 04, 2010New Projects23 Umbraco addons: 23 Umbraco addonsAdd-ons for EPiServer Relate+: In the Add-ons for EPiServer Relate+ you will find add-ons, extensions and modules that work together with EPiServer Relate+.Advanced Mail Merge (AMM) for Microsoft Office: Advanced Mail Merge for Microsoft Word 2007/2010, offers great extensable functionality: - Merge to document (PDF) - Merge to attachment - Use Out...Cenobith RLS Sample: Simple implementation of Row Level Security for Microsoft SQL ServerCodingWheels.DataTypes: DataTypes tries to make it easier for developers to have concrete typesafe objects for working with many common forms of data. Many times these dat...DigitArchive: Digit Archive makes it easy for the DIGIT magazine readers to find the correct software or movie bundled in the media along with the magazine. You'...dNet.DB: dNetDB is a .net framework that simplifies model and data access by providing a database independent object-based persistence, where objects are pe...Dynamic Application Framework: The Dynamic Application Framework provides a highly flexible environment for creating applications. Multiple UI and Execution Environments, along w...ECoG: ECoG toolkitFB Toolkit with Contracts: This is a research project where I have inserted code contracts into the Facebook Toolkit source code., version 3.1 beta. This delivers an efficien...GeneCMS: GeneCMS allows users to generate static HTML based websites by offering an ASP.NET editing front-end that can be run in the local machine. It is ta...HooIzDat: HooIzDat is game that asks, who the heck is that?! It's a two player game where your task is to guess your opponent's person before he or she guess...JingQiao.Interacting: JingQiao Interacting MessagingKanbanBoard: Visual task board for Kanban and Scrum.Learning CSharp: Just Learning CSharpMammoth: mammothMapWindow Mobile: MapWindow Mobile is mobile GIS Software which can run on windows mobile, developed in C# .NET Compact Framework. It provides basic GIS functionalit...Mindless Setback: Setback is a card game popular in New England. This project uses a combination of brute force and Monte Carlo methods to play Setback. This is an e...MSNCore(DirectUI) Element Viewer: MSNCore Element Viewer is an application designed to enumerate the elements with in applications built with MSNCore.dll and UXCore.dll. This appli...MSVN Team: bài tập thầy lườngNugget: Web Socket Server: A web socket server implemented in c#. The goal of the projects is to create an easy way to start using HTML5 web sockets in .NET web applications.oSoft ColorPicker Control for Visual Studio 2010: oSoft ColorPicker is an user control that can be used instead of the ColorDialog when you want to allow your users to select a color in a windows f...Prism Software Factory: The Prism Software Factory is a software factory for Visual Studio 2010 assisting developers in the process of building WPF & Silverlight applicati...Project Lion: Project lion is forum developed in Silverlight technology. Refix - .NET dependency management: Refix is an attempt to solve the problem of binary dependency management in large .NET solutions. It will achieve the goal using (amongst other thi...Rich Task List: Rich Task List is a tutorial project for DotNetNuke Module Development.SharePoint PowerRSS: Easy/Clean way to get SharePoint list data via more standard RSS feed. 
I found CleanRSS.aspx as part of SPRSS: Enhanced RSS Functionality for WSS ...SOAPI - StackOverflow API Generator: Generates, directly from the self documenting StackOverflow API specification, an end-to-end, fully documented API wrapper library with Visual Stu...SQL Script Application Utility: This C# project allows you to apply scripts to a database for table creation, data creation, etc. You can keep DDL in separate SQL scripts which c...Sql Server Reports Viewer: Sql Server Reports Viewer makes it easier to render Sql Server Reports without the need to setup a SSRS Server. This makes deployments a breeze. ...StorageHD: StorageHD system for large video filesUrzaGatherer: UrzaGatherer is a WPF 4.0 client application to handle Magic The Gathering cards collections. You can manage expansions, blocks and all informatio...webrel: This tool executes simple relational algebra expressions. It is useful for learning of Database course. Javascript and xhtml is used to develop thi...World Wide Grab: World Wide Grab allows retrieval and integration of various semi-structured data sorces, expecially Web applications. It turns every available res...New Releases3FD - Framework For Fast Development (C++): Alpha 3: This release was compiled in Visual Studio Release mode. It means you can use it in whatever compiler you want. However, the compatibility with ano...Advanced Mail Merge (AMM) for Microsoft Office: Advanced MailMerge 2007.zip: Release 1.1.0.0Army Bodger: Bodger 3 Archetype Test: Ok so it's later and I've largely finished it. Right now the Space Wolves have their Troops written and one HQ unit. The equipment panel largely wo...AwesomiumDotNet: AwesomiumDotNet 1.6 beta: Preview of AwesomiumDotNet 1.6.Bojinx: Bojinx Core V4.6: New features in this release: Greatly improved logging for INFO and DEBUG. Improved the getClassName function in ObjectUtils. Added the ability ...Cenobith RLS Sample: Sample App: Change connection strings in App.config and Web.config files.Christoc's DotNetNuke C# Module Development Template: 00.00.02: A minor update from the original release with a few fixes including Localization and some updated documentation.Community Forums NNTP bridge: Community Forums NNTP Bridge V25: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...DEWD: DEWD for Umbraco v1.0: Beta release of the package. Functional feature set and fairly stable. Since the alpha: Validation on input fields Custom view controls Ability...DotNetNuke Developers Help File: DNNHelpSystem 05.04.02: Release of the developer core API help documentation of DotNetNuke in MSDN style format, both as .CHM stand alone file as well as a html website ba...Drive Backup: Drive Backup V.0604: This release includes the following fixes/features: * Fixed incompatibility with some USB drives (those marked as “fixed” by Windows) * Ad...Event Scavenger: Version 3.3 (Refresh): Archiving bit added to database plus archiving stored procedure updated. Rest of items just refreshed. Database set to version 3.3Expression Encoder Batch Processor: Expression Batch v0.3: Now set the newly-converted file's Created DateTime to equal the source file's. 
This helps keep your videos sorterd chronologically in media librar...Folder Bookmarks: Folder Bookmarks 1.6.1: The latest version of Folder Bookmarks (1.6.1), with Mini-Menu bug fixes and 'Help' feature - all the instructions needed to use the software (If y...Genesis Smart Client Framework: Genesis v2.0 - Ruby User Experience Platform (UXP): This is the start of the rewrite of the entire framework. The rewrite will include support for XAML through WPF and Silverlight, WCF, Workflow Serv...Global: http requester tool: Added a brnad new console app for making http requests.GMap.NET - Great Maps for Windows Forms & Presentation: Hot Build: this is latest change-set build, unstable previewHERB.IQ: Alpha 0.1 Source code release 4: As of 6-23-10 @ 9:48ESTInfragistics Analytics Framework: Infragistics Analytics Framework 1.0: This project includes wrappers for the Infragistics controls that integrate with the recently launched Microsoft Silverlight Analytics Framework. T...Innovative Games: Cube Mapper: Cube Mapper is a small tool that takes in six textures and outputs a cube map that is a combination of the six textures. Cube Mapper supports .tga...jQuery Library for SharePoint Web Services: SPServices 0.5.6: This release is in an alpha state. Please only download it if you know what you are getting and are willing to test it. In any case, it's a bad ide...linq to jquery: jlinq v1.00 no doc: First public version of jlinq! no doc yet, soon too come!LinqSpecs: Version 1.0.1: Fabio Maulo has sent several patchs in order to make LinqSpecs to work with any linq provider other than in memory. Big KUDOS for him.mojoPortal: 2.3.4.4: see release notes on mojoportal.com Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on ...Nugget: Web Socket Server: Initial POC release: The initial proof of concept release. To try it out, open the Sample App.sln, set the ChatServer project as the start-up project, start debugging ...oSoft ColorPicker Control for Visual Studio 2010: oSoft ColorPicker Control for VS 2010 Beta 1: Beta 1Refix - .NET dependency management: Refix v0.1.0.48 ALPHA: First preview version of Refix command-line tool.SharePoint 2010 CSV Bulk Term Set Importer: Bulk Term Set Importer: Initial ReleaseSOAPI - StackOverflow API Generator: SOAPI Wrappers: SOAPI-JS First release as SOAPI-JS, SOAPI-CS coming shortly. Tests and example includedSQL Compact Toolbox: Beta 0.8.1: Initial test release - mind the bumps. Requires Visual Studio 2010.Thumb nail creator and image resizer: ThumbnailCreator1.2: this release fixes and issue that was occuring when the control was used inside paged dataTS3QueryLib.Net: TS3QueryLib.Net Version 0.23.17.0: Changelog Added Properties "IsSpacer" and "SpacerInfo" to ChannelListEntry. "IsSpacer" allows you to check whether the channel is a spacer channel ...UI Accessibility Checker: UI Accessibility Checker v.2.0: We are excited to announce the release of AccChecker 2.0! 
In addition to numerous bug fixes and usability improvements, these major features have...webrel: webrel 1.0: webrel 1.0WindStyle SlugHelper for Windows Live Writer: 1.2.0.0: 增加:可以配置是否忽略已经包含slug的日志,请在插件选项中配置; 增加:插件图标; 更新:支持最新Windows Live Writer,版本号14.0.8117.416。Work Recorder - Hold on own time!: WorkRecorder 1.1: +Only one instance can run #Change histogram to pie chartMost Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)PHPExcelpatterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesASP.NETMost Active ProjectsCommunity Forums NNTP bridgeRawrIonics Isapi Rewrite Filterpatterns & practices – Enterprise LibraryGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSBlogEngine.NETFarseer Physics EngineMain projectMirror Testing System

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful. About SQL Server disaster recovery Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes will most probably require changing and tweaking the plan. What is a good SQL Server disaster recovery plan? A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until next Friday, when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans. Why are transaction logs important? Transaction log backups contain transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing. These are the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases, it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point.
During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them. How to read a SQL Server transaction log? SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge, return a large number of not easy to read and understand columns, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match it to a value in the MDF file. When you finally read the transactions executed, you have to create a script for it. How to easily repair a SQL database? The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log It will automatically detect all local servers. If not, click the icon right to the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add.   To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of various filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read so if there were any schema changes, such as a new function created, or an existing table modified, they will be ignored and database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options. 
ApexSQL Log offers three options. Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and create a redo script by clicking the Create redo script icon in the menu.

For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without being shown in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command line statement for a batch file that you can use to schedule execution; this is not really applicable when you repair a SQL database, but it is quite useful in daily auditing scenarios.

To repair your SQL database, all you have to do is execute the generated redo script against the restored database, using an integrated development environment tool such as SQL Server Management Studio or any other. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations in which data needs to be recovered or restored, or transactions rolled back.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Commit in SQL

    - by PRajkumar
SQL Transaction Control Language Commands (TCL): COMMIT

Commit Transaction

In SQL, we use transaction control language commands very frequently. Committing a transaction means making permanent the changes performed by the SQL statements within the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit. The COMMIT statement also erases all savepoints in the transaction and releases transaction locks. Oracle Database issues an implicit COMMIT before and after any data definition language (DDL) statement. Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back.

Until you commit a transaction:
· You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
· You can roll back (undo) any changes made during the transaction with the ROLLBACK statement.

Note: Many people think that typing COMMIT means the changed data has been written to the data files, but this is wrong. When you type COMMIT, you are saying that your job is complete; the Oracle engine then verifies that the transaction has achieved consistency and, when everything is in order, returns a commit message to the user based on the log buffer, not the data buffer. The data is written to the log buffer first, and the data buffer is later instructed to write the data to the data files. This is how it works.

Before a transaction that modifies data is committed, the following has occurred:
· Oracle has generated undo information. The undo information contains the old data values changed by the SQL statements of the transaction.
· Oracle has generated redo log entries in the redo log buffer of the System Global Area (SGA). The redo log record contains the change to the data block and the change to the rollback block. These changes may go to disk before the transaction is committed.
· The changes have been made to the database buffers of the SGA. These changes may go to disk before the transaction is committed.

Note: The data changes for a committed transaction, stored in the database buffers of the SGA, are not necessarily written immediately to the data files by the database writer (DBWn) background process. This writing takes place when it is most efficient for the database to do so. It can happen before the transaction commits or, alternatively, some time after the transaction commits.

When a transaction is committed, the following occurs:
1. The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table.
2. The log writer process (LGWR) writes the redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction.
3. Oracle releases locks held on rows and tables.
4. Oracle marks the transaction complete.
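Before moving on to the syntax, here is a minimal sketch of the commit and savepoint behavior described above; the table and column names are illustrative, not from any particular schema:

UPDATE employees SET salary = salary * 1.05 WHERE dept_id = 10;
SAVEPOINT after_raise;
DELETE FROM employees WHERE dept_id = 10;
ROLLBACK TO SAVEPOINT after_raise;  -- undoes only the DELETE; the UPDATE is still pending
COMMIT;                             -- makes the UPDATE permanent, erases the savepoint, releases locks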
Note: The default behavior is for LGWR to write redo to the online redo log files synchronously and for transactions to wait for the redo to go to disk before returning a commit to the user. However, for lower transaction commit latency, application developers can specify that redo be written asynchronously and that transactions do not need to wait for the redo to be on disk.

The syntax of the COMMIT statement is:

COMMIT [WORK] [COMMENT 'your comment'];

· WORK is optional. The WORK keyword is supported for compliance with standard SQL. The statements COMMIT and COMMIT WORK are equivalent.

Example: committing an insert.
INSERT INTO table_name VALUES (val1, val2);
COMMIT WORK;

· COMMENT is also optional. This clause is supported for backward compatibility. Oracle recommends that you use named transactions instead of commit comments. Specify a comment to be associated with the current transaction. The 'text' is a quoted literal of up to 255 bytes that Oracle Database stores in the data dictionary view DBA_2PC_PENDING, along with the transaction ID, if a distributed transaction becomes in doubt. This comment can help you diagnose the failure of a distributed transaction.

Example: the following statement commits the current transaction and associates a comment with it:
COMMIT COMMENT 'In-doubt transaction Code 36, Call (415) 555-2637';

· WRITE clause. Use this clause to specify the priority with which the redo information generated by the commit operation is written to the redo log. This clause can improve performance by reducing latency, thus eliminating the wait for an I/O to the redo log. Use it to improve response time in environments with stringent response time requirements where the following conditions apply: the volume of update transactions is large, requiring that the redo log be written to disk frequently; the application can tolerate the loss of an asynchronously committed transaction; and the latency contributed by waiting for the redo log write contributes significantly to overall response time. You can specify the WAIT | NOWAIT and IMMEDIATE | BATCH clauses in any order.

Example: to commit the same insert operation and instruct the database to buffer the change to the redo log, without initiating disk I/O, use the following COMMIT statement:
COMMIT WRITE BATCH;

Note: If you omit this clause, then the behavior of the commit operation is controlled by the COMMIT_WRITE initialization parameter, if it has been set. The default value of the parameter is the same as the default for this clause. Therefore, if the parameter has not been set and you omit this clause, then commit records are written to disk before control is returned to the user.

WAIT | NOWAIT: Use these clauses to specify when control returns to the user. The WAIT parameter ensures that the commit returns only after the corresponding redo is persistent in the online redo log. Whether in BATCH or IMMEDIATE mode, when the client receives a successful return from this COMMIT statement, the transaction has been committed to durable media. A crash occurring after a successful write to the log can prevent the success message from returning to the client; in this case the client cannot tell whether or not the transaction committed. The NOWAIT parameter causes the commit to return to the client whether or not the write to the redo log has completed. This behavior can increase transaction throughput.
With the WAIT parameter, if the commit message is received, then you can be sure that no data has been lost.

Caution: With NOWAIT, a crash occurring after the commit message is received, but before the redo log record(s) are written, can falsely indicate to a transaction that its changes are persistent. If you omit this clause, then the transaction commits with the WAIT behavior.

IMMEDIATE | BATCH: Use these clauses to specify when the redo is written to the log. The IMMEDIATE parameter causes the log writer process (LGWR) to write the transaction's redo information to the log. This option forces a disk I/O, so it can reduce transaction throughput. The BATCH parameter causes the redo to be buffered to the redo log, along with other concurrently executing transactions. When sufficient redo information is collected, a disk write of the redo log is initiated. This behavior is called "group commit", as redo for multiple transactions is written to the log in a single I/O operation. If you omit this clause, then the transaction commits with the IMMEDIATE behavior. A short sketch combining these options appears at the end of this section.

· FORCE clause. Use this clause to manually commit an in-doubt distributed transaction or a corrupt transaction.
· In a distributed database system, the FORCE 'string' [, integer] clause lets you manually commit an in-doubt distributed transaction. The transaction is identified by the 'string' containing its local or global transaction ID. To find the IDs of such transactions, query the data dictionary view DBA_2PC_PENDING. You can use integer to specifically assign the transaction a system change number (SCN). If you omit integer, then the transaction is committed using the current SCN.
· The FORCE CORRUPT_XID 'string' clause lets you manually commit a single corrupt transaction, where string is the ID of the corrupt transaction. Query the V$CORRUPT_XID_LIST data dictionary view to find the transaction IDs of corrupt transactions. You must have DBA privileges to view V$CORRUPT_XID_LIST and to specify this clause.
· Specify FORCE CORRUPT_XID_ALL to manually commit all corrupt transactions. You must have DBA privileges to specify this clause.

Example: forcing an in-doubt transaction. The following statement manually commits a hypothetical in-doubt distributed transaction; you must have DBA privileges to issue it:
COMMIT FORCE '22.57.53';
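To close, here is a hedged sketch that ties together the WAIT | NOWAIT and IMMEDIATE | BATCH options described above; as noted, the keywords can be combined in either order:

COMMIT WRITE WAIT IMMEDIATE;   -- durable: control returns only after the redo is on disk
COMMIT WRITE BATCH NOWAIT;     -- lowest latency: redo is group-committed, control returns immediately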

    Read the article

  • Operator of the week - Assert

    - by Fabiano Amorim
Well my friends, I was wondering how to help you understand execution plans in a practical way, so I think I'll talk about the Showplan operators. Showplan operators are used by the Query Optimizer (QO) to build the query plan in order to perform a specified operation. A query plan will consist of many physical operators. The Query Optimizer uses a simple language that represents each physical operation by an operator, and each operator is represented in the graphical execution plan by an icon. I'll try to talk about one operator every week but, so as to avoid having to write about these operators for years, I'll mention only those that are more common. The first is the Assert.

The Assert is used to verify a certain condition; it validates a constraint on every row to ensure that the condition was met. If, for example, our DDL includes a check constraint which specifies only two valid values for a column, the Assert will, for every row, validate the value passed to the column to ensure that the input is consistent with the check constraint.

Assert and check constraints:

Let's see where SQL Server uses that information in practice. Take the following T-SQL:

IF OBJECT_ID('Tab1') IS NOT NULL
  DROP TABLE Tab1
GO
CREATE TABLE Tab1(ID Integer, Gender CHAR(1))
GO
ALTER TABLE TAB1 ADD CONSTRAINT ck_Gender_M_F CHECK(Gender IN('M','F'))
GO
INSERT INTO Tab1(ID, Gender) VALUES(1,'X')
GO

For the command above, SQL Server has generated the following execution plan. As we can see, the execution plan uses the Assert operator to check that the inserted value doesn't violate the check constraint. In this specific case, the Assert applies the rule 'if the value is different from "F" and different from "M" then return 0, otherwise return NULL'. The Assert operator is programmed to show an error if the returned value is not NULL; in other words, if the returned value is not an "M" or an "F".

Assert checking foreign keys:

Now let's take a look at an example where the Assert is used to validate a foreign key constraint. Suppose we have this query:

ALTER TABLE Tab1 ADD ID_Genders INT
GO
IF OBJECT_ID('Tab2') IS NOT NULL
  DROP TABLE Tab2
GO
CREATE TABLE Tab2(ID Integer PRIMARY KEY, Gender CHAR(1))
GO
INSERT INTO Tab2(ID, Gender) VALUES(1, 'F')
INSERT INTO Tab2(ID, Gender) VALUES(2, 'M')
INSERT INTO Tab2(ID, Gender) VALUES(3, 'N')
GO
ALTER TABLE Tab1 ADD CONSTRAINT fk_Tab2 FOREIGN KEY (ID_Genders) REFERENCES Tab2(ID)
GO
INSERT INTO Tab1(ID, ID_Genders, Gender) VALUES(1, 4, 'X')

Let's look at the text execution plan to see what these Assert operators were doing. To see the text execution plan, just execute SET SHOWPLAN_TEXT ON before running the insert command.

|--Assert(WHERE:(CASE WHEN NOT [Pass1008] AND [Expr1007] IS NULL THEN (0) ELSE NULL END))
     |--Nested Loops(Left Semi Join, PASSTHRU:([Tab1].[ID_Genders] IS NULL), OUTER REFERENCES:([Tab1].[ID_Genders]), DEFINE:([Expr1007] = [PROBE VALUE]))
          |--Assert(WHERE:(CASE WHEN [Tab1].[Gender]<>'F' AND [Tab1].[Gender]<>'M' THEN (0) ELSE NULL END))
          |    |--Clustered Index Insert(OBJECT:([Tab1].[PK]), SET:([Tab1].[ID] = RaiseIfNullInsert([@1]),[Tab1].[ID_Genders] = [@2],[Tab1].[Gender] = [Expr1003]), DEFINE:([Expr1003]=CONVERT_IMPLICIT(char(1),[@3],0)))
          |--Clustered Index Seek(OBJECT:([Tab2].[PK]), SEEK:([Tab2].[ID]=[Tab1].[ID_Genders]) ORDERED FORWARD)

Here we can see the Assert operator twice, the first (reading the text plan from the bottom up, and the graphical plan from right to left) validating the check constraint.
The same concept shown above is used: if the exit value is "0" then the query keeps running, but if NULL is returned an exception is raised. The second Assert validates the result of the join between Tab1 and Tab2. It is interesting to see the "[Expr1007] IS NULL". To understand that, you need to know what Expr1007 is: look at the Probe Value (green text) in the text plan and you will see that it is the result of the join. If the value passed to the INSERT for the column ID_Genders exists in the table Tab2, then the probe will return the join value; otherwise it will return NULL. So the Assert is checking the result of the lookup in Tab2; if the value passed to the INSERT is not found, the Assert will raise an exception. If the value passed to the column ID_Genders is NULL, then SQL Server can't raise an exception; in that case it returns "0" and keeps running the query. If you run the INSERT above, SQL Server will raise an exception because of the "X" value, but if you change the "X" to "F" and run it again, it will raise an exception because of the value "4". If you change the value "4" to NULL, 1, 2 or 3, the insert will be executed without any error.

Assert checking a subquery:

The Assert operator is also used to check a subquery. As we know, a scalar subquery can't validly return more than one value. Sometimes, however, a mistake happens and a subquery attempts to return more than one value. Here the Assert comes into play by validating the condition that a scalar subquery returns just one value. Take the following query:

INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT ID_TipoSexo FROM Tab1), 'F')

|--Assert(WHERE:(CASE WHEN NOT [Pass1016] AND [Expr1015] IS NULL THEN (0) ELSE NULL END))
     |--Nested Loops(Left Semi Join, PASSTHRU:([tempdb].[dbo].[Tab1].[ID_TipoSexo] IS NULL), OUTER REFERENCES:([tempdb].[dbo].[Tab1].[ID_TipoSexo]), DEFINE:([Expr1015] = [PROBE VALUE]))
          |--Assert(WHERE:([Expr1017]))
          |    |--Compute Scalar(DEFINE:([Expr1017]=CASE WHEN [tempdb].[dbo].[Tab1].[Sexo]<>'F' AND [tempdb].[dbo].[Tab1].[Sexo]<>'M' THEN (0) ELSE NULL END))
          |         |--Clustered Index Insert(OBJECT:([tempdb].[dbo].[Tab1].[PK__Tab1__3214EC277097A3C8]), SET:([tempdb].[dbo].[Tab1].[ID_TipoSexo] = [Expr1008],[tempdb].[dbo].[Tab1].[Sexo] = [Expr1009],[tempdb].[dbo].[Tab1].[ID] = [Expr1003]))
          |              |--Top(TOP EXPRESSION:((1)))
          |                   |--Compute Scalar(DEFINE:([Expr1008]=[Expr1014], [Expr1009]='F'))
          |                        |--Nested Loops(Left Outer Join)
          |                             |--Compute Scalar(DEFINE:([Expr1003]=getidentity((1856985942),(2),NULL)))
          |                             |    |--Constant Scan
          |                             |--Assert(WHERE:(CASE WHEN [Expr1013]>(1) THEN (0) ELSE NULL END))
          |                                  |--Stream Aggregate(DEFINE:([Expr1013]=Count(*), [Expr1014]=ANY([tempdb].[dbo].[Tab1].[ID_TipoSexo])))
          |                                       |--Clustered Index Scan(OBJECT:([tempdb].[dbo].[Tab1].[PK__Tab1__3214EC277097A3C8]))
          |--Clustered Index Seek(OBJECT:([tempdb].[dbo].[Tab2].[PK__Tab2__3214EC27755C58E5]), SEEK:([tempdb].[dbo].[Tab2].[ID]=[tempdb].[dbo].[Tab1].[ID_TipoSexo]) ORDERED FORWARD)

You can see from this text showplan that SQL Server has generated a Stream Aggregate to count how many rows the
subquery will return. This value is then passed to the Assert, which then does its job by checking its validity. It is very interesting to see that the Query Optimizer is smart enough to avoid using Assert operators when they are not necessary. For instance:

INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT ID_TipoSexo FROM Tab1 WHERE ID = 1), 'F')
INSERT INTO Tab1(ID_TipoSexo, Sexo) VALUES((SELECT TOP 1 ID_TipoSexo FROM Tab1), 'F')

For both these INSERTs, the Query Optimizer is smart enough to know that only one row will ever be returned, so there is no need to use the Assert. Well, that's all folks; I'll see you next week with more "Operators". Cheers, Fabiano

    Read the article
