Search Results

Search found 9516 results on 381 pages for 'duplicate resource'.


  • Know More About Oracle Row Lock

    - by Liu Maclean
Unlike databases that keep lock information in a central lock manager, Oracle stores row lock information inside the data block itself: each row piece carries a lock byte (lb) that points at an entry in the block's ITL (Interested Transaction List), which the server process reads while it has the block buffer pinned. Suppose process A updates a row of table Z and neither commits nor rolls back; when process B then issues DML against the same rowid, it finds the ITL entry of A's still-active transaction, cannot clean it out, and waits on "enq: TX - row lock contention" until A's transaction ends. This raises an interesting question: if process A releases the row lock, for example by rolling back to a savepoint, does process B stop waiting on "enq: TX - row lock contention"? The following experiment shows what actually happens.

SESSION A:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for Linux: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

SQL> select * from global_name;

GLOBAL_NAME
--------------------------------------------------------------------------------
www.oracledatabase12g.com

SQL> create table maclean_lock(t1 int);
Table created.

SQL> insert into maclean_lock values (1);
1 row created.

SQL> commit;
Commit complete.

SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from maclean_lock;

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
------------------------------------ ------------------------------------
                               67642                                    1

SQL> select distinct sid from v$mystat;

       SID
----------
       142

SQL> select pid,spid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat));

       PID SPID
---------- ------------
        17 15636

In SESSION A, create a savepoint first, then run the update:

SQL> savepoint NONLOCK;
Savepoint created.

SQL> select * From v$Lock where sid=142;
no rows selected

SQL> set linesize 140 pagesize 1400
SQL> update maclean_lock set t1=t1+2;
1 row updated.

SQL> select * From v$Lock where sid=142;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
0000000091FC69F0 0000000091FC6A18        142 TM      55829          0          3          0          6          0
00000000914B4008 00000000914B4040        142 TX     393232        609          6          0          6          0

SQL> select dump(3,16) from dual;

DUMP(3,16)
--------------------------------------------------------------------------------
Typ=2 Len=2: c1,4

ALTER SYSTEM DUMP DATAFILE 1 BLOCK 67642;

Object id on Block? Y
 seg/obj: 0xda16  csc: 0x00.234718  itc: 2  flg: O  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00f.000001e0  0x00800075.02a6.29  C---    0  scn 0x0000.00234711
0x02   0x0007.018.000001fe  0x0080065c.017a.02  ----    1  fsc 0x0000.00000000

data_block_dump,data header at 0x81d185c
===============
tsiz: 0x1fa0
hsiz: 0x14
pbl: 0x081d185c
bdba: 0x0041083a
     76543210
flag=--------
ntab=1
nrow=1
frre=-1
fsbo=0x14
fseo=0x1f9a
avsp=0x1f83
tosp=0x1f83
0xe:pti[0]      nrow=1  offs=0
0x12:pri[0]     offs=0x1f9a
block_row_dump:
tab 0, row 0, @0x1f9a
tl: 6 fb: --H-FL-- lb: 0x2  cc: 1
col  0: [ 2]  c1 04
end_of_block_dump

The block dump shows that ITL slot 0x02 holds the active transaction XID=0x0007.018.000001fe, and that the row piece's lock byte (lb: 0x2) points at that ITL slot: this is the row lock. Now open SESSION B and issue the same UPDATE;
it blocks waiting on enq: TX - row lock contention:

SQL> select distinct sid from v$mystat;

       SID
----------
       140

SQL> select pid,spid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat));

       PID SPID
---------- ------------
        24 15652

SQL> alter system set "_trace_events"='10000-10999:255:24';
System altered.

SQL> update maclean_lock set t1=t1+2;
(session hangs)

SESSION C:

SQL> select * From v$Lock where sid=142 or sid=140 order by sid;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
0000000091FC6B10 0000000091FC6B38        140 TM      55829          0          3          0         84          0
00000000924F4A58 00000000924F4A78        140 TX     458776        510          0          6         84          0
00000000914B51E8 00000000914B5220        142 TX     458776        510          6          0        312          1
0000000091FC69F0 0000000091FC6A18        142 TM      55829          0          3          0        312          0

Here SESSION B (SID=140) is requesting the TX enqueue held by SESSION A in X mode (REQUEST=6). A systemstate dump confirms this:

SQL> oradebug dump systemstate 266;
Statement processed.

SESSION B, the waiter's enqueue lock:

      SO: 0x924f4a58, type: 5, owner: 0x92bb8dc8, flag: INIT/-/-/0x00
      (enqueue) TX-00070018-000001FE    DID: 0001-0018-00000022
      lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
      req: X, lock_flag: 0x0, lock: 0x924f4a78, res: 0x925617c0
      own: 0x92b76be0, sess: 0x92b76be0, proc: 0x92a737a0, prv: 0x925617e0

(TX-00070018-000001FE corresponds to TX ID1=458776, ID2=510.)

SESSION A, the owner's enqueue lock:

      SO: 0x914b51e8, type: 40, owner: 0x92b796d0, flag: INIT/-/-/0x00
      (trans) flg = 0x1e03, flg2 = 0xc0000, prx = 0x0, ros = 2147483647 bsn = 0xed5 bndsn = 0xee7 spn = 0xef7
      efd = 3
      file:xct.c lineno:1179
      DID: 0001-0011-000000C2
      parent xid: 0x0000.000.00000000
      env: (scn: 0x0000.00234718  xid: 0x0007.018.000001fe  uba: 0x0080065c.017a.02  statement num=0  parent xid: xid: 0x0000.000.00000000  scn: 0x0000.00234718 0sch: scn: 0x0000.00000000)
      cev: (spc = 7818  arsp = 914e8310  ubk tsn: 1 rdba: 0x0080065c  useg tsn: 1 rdba: 0x00800069
            hwm uba: 0x0080065c.017a.02  col uba: 0x00000000.0000.00
            num bl: 1 bk list: 0x91435070)
            cr opc: 0x0 spc: 7818 uba: 0x0080065c.017a.02
      (enqueue) TX-00070018-000001FE    DID: 0001-0011-000000C2
      lv: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  res_flag: 0x6
      mode: X, lock_flag: 0x0, lock: 0x914b5220, res: 0x925617c0
      own: 0x92b796d0, sess: 0x92b796d0, proc: 0x92a6ffd8, prv: 0x925617d0
       xga: 0x8b7c6d40, heap: UGA
      Trans IMU st: 2 Pool index 65535, Redo pool 0x914b58d0, Undo pool 0x914b59b8
      Redo pool range [0x86de640 0x86de640 0x86e0e40]
      Undo pool range [0x86dbe40 0x86dbe40 0x86de640]
        ----------------------------------------
        SO: 0x91435070, type: 39, owner: 0x914b51e8, flag: -/-/-/0x00
        (List of Blocks) next index = 1
        index   itli   buffer hint   rdba       savepoint
        -----------------------------------------------------------
            0      2   0x647f1fc8    0x41083a     0xee7

Now let SESSION A roll back to the savepoint, undoing the UPDATE it made after the savepoint:

SQL> rollback to NONLOCK;
Rollback complete.

Check v$lock again after the rollback to savepoint:
SQL> select * From v$Lock where sid=142 or sid=140;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000000924F4A58 00000000924F4A78        140 TX     458776        510          0          6        822          0
0000000091FC6B10 0000000091FC6B38        140 TM      55829          0          3          0        822          0
00000000914B51E8 00000000914B5220        142 TX     458776        510          6          0       1050          1

Note that SESSION A (SID=142) has released the TM lock it acquired after the SAVEPOINT, but ROLLBACK TO SAVEPOINT has NOT released the session's TX lock!!!! SESSION 142 still holds TX ID1=458776 ID2=510, because rolling back to a savepoint does not abort the transaction. SESSION B (SID=140) is still queued on SESSION A's TX enqueue, so its update remains hung. Dumping the block again (now current in the buffer cache) shows:

Object id on Block? Y
 seg/obj: 0xda16  csc: 0x00.2347b7  itc: 2  flg: O  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x000a.00f.000001e0  0x00800075.02a6.29  C---    0  scn 0x0000.00234711
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000

data_block_dump,data header at 0x745d85c
===============
tsiz: 0x1fa0
hsiz: 0x14
pbl: 0x0745d85c
bdba: 0x0041083a
     76543210
flag=--------
ntab=1
nrow=1
frre=-1
fsbo=0x14
fseo=0x1f9a
avsp=0x1f83
tosp=0x1f83
0xe:pti[0]      nrow=1  offs=0
0x12:pri[0]     offs=0x1f9a
block_row_dump:
tab 0, row 0, @0x1f9a
tl: 6 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 2]  c1 02
end_of_block_dump

ITL slot 0x02 has been cleared, the row's lock byte is back to 0x0, and col 0: [ 2] c1 02 shows the column value has been rolled back to 1. In other words, the row lock really is gone. To prove it, open a brand-new SESSION D and update the same row; it is not blocked at all:

SESSION D:

SQL> update maclean_lock set t1=t1+2;
1 row updated.

SQL> rollback;
Rollback complete.

Yet SESSION B is still waiting. The explanation lies in how Oracle implements row locking: what we loosely call a "row lock" is really two cooperating structures, the row-level lock recorded in the block (the lock byte pointing at an ITL entry) and the TX enqueue lock, and the two are released independently. A process running DML marks each modified row piece as locked by its own transaction and holds one TX lock for the life of that transaction; any other process that wants one of those rows queues on that TX resource rather than on the individual row, which is why a single long transaction can accumulate many waiters behind it in a busy OLTP system.

The release of a row lock is therefore tied to the TX enqueue. Suppose Process J holds the lock on a row and Process K wants to update it. Process K reads the block and finds that the row piece's lb is not 0x0 but points at an ITL entry; from that ITL it obtains the XID of Process J's transaction, determines that Process J still holds the corresponding TX lock, and places itself on that TX resource's Enqueue Waiter Linked List, requesting the enqueue lock in X (exclusive) mode. Only when Process J releases its TX lock does it walk the TX resource's Enqueue Waiter Linked List, find the waiting Process K, and POST it, at which point Process K can acquire the TX lock and continue. So even if the row lock itself has already been released (as ROLLBACK TO SAVEPOINT did above), the waiter keeps waiting until the holder's transaction actually ends.

We can verify this from the KST traces of the two server processes (SESSION A is PID=17, SESSION B is PID=24; the "_trace_events"='10000-10999:255:24' setting shown earlier captures the trace for the latter). The trace for SESSION A (PID=17) shows it acquiring the TM lock in SX mode, binding its transaction to undo segment 7 as XID 7.24.510, and then acquiring TX-00070018-000001fe in X mode. Note how the enqueue resource name TX-00070018-000001fe is simply the XID 7.24.510 rendered in hex: undo segment number 0x0007, slot 0x0018 (= 24), and sequence 0x000001fe (= 510).
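The v$lock columns ID1 and ID2 carry the same information: ID1 packs the undo segment number and slot as usn * 65536 + slot, and ID2 is the sequence number. A quick arithmetic sanity check against the ID1=458776, ID2=510 values seen above (a sketch; 65536 = 2^16):

SQL> select trunc(458776/65536) usn, mod(458776,65536) slot, 510 seq from dual;

       USN       SLOT        SEQ
---------- ---------- ----------
         7         24        510

This matches XID 7.24.510 and the resource name TX-00070018-000001fe.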
781F4B8A:007A569C    17   142 10704  83 ksqgtl: acquire TM-0000da15-00000000 mode=SX flags=GLOBAL|XACT why="contention"
781F4B92:007A569D    17   142 10704  19 ksqgtl: SUCCESS
781F4BB3:007A569E    17   142 10812   2 0x000000000041083A 0x0000000000000000 0x0000000000234717
781F4BBA:007A569F    17   142 10812   3 0x0000000000000000 0x0000000000000000 0x0000000000000000
781F4BC0:007A56A0    17   142 10812   4 0x0000000000000000 0x0000000000000000 0x0000000000000000
781F4BD3:007A56A1    17   142 10812   5 0x000000000041083A 0x0000000000000000 0x0000000000000000
781F4BFE:007A56A2    17   142 10811   1 0x000000000041083A 0x0000000000000000 0x0000000000234711 0x0000000000000002
781F4C06:007A56A3    17   142 10811   2 0x000000000041083A 0x0000000000000000 0x0000000000234718 0x00007FA074EDA560
781F4C26:007A56A4    17   142 10813   1 ktubnd: Bind usn 7 nax 1 nbx 0 lng 0 par 0
781F4C43:007A56A5    17   142 10813   2 ktubnd: Txn Bound xid: 7.24.510
781F4C4A:007A56A6    17   142 10704  83 ksqgtl: acquire TX-00070018-000001fe mode=X flags=GLOBAL|XACT why="contention"
781F4C51:007A56A7    17   142 10704  19 ksqgtl: SUCCESS

After the update completes, SESSION A simply sits idle on SQL*Net waits:

781F4CBF:007A56A8    17   142 10005   1 KSL WAIT BEG [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0
781F4CCC:007A56A9    17   142 10005   2 KSL WAIT END [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0 time=13
781F4CDE:007A56AA    17   142 10005   1 KSL WAIT BEG [SQL*Net message from client] 1650815232/0x62657100 1/0x1 0/0x0
786BD85D:007A57E0    17   142 10005   2 KSL WAIT END [SQL*Net message from client] 1650815232/0x62657100 1/0x1 0/0x0 time=5016447
786BD966:007A57E1    17   142 10005   1 KSL WAIT BEG [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0
786BD96E:007A57E2    17   142 10005   2 KSL WAIT END [SQL*Net message to client] 1650815232/0x62657100 1/0x1 0/0x0 time=8

SESSION B's trace (PID=24) shows it likewise acquiring the TM lock in SX mode and then, because of the row lock, trying to acquire TX-00070018-000001fe in X mode:

ksqgtl: acquire TM-0000da15-00000000 mode=SX flags=GLOBAL|XACT why="contention"
ksqgtl: SUCCESS
0x000000000041083A 0x0000000000000000 0x00000000002354F8
0x0000000000000000 0x0000000000000000 0x0000000000000000
0x0000000000000000 0x0000000000000000 0x0000000000000000
0x000000000041083A 0x0000000000000000 0x00000000002354F8 0x0000000000000001
0x000000000041083A 0x0000000000000000 0x00000000002354F8 0x0000000008A63780
0x0000000000000001 0x0000000000800861 0x0000000000000241 0x0000000000000001
0x000000000041083A 0x0000000000000001 0x0000000000000001
0x000000000041083A 0x0000000000000000 0x00000000002354F9 0x0000000000000002
ksqgtl: acquire TX-00070018-000001fe mode=X flags=GLOBAL|LONG why="row lock contention"

C4048EBD:007F52B6    24   140 10005   2 KSL WAIT END [enq: TX - row lock contention] 1415053318/0x54580006 458776/0x70018 510/0x1fe time=2929879
C4048ED4:007F52B7    24   140 10005   1 KSL WAIT BEG [enq: TX - row lock contention] 1415053318/0x54580006 458776/0x70018 510/0x1fe
C43146CA:007F535E    24   140 10005   2 KSL WAIT END [enq: TX - row lock contention] 1415053318/0x54580006 458776/0x70018 510/0x1fe time=2930676

While it waits, PID=24 periodically performs local deadlock detection via ksqcmi:

C43146D9:007F535F    24   140 10704 134 ksqcmi: performing local deadlock detection on TX-00070018-000001fe
C43146F8:007F5360    24   140 10704 150 ksqcmi: deadlock not detected on TX-00070018-000001fe

Finally, PID=17 (SESSION A) issues a full ROLLBACK
and the traces show what happens next:

PID 17 ROLLBACK;

D7A495BB:007F9D3E    17   142 10005   4 KSL POST SENT postee=24 loc='ksqrcl' id1=0 id2=0 name=   type=0
D7A495D8:007F9D3F    17   142 10444  12 ABORT TRANSACTION - xid: 0x0007.018.000001fe

PID 17 walks the TX resource's Enqueue Waiter Linked List, finds PID 24 waiting on it, sends a KSL POST to PID 24, and releases its enqueue lock in ksqrcl. PID 24 receives the post (KSL POST RCVD poster=17), completes its ksqgtl acquisition of TX-00070018-000001fe and immediately releases it with ksqrcl; it then binds its own transaction to an undo segment, here USN 3 as XID 3.11.582, and acquires TX-0003000b-00000246:

D7A49616:007F9D41    24   140 10005   3 KSL POST RCVD poster=17 loc='ksqrcl' id1=0 id2=0 name=   type=0 fac#=0 facpost=1
D7A4961C:007F9D42    24   140 10704  19 ksqgtl: SUCCESS
D7A4967D:007F9D43    24   140 10704 117 ksqrcl: release TX-00070018-000001fe mode=X
D7A496A5:007F9D44    24   140 10813   1 ktubnd: Bind usn 3 nax 1 nbx 0 lng 0 par 0
D7A496C2:007F9D45    24   140 10813   2 ktubnd: Txn Bound xid: 3.11.582
D7A496C7:007F9D46    24   140 10704  83 ksqgtl: acquire TX-0003000b-00000246 mode=X flags=GLOBAL|XACT why="contention"
D7A496E4:007F9D47    24   140 10704  19 ksqgtl: SUCCESS

This confirms the mechanism described above: the row lock in the block and the TX enqueue are released independently, the waiter sits on the TX resource's Enqueue Waiter Linked List until the holder ends its transaction and POSTs it, and releasing the row lock alone (as ROLLBACK TO SAVEPOINT does) is not enough to wake the waiter.
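For convenience, the whole experiment condenses into a short script (a sketch of the steps above; run each session's statements from its own dedicated connection):

-- one-time setup, in any scratch schema
create table maclean_lock(t1 int);
insert into maclean_lock values (1);
commit;

-- SESSION A: lock the row, with a savepoint taken first
savepoint nonlock;
update maclean_lock set t1 = t1 + 2;

-- SESSION B: blocks here on enq: TX - row lock contention
update maclean_lock set t1 = t1 + 2;

-- SESSION A: releases the row lock in the block, but NOT the TX enqueue;
-- SESSION B keeps waiting, although a fresh SESSION D could now update the row
rollback to savepoint nonlock;

-- SESSION A: only ending the transaction (full rollback or commit) posts SESSION B
rollback;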

    Read the article

  • error initializing multiple configuration files

    - by lurscher
Hi, during startup initialization on Tomcat, the configuration is:

1) a webapp/WEB-INF/web.xml that imports yummy-servlet.xml in contextConfigLocation (although I'm aware that this is not required: since the servlet-name is yummy, it will try to load yummy-servlet.xml by default)
2) a webapp/WEB-INF/yummy-servlet.xml that imports a spring/applicationContext-hibernate.xml file
3) a webapp/WEB-INF/spring/applicationContext-hibernate.xml that imports an applicationContext-dataSource.xml file
4) a webapp/WEB-INF/spring/applicationContext-dataSource.xml

I'm getting errors about "Failed to import bean definitions from relative location", but the stack trace is not very explicit about what exactly the problem is. I've been looking at these since yesterday and I really don't see any problem with the files.

my web.xml:

<?xml version="1.0" encoding="UTF-8"?> <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5"> <display-name>hello-spring3-RC1</display-name> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/yummy-servlet.xml</param-value> </context-param> <!-- <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/spring/applicationContext-hibernate.xml</param-value> </context-param> --> <!-- Location of the Log4J config file, for initialization and refresh checks. Applied by Log4jConfigListener. --> <context-param> <param-name>log4jConfigLocation</param-name> <param-value>classpath:log4j.properties</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <servlet> <servlet-name>yummy</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>yummy</servlet-name> <url-pattern>*.html</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> </web-app>

my yummy-servlet.xml:

<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd"> <import resource="spring/applicationContext-hibernate.xml"/> <context:component-scan base-package="com.mine.web.controllers"/> <bean id="jspViewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/WEB-INF/jsp/"/> <property name="suffix" value=".jsp"/> </bean> </beans>

my applicationContext-hibernate.xml:

<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:tx="http://www.springframework.org/schema/tx" xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd"> <!-- import the dataSource definition --> <import resource="applicationContext-dataSource.xml"/> <!-- Configurer that replaces ${...} placeholders with values from a properties file --> <!-- (in this case, Hibernate-related settings for the sessionFactory definition below) --> <context:property-placeholder location="classpath:jdbc.properties"/> <context:property-placeholder location="classpath:hibernate.properties"/> <!-- Hibernate SessionFactory --> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean" p:dataSource-ref="dataSource" p:mappingResources="hello.hbm.xml"> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">${hibernate.dialect}</prop> <prop key="hibernate.show_sql">${hibernate.show_sql}</prop> <prop key="hibernate.generate_statistics">${hibernate.generate_statistics}</prop> </props> </property> <property name="eventListeners"> <map> <entry key="merge"> <bean class="org.springframework.orm.hibernate3.support.IdTransferringMergeEventListener"/> </entry> </map> </property> </bean> <bean id="hibernateTemplate" class="org.springframework.orm.hibernate3.HibernateTemplate"> <property name="sessionFactory" ref="sessionFactory" /> </bean> <!-- Transaction manager for a single Hibernate SessionFactory (alternative to JTA) --> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager" p:sessionFactory-ref="sessionFactory"/> <!-- ========================= BUSINESS OBJECT DEFINITIONS ========================= --> <!-- Activates various annotations to be detected in bean classes: Spring's @Required and @Autowired, as well as JSR 250's @Resource. --> <context:annotation-config/> <!-- Instruct Spring to perform declarative transaction management automatically on annotated classes. 
--> <tx:annotation-driven transaction-manager="transactionManager"/> <bean id="EntityManager" class="com.mine.persistence.hibernate.HibernateHelloWorldDao"/> </beans>

and my applicationContext-dataSource.xml:

<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:jdbc="http://www.springframework.org/schema/jdbc" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd"> <!-- Configurer that replaces ${...} placeholders with values from a properties file --> <!-- (in this case, JDBC-related settings for the dataSource definition below) --> <context:property-placeholder location="classpath:jdbc.properties"/> <context:property-placeholder location="classpath:hibernate.properties"/> <!-- data source using apache common dbcp pool manager <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close" p:driverClassName="${jdbc.driverClassName}" p:url="${jdbc.url}" p:username="${jdbc.username}" p:password="${jdbc.password}"/>--> <!-- c3p0 pool manager data source --> <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"> <property name="driverClass" value="${jdbc.driverClassName}"/> <property name="jdbcUrl" value="${jdbc.url}"/> <property name="user" value="${jdbc.username}"/> <property name="password" value="${jdbc.password}"/> <property name="initialPoolSize" value="${hibernate.c3p0.min_size}"/> <property name="minPoolSize" value="${hibernate.c3p0.min_size}"/> <property name="maxPoolSize" value="${jdbc.maxconn}"/> <property name="idleConnectionTestPeriod" value="150"/> <property name="acquireIncrement" value="1"/> <property name="maxStatements" value="0"/> <property name="numHelperThreads" value="5"/> </bean> </beans>

and this is the stack trace:

2010-06-13 12:16:33,526 INFO [org.springframework.web.context.ContextLoader] - <Root WebApplicationContext: initialization started>
2010-06-13 12:16:33,707 INFO [org.springframework.web.context.support.XmlWebApplicationContext] - <Refreshing Root WebApplicationContext: startup date [Sun Jun 13 12:16:33 GMT-05:00 2010]; root of context hierarchy>
2010-06-13 12:16:34,086 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - <Loading XML bean definitions from ServletContext resource [/WEB-INF/yummy-servlet.xml]>
2010-06-13 12:16:34,378 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - <Loading XML bean definitions from URL [jndi:/localhost/protoweb/WEB-INF/spring/applicationContext-hibernate.xml]>
2010-06-13 12:16:34,473 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - <Loading XML bean definitions from URL [jndi:/localhost/protoweb/WEB-INF/spring/applicationContext-dataSource.xml]>
2010-06-13 12:16:35,098 ERROR [org.springframework.web.context.ContextLoader] - <Context initialization failed>
org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Failed to import bean definitions from relative location [spring/applicationContext-hibernate.xml]
Offending resource: ServletContext resource [/WEB-INF/yummy-servlet.xml]; nested exception is org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from URL [jndi:/localhost/protoweb/WEB-INF/spring/applicationContext-hibernate.xml]; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.MutablePropertyValues.add(Ljava/lang/String;Ljava/lang/Object;)Lorg/springframework/beans/MutablePropertyValues;
    at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:68)
    at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85)
    at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:76)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.importBeanDefinitionResource(DefaultBeanDefinitionDocumentReader.java:197)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseDefaultElement(DefaultBeanDefinitionDocumentReader.java:146)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:131)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:91)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:475)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:372)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:316)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:284)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149)
    at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125)
    at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:93)
    at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:127)
    at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:429)
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:356)
    at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:270)
    at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:197)
    at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:3972)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:4467)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
    at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:905)
    at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:740)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:500)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
    at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
    at org.apache.catalina.core.StandardService.start(StandardService.java:519)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:581)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
Caused by: org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from URL [jndi:/localhost/protoweb/WEB-INF/spring/applicationContext-hibernate.xml]; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.MutablePropertyValues.add(Ljava/lang/String;Ljava/lang/Object;)Lorg/springframework/beans/MutablePropertyValues;
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:394)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:316)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:284)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143)
    at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.importBeanDefinitionResource(DefaultBeanDefinitionDocumentReader.java:187)
    ... 42 more
Caused by: java.lang.NoSuchMethodError: org.springframework.beans.MutablePropertyValues.add(Ljava/lang/String;Ljava/lang/Object;)Lorg/springframework/beans/MutablePropertyValues;
    at org.springframework.transaction.config.AnnotationDrivenBeanDefinitionParser.registerTransactionManager(AnnotationDrivenBeanDefinitionParser.java:95)
    at org.springframework.transaction.config.AnnotationDrivenBeanDefinitionParser.access$0(AnnotationDrivenBeanDefinitionParser.java:94)
    at org.springframework.transaction.config.AnnotationDrivenBeanDefinitionParser$AopAutoProxyConfigurer.configureAutoProxyCreator(AnnotationDrivenBeanDefinitionParser.java:121)
    at org.springframework.transaction.config.AnnotationDrivenBeanDefinitionParser.parse(AnnotationDrivenBeanDefinitionParser.java:79)
    at org.springframework.beans.factory.xml.NamespaceHandlerSupport.parse(NamespaceHandlerSupport.java:72)
    at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1327)
    at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1317)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:134)
    at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:91)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:475)
    at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:372)
    ... 47 more
06/13/2010 12:16:35 PM org.apache.catalina.core.StandardContext start
SEVERE: Error listenerStart

    Read the article

  • TFS 2010 SDK: Connecting to TFS 2010 Programmatically – Part 1

    - by Tarun Arora
Technorati Tags: Team Foundation Server 2010, TFS 2010 SDK, TFS API, TFS Programming, TFS ALM

Download Working Demo

Great! You have reached the point where you would like to extend TFS 2010. The first step is to connect to TFS programmatically.

1. Download the TFS 2010 SDK => http://visualstudiogallery.msdn.microsoft.com/25622469-19d8-4959-8e5c-4025d1c9183d?SRC=VSIDE

2. Alternatively, you can also download this from the Visual Studio Extension Manager.

3. Create a new Windows Forms Application project and add references to the TFS common and client DLLs.

Note - If Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.Common do not appear on the .NET tab of the References dialog box, use the Browse tab to add the assemblies. You can find them at %ProgramFiles%\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0.

using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;

4. There are several ways to connect to TFS; the two classes of interest are:

Option 1 – Class TfsTeamProjectCollection

namespace Microsoft.TeamFoundation.Client
{
    public class TfsTeamProjectCollection : TfsConnection
    {
        public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection);
        public TfsTeamProjectCollection(Uri uri);
        public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, IdentityDescriptor identityToImpersonate);
        public TfsTeamProjectCollection(Uri uri, ICredentials credentials);
        public TfsTeamProjectCollection(Uri uri, ICredentialsProvider credentialsProvider);
        public TfsTeamProjectCollection(Uri uri, IdentityDescriptor identityToImpersonate);
        public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, ICredentials credentials, ICredentialsProvider credentialsProvider);
        public TfsTeamProjectCollection(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider);
        public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);
        public TfsTeamProjectCollection(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);

        public override CatalogNode CatalogNode { get; }
        public TfsConfigurationServer ConfigurationServer { get; internal set; }
        public override string Name { get; }

        public static Uri GetFullyQualifiedUriForName(string name);
        protected override object GetServiceInstance(Type serviceType, object serviceInstance);
        protected override object InitializeTeamFoundationObject(string fullName, object instance);
    }
}

Option 2 – Class TfsConfigurationServer

namespace Microsoft.TeamFoundation.Client
{
    public class TfsConfigurationServer : TfsConnection
    {
        public TfsConfigurationServer(RegisteredConfigurationServer application);
        public TfsConfigurationServer(Uri uri);
        public TfsConfigurationServer(RegisteredConfigurationServer application, IdentityDescriptor identityToImpersonate);
        public TfsConfigurationServer(Uri uri, ICredentials credentials);
        public TfsConfigurationServer(Uri uri, ICredentialsProvider credentialsProvider);
        public TfsConfigurationServer(Uri uri, IdentityDescriptor identityToImpersonate);
        public TfsConfigurationServer(RegisteredConfigurationServer application, ICredentials credentials, ICredentialsProvider credentialsProvider);
        public TfsConfigurationServer(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider);
        public TfsConfigurationServer(RegisteredConfigurationServer application, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);
        public TfsConfigurationServer(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);

        public override CatalogNode CatalogNode { get; }
        public override string Name { get; }

        protected override object GetServiceInstance(Type serviceType, object serviceInstance);
        public TfsTeamProjectCollection GetTeamProjectCollection(Guid collectionId);
        protected override object InitializeTeamFoundationObject(string fullName, object instance);
    }
}

Note – The TeamFoundationServer class is obsolete. Use the TfsTeamProjectCollection or TfsConfigurationServer classes to talk to a 2010 Team Foundation Server. In order to talk to a 2005 or 2008 Team Foundation Server, use the TfsTeamProjectCollection class.

5. Sample code for programmatically connecting to TFS 2010 using the TFS 2010 API.

How do I know what the URI of my TFS server is?

Note – You need to have the Team Project Collection 'view details' permission in order to connect; expect to receive an authorization failure message if you do not have sufficient permissions.

Case 1: Connect by Uri

string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30";
TfsConfigurationServer configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri));

Case 2: Connect by Uri, prompt for credentials

string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30";
TfsConfigurationServer configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri), new UICredentialsProvider());
configurationServer.EnsureAuthenticated();

Case 3: Connect by Uri, custom credentials

In order to use this method of connectivity you need to implement the interface ICredentialsProvider:

public class ConnectByImplementingCredentialsProvider : ICredentialsProvider
{
    public ICredentials GetCredentials(Uri uri, ICredentials iCredentials)
    {
        return new NetworkCredential("UserName", "Password", "Domain");
    }

    public void NotifyCredentialsAuthenticated(Uri uri)
    {
        throw new ApplicationException("Unable to authenticate");
    }
}

And now consume the implementation of the interface:

string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30";
ConnectByImplementingCredentialsProvider connect = new ConnectByImplementingCredentialsProvider();
ICredentials iCred = new NetworkCredential("UserName", "Password", "Domain");
connect.GetCredentials(new Uri(_myUri), iCred);
TfsConfigurationServer configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri), connect);
configurationServer.EnsureAuthenticated();

6. Programmatically query TFS 2010 using the TFS SDK for all Team Project Collections, retrieve all Team Projects and output the display name and description of each team project:
CatalogNode catalogNode = configurationServer.CatalogNode;
ReadOnlyCollection<CatalogNode> tpcNodes = catalogNode.QueryChildren(
    new Guid[] { CatalogResourceTypes.ProjectCollection },
    false, CatalogQueryOptions.None);

// tpc = Team Project Collection
foreach (CatalogNode tpcNode in tpcNodes)
{
    Guid tpcId = new Guid(tpcNode.Resource.Properties["InstanceId"]);
    TfsTeamProjectCollection tpc = configurationServer.GetTeamProjectCollection(tpcId);

    // Get catalog of tp = 'Team Projects' for the tpc = 'Team Project Collection'
    var tpNodes = tpcNode.QueryChildren(
        new Guid[] { CatalogResourceTypes.TeamProject },
        false, CatalogQueryOptions.None);

    foreach (var p in tpNodes)
    {
        Debug.Write(Environment.NewLine + " Team Project : " + p.Resource.DisplayName + " - " + p.Resource.Description + Environment.NewLine);
    }
}

Output

You can download a working demo that uses the TFS 2010 SDK to programmatically connect to TFS 2010. Screenshots of the attached demo application are included with the download.

    Read the article

  • Request Length Limits for IIS’s requestFiltering Module

    - by Rick Strahl
Today I updated my CodePaste.net site to MVC 3 and pushed an update to the site. The update of MVC went pretty smoothly, as did most of the update process to the live site. Short of missing a web.config change in the /views folder that caused blank pages on the server, the process was relatively painless. However, one issue that kicked my ass for about an hour – and not for the first time – was a problem with my OpenId authentication using DotNetOpenAuth. I tested the site operation fairly extensively locally and everything worked no problem, but on the server the OpenId returns resulted in a 404 response from IIS for a nice friendly OpenId return URL like this:

http://codepaste.net/Account/OpenIdLogon?dnoa.userSuppliedIdentifier=http%3A%2F%2Frstrahl.myopenid.com%2F&dnoa.return_to_sig_handle=%7B634239223364590000%7D%7BjbHzkg%3D%3D%7D&dnoa.return_to_sig=7%2BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%2F%2FbF%2FhhYscgWzjg%2BB%2Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%3D%3D&openid.assoc_handle=%7BHMAC-SHA256%7D%7B4cca49b2%7D%7BMVGByQ%3D%3D%7D&openid.claimed_id=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.identity=http%3A%2F%2Frstrahl.myopenid.com%2F&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=http%3A%2F%2Fwww.myopenid.com%2Fserver&openid.response_nonce=2010-10-29T04%3A12%3A53Zn5F4r5&openid.return_to=http%3A%2F%2Fcodepaste.net%2FAccount%2FOpenIdLogon%3Fdnoa.userSuppliedIdentifier%3Dhttp%253A%252F%252Frstrahl.myopenid.com%252F%26dnoa.return_to_sig_handle%3D%257B634239223364590000%257D%257BjbHzkg%253D%253D%257D%26dnoa.return_to_sig%3D7%252BcGhp7UUkcV2B8W29ibIDnZuoGoqzyS%252F%252FbF%252FhhYscgWzjg%252BB%252Fj10ZpNdBkUCu86dkTL6f4OK2zY5qHhCnJ2Dw%253D%253D&openid.sig=h1GCSBTDAn1on98sLA6cti%2Bj1M6RffNerdVEI80mnYE%3D&openid.signed=assoc_handle%2Cclaimed_id%2Cidentity%2Cmode%2Cns%2Cns.sreg%2Cop_endpoint%2Cresponse_nonce%2Creturn_to%2Csigned%2Csreg.email%2Csreg.fullname&openid.sreg.email=rstrahl%40host.com&openid.sreg.fullname=Rick+Strahl

A 404 of course isn’t terribly helpful – normally a 404 is a resource-not-found error, but the resource is definitely there. So how the heck do you figure out what’s wrong? If you’re just interested in the solution, here’s the short version: IIS by default allows only for a 1024 byte query string, which is obviously exceeded by the above. The setting is controlled by the Request Filtering module in IIS 7 and later, which can be configured in ApplicationHost.config (in %windir%\system32\inetsrv\config). To set the value, configure the requestLimits key like so:

<configuration>
  <security>
    <requestFiltering>
      <requestLimits maxQueryString="2048">
      </requestLimits>
    </requestFiltering>
  </security>
</configuration>

(The same <requestLimits> element can usually also be set per site in web.config under <system.webServer><security>, as long as the section isn’t locked down at the server level.) This fixed me right up and made the requests work. How do you find out about problems like this? Ah yes, the troubles of an administrator. Read on and I’ll take you through a quick review of how I tracked this down.

Finding the Problem

The issue with the error returned is that IIS returns a 404 Resource not found error and doesn’t provide much information about it. If you’re lucky enough to be able to run your site from localhost, IIS is actually very helpful and gives you the right information immediately in a nicely detailed error page. The bottom of the page actually describes exactly what needs to be fixed. One problem with this easy way to find an error: You HAVE TO run localhost.
On my server, which has about 10 domains running, localhost doesn’t point at the particular site I had problems with, so I didn’t get the luxury of this nice error page.

Using Failed Request Tracing to retrieve Error Info

The first place I go with IIS errors is to turn on Failed Request Tracing in IIS to get more error information. If you have access to the server to make a configuration change, you can enable Failed Request Tracing like this: find the Failed Request Tracing Rules in the IIS Service Manager, select the option, and then Edit Site Tracing to enable tracing. Then add a rule for * (all content) and specify status codes from 100-999 to capture all errors. If you know exactly what error you’re looking for, it might help to specify it exactly to keep the number of errors down. Then run your request and let it fail. IIS will throw error log files into a folder like C:\inetpub\logs\FailedReqLogFiles\W3SVC5, where the trailing 5 is the instance ID of the site. These files are XML, but they include an XSL stylesheet that provides some decent formatting. In this case it pointed me straight at the offending module: OK, it’s the RequestFilteringModule. Request Filtering is built into IIS 7 and later and configured in ApplicationHost.config. This module defines a few basic rules about what paths and extensions are allowed in requests and, among other things, how long a query string is allowed to be. Most of these settings are pretty sensible, but the query string value can easily become a problem, especially if you’re dealing with OpenId, since these return URLs are quite extensive. Debugging failed requests is never fun, but IIS 7 and forward at least provides us the tools that can help point us in the right direction. The error message in the FRT report isn’t as nice as the IIS error message, but it at least points at the offending module, which gave me the clue I needed to look at request restrictions in ApplicationHost.config. This would still be a stretch if you’re not intimately familiar with it, but I think with some Google searches it would be easy to track this down with a few tries… Hope this was useful to some of you. Useful to me to put this out as a reminder – I’ve run into this issue before myself and totally forgot. Next time I’ve got it, right?

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET  Security

    Read the article

  • SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28

    - by pinaldave
Locking is a mechanism used by the SQL Server Database Engine to synchronize access by multiple users to the same piece of data at the same time. In simpler words, it maintains the integrity of data by protecting (or preventing) access to the database object.

From Books Online:

LCK_M_BU – Occurs when a task is waiting to acquire a Bulk Update (BU) lock.
LCK_M_IS – Occurs when a task is waiting to acquire an Intent Shared (IS) lock.
LCK_M_IU – Occurs when a task is waiting to acquire an Intent Update (IU) lock.
LCK_M_IX – Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock.
LCK_M_S – Occurs when a task is waiting to acquire a Shared lock.
LCK_M_SCH_M – Occurs when a task is waiting to acquire a Schema Modify lock.
LCK_M_SCH_S – Occurs when a task is waiting to acquire a Schema Share lock.
LCK_M_SIU – Occurs when a task is waiting to acquire a Shared With Intent Update lock.
LCK_M_SIX – Occurs when a task is waiting to acquire a Shared With Intent Exclusive lock.
LCK_M_U – Occurs when a task is waiting to acquire an Update lock.
LCK_M_UIX – Occurs when a task is waiting to acquire an Update With Intent Exclusive lock.
LCK_M_X – Occurs when a task is waiting to acquire an Exclusive lock.

LCK_M_XXX Explanation: I think the explanation of this wait type is the simplest. When any task is waiting to acquire a lock on any resource, this particular wait type occurs. The common reason for the task to be waiting to put a lock on the resource is that the resource is already locked and some other operation may be going on within it. This wait also indicates that resources are not available or are occupied at the moment. There is a good chance that the waiting queries start to time out if this wait type is very high; client applications may see degraded performance as well.

You can use various methods to find blocking queries (a worked DMV query follows below):
EXEC sp_who2
SQL SERVER – Quickest Way to Identify Blocking Query and Resolution – Dirty Solution
DMV – sys.dm_tran_locks
DMV – sys.dm_os_waiting_tasks

Reducing LCK_M_XXX waits:
Check the explicit transactions. If transactions are very long, this wait type can start building up because of other waiting transactions. Keep transactions small.
Serializable isolation can build up this wait type. If that is an acceptable isolation level for your business, this wait type may be natural. The default isolation of SQL Server is Read Committed. One of my clients had changed their isolation to Read Uncommitted; I strongly discourage the use of this, because it will probably lead to lots of dirty data in the database.
Identify blocking queries using the various methods described above, and then optimize them.
Partitioning can be one of the options to consider, because it allows transactions to execute concurrently on different partitions.
If there are runaway queries, use timeouts. (Please discuss this solution with your database architect first, as timeouts can work against you.)
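As a concrete illustration of the DMV approach above, here is a minimal sketch (using only documented DMV columns; sys.dm_os_waiting_tasks.resource_address joins to sys.dm_tran_locks.lock_owner_address) that lists sessions currently waiting on LCK_M_XXX wait types together with the session blocking them:

SELECT w.session_id          AS waiting_session_id,
       w.blocking_session_id AS blocking_session_id,
       w.wait_type,
       w.wait_duration_ms,
       l.resource_type,
       l.request_mode
FROM sys.dm_os_waiting_tasks AS w
JOIN sys.dm_tran_locks AS l
  ON w.resource_address = l.lock_owner_address
WHERE w.wait_type LIKE 'LCK_M_%';

Once the blocking session is identified, its current statement can be retrieved via sys.dm_exec_requests and sys.dm_exec_sql_text and then optimized.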
Check that there are no memory- or IO-related issues, using the following counters:

Checking Memory Related Perfmon Counters
SQLServer: Memory Manager\Memory Grants Pending (Consistently higher value than 0-2 is a problem)
SQLServer: Memory Manager\Memory Grants Outstanding (Consistently higher value than the benchmark is a problem)
SQLServer: Buffer Manager\Buffer Cache Hit Ratio (Higher is better; greater than 90% for a usually smooth-running system)
SQLServer: Buffer Manager\Page Life Expectancy (Consistently lower value than 300 seconds is a problem)
Memory: Available Mbytes (Information only)
Memory: Page Faults/sec (Benchmark only)
Memory: Pages/sec (Benchmark only)

Checking Disk Related Perfmon Counters
Average Disk sec/Read (Consistently higher value than 4-8 milliseconds is not good)
Average Disk sec/Write (Consistently higher value than 4-8 milliseconds is not good)
Average Disk Read/Write Queue Length (Consistently higher value than the benchmark is not good)

Read all the posts in the Wait Types and Queues series.

Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
It is closing in on a year now since Oracle’s acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!

As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, PeopleSoft, CRM On Demand, Fusion, DRM, Endeca, RightNow, and more – and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich, out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.

Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, providing integrated:
Address verification and cleansing
Individual matching
Organization matching

The services are suitable for either Batch or Real-Time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total, across all locales, CDS includes well over a million dictionary entries.

(Excerpt from EDQ’s CDS Individual Name Standardization Dictionary)

CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a ‘best of both worlds’ situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate ‘Identity Resolution’ and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack:

Address Verification and Standardization

(EDQ’s CDS Address Cleaning Process)

The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
Optimization of address verification by pre-standardizing data where required
Formatting of output addresses into the input address fields normally used by applications
Adding descriptions of the address verification and geocoding return codes

The process can then be used to provide real-time and batch address cleansing in any application, such as a simple web page calling address cleaning and geocoding as part of a check on individual data.

Duplicate Prevention

Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention, to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records (‘candidates’) that may match in the application data (which has been pre-seeded with keys using the same service). The ‘driving record’ (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below:

(EDQ Duplicate Prevention Architecture)

Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).

Batch Matching

In order to allow customers to use different match rules in batch than in real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel’s Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel’s ‘Full Match’ (match all records against each other) and ‘Incremental Match’ (match a subset of records against all of their selected candidates) modes.
The Future

The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:
- An out-of-the-box Data Quality Dashboard
- Even more comprehensive international data handling
- Address search (suggesting multiple results)
- Integrated address matching

The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

    - by Brian
    Oracle produced a world-record batch throughput for single-system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2. The workload includes both online and batch components. The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne long and short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute).

In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were each executed in separate Oracle Solaris Containers, which enabled optimal distribution of system resources and performance together with scalable and manageable virtualization. One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2 utilized only 55% of the available CPU power. The Oracle database server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance.

This configuration with the SPARC T4-2 server achieved 33% more users/core, 47% more UBEs/min and 78% more users/rack unit than the IBM Power 770 server. The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server with twice as many processors supported 12,000 concurrent online users while concurrently executing mixed batch workloads at only 65 UBEs per minute. This benchmark demonstrates more than 2x cost savings by consolidating the complete solution in a single SPARC T4-2 server, compared to earlier published results of 10,000 users and 67 UBEs per minute on two servers (a SPARC T4-2 and a SPARC T4-1). The Oracle database server used mirrored (RAID 1) volumes for the database, providing high availability for the data without impacting performance.

Performance Landscape

JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark, consolidated online with batch workload:

  System                                          Rack Units  Batch Rate (UBEs/m)  Online Users  Users/Unit  Users/Core  Version
  SPARC T4-2 (2 x SPARC T4, 2.85 GHz)             3           95.5                 8,000         2,667       500         9.0.2
  IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores)   8           65                   12,000        1,500       375         9.0.2

  Batch Rate (UBEs/m) - batch transaction rate in UBEs per minute

Configuration Summary

Hardware configuration:
- 1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz
- 256 GB memory
- 4 x 300 GB 10K RPM SAS internal disks
- 2 x 300 GB internal SSDs
- 2 x Sun Storage F5100 Flash Arrays

Software configuration:
- Oracle Solaris 10
- Oracle Solaris Containers
- JD Edwards EnterpriseOne 9.0.2
- JD Edwards EnterpriseOne Tools (8.98.4.2)
- Oracle WebLogic Server 11g (10.3.4)
- Oracle HTTP Server 11g
- Oracle Database 11g Release 2 (11.2.0.1)

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. 
Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and a UBE (Universal Batch Engine) workload of 61 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE workload runs from the JD Edwards Enterprise Application server. UBE processes come in three flavors:
- Short UBEs (< 1 minute) run business reports and summary analysis
- Mid UBEs (> 1 minute) create large reports of account, balance, and full address
- Long UBEs (> 2 minutes) simulate payroll, sales order, and night-only jobs

The UBE workload generates large numbers of PDF reports and log files. The UBE queues are categorized as QBATCHD, a single-threaded queue for long and medium UBEs, and the QPROCESS queue for short UBEs run concurrently. The UBE performance metric is the maximum number of concurrent UBE processes at a given transaction rate, in UBEs/minute.

Key Points and Best Practices

Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Servers 11g Release 1 coupled with two Oracle Web Tier HTTP server instances, and one Oracle Database 11g Release 2 database on a single SPARC T4-2 server were hosted in separate Oracle Solaris Containers bound to four processor sets, demonstrating consolidation of multiple applications, web servers and the database with optimal resource utilization. Interrupt fencing was configured on all Oracle Solaris Containers to channel interrupts to processors other than the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server. An Oracle WebLogic vertical cluster was configured on each web server container with twelve managed instances each, to load-balance users' requests and to provide the infrastructure that enables scaling to a high number of users with ease of deployment and high availability. The database log writer was run in the real-time (RT) scheduling class and bound to a processor set. The database redo logs were configured on raw disk partitions. The Oracle Solaris Container running the Enterprise Application server completed 61 short UBEs and 4 long UBEs concurrently as the mixed-size batch workload. The mixed-size UBEs ran concurrently from the Enterprise Application server with the 8,000 online users driven by LoadRunner.

See Also
- SPARC T4-2 Server: oracle.com, OTN
- JD Edwards EnterpriseOne: oracle.com, OTN
- Oracle Solaris: oracle.com, OTN
- Oracle Database 11g Release 2 Enterprise Edition: oracle.com, OTN
- Oracle Fusion Middleware: oracle.com, OTN

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.

    Read the article

  • Using real fonts in HTML 5 & CSS 3 pages

    - by nikolaosk
    This is going to be the fifth post in a series of posts regarding HTML 5. You can find the other posts here, here, here and here. In this post I will provide a hands-on example of how to use real fonts in HTML 5 pages with the use of CSS 3.

Font issues have long affected websites and caused all sorts of problems for web designers. The real problem with fonts for web developers until now was that they were forced to use only a handful of web-safe fonts - the fonts in wide use across most users' operating systems. CSS 3 frees web designers from relying only on web-safe fonts. Some designers (when they wanted to make their site stand out) resorted to various techniques like using images instead of fonts. That solution is not very accessibility-friendly and definitely less SEO-friendly.

CSS 3 (through its Fonts module) allows web developers to embed fonts directly in a web page. First we need to define the font and then attach the font to elements. Obviously we have various formats for fonts; some are supported by all modern browsers and some are not. The most common formats are Embedded OpenType (EOT), TrueType (TTF) and OpenType (OTF). I will use the @font-face declaration to define the font used in this page. Before you download fonts (in any format) make sure you have understood all the licensing issues. Please note that all these real fonts will be downloaded to the client's computer.

A great resource on the web (maybe the best) is http://www.typekit.com/. They have an abundance of web fonts for use; please note that they sell those fonts. Another free (best things in life are free, aren't they?) resource is the http://www.google.com/webfonts website. I have visited the website and downloaded the Aladin web font. When you download any font you like, make sure you read the license first. The Aladin web font is released under the Open Font License (OFL).

Before I go on with the actual demo I will use http://www.caniuse.com to check the support for web fonts in the latest versions of modern browsers. Please have a look at the picture below. We see that all the latest versions of modern browsers support this feature.

In order to be absolutely clear, this is not (and could not be) a detailed tutorial on HTML 5. There are other great resources for that. Navigate to the excellent interactive tutorials of W3Schools. Another excellent resource is HTML5 Doctor. Two very nice sites that show you what features and specifications are implemented by various browsers and their versions are http://caniuse.com/ and http://html5test.com/. At this time Chrome seems to support most of the HTML 5 specifications. Another excellent way to find out if the browser supports HTML 5 and CSS 3 features is to use the lightweight JavaScript library Modernizr.

In this hands-on example I will be using Expression Web 4.0. This application is not free. You can use any HTML editor you like, such as Visual Studio 2012 Express edition, which you can download here.

I create a simple HTML 5 page. The markup follows and it is very easy to understand:

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>HTML 5, CSS3 and JQuery</title>
    <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
    <link rel="stylesheet" type="text/css" href="style.css">
  </head>
  <body>
    <div id="header">
      <h1>Learn cutting edge technologies</h1>
      <p>HTML 5, JQuery, CSS3</p>
    </div>
    <div id="main">
      <h2>HTML 5</h2>
      <p>
        HTML5 is the latest version of HTML and XHTML. The HTML standard defines a single language that can be written in HTML and XML. It attempts to solve issues found in previous iterations of HTML and addresses the needs of Web Applications, an area previously not adequately covered by HTML.
      </p>
    </div>
  </body>
</html>

Then I create the style.css file (a standalone .css file contains only the rules themselves, with no <style> wrapper):

@font-face
{
  font-family: Aladin;
  src: url('Aladin-Regular.ttf');
}
h1
{
  font-family: Aladin, Georgia, serif;
}

As you can see, we want to style the h1 tag in our HTML 5 markup. I just use the @font-face rule, specifying the font-family and the source of the web font, and then use that name in the font-family property to style the h1 tag. Have a look below to see my page in IE10. Make sure you open this page in all the browsers installed on your machine, and make sure you have downloaded their latest versions. Now we can make our site stand out with web fonts and give it a really unique look and feel. Hope it helps!!!

    Read the article

  • How-to filter table filter input to only allow numeric input

    - by frank.nimphius
    In a previous ADF Code Corner post, I explained how to change the table filter behavior by intercepting the query condition in a query filter. See sample #30 at http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html. In this OTN Harvest post I explain how to prevent users from providing invalid character entries as table filter criteria, to avoid problems upon re-querying the table. In the example shown next, only numeric values are allowed for a table column filter.

To create a table that allows data filtering, drag a View Object - or a data collection of a Web Service or JPA business service - from the Data Controls panel and drop it as a table. Choose the Enable Filtering option in the Edit Table Columns dialog so the table renders with the column filter boxes displayed. The table filter fields are created using implicit af:inputText components that need to be customized for you to apply a custom filter input component, or to change the input behavior.

To change the input filter so only a defined set of input keys is allowed, you need to replace the default filter field with your own af:inputText field, to which you apply an af:clientListener tag that filters user keyboard entries. For this, in the Oracle JDeveloper visual editor, select the column whose filter you want to change and expand the column node in the Oracle JDeveloper Structure Window. Part of the column definition is the Column facet node. Expand the facets so you see the filter facet entry. The filter facet is grayed out as there is no custom facet defined. In a next step, open the Component Palette (Ctrl+Shift+P) and drag an Input Text component onto the facet. This marks the first part of the filter customization.

To make the custom filter component work, you need to map the af:inputText component value property to the ADF filter criteria that is exposed in the Expression Builder. Open the Expression Builder for the filter input component value property by clicking the arrow icon to its right. In the Expression Builder expand the JSP Objects | vs | filterCriteria node to select the attribute name represented by the table column. The vs entry is the name of a variable that is defined on the table and that grants you access to the table attributes.

Now that the filter works as before - though using a custom filter input component - you can add the af:clientListener tag to your custom filter component (af:inputText) to call out to JavaScript when users type in the column filter field. Point the client listener's method property to a JavaScript function that you reference or add using the af:resource tag, and set the type property value to keyDown.

<af:document id="d1">
    <af:resource type="javascript" source="/js/filterHandler.js"/>
…

The filter definition looks as shown below:

<af:inputText label="Label 1" id="it1"
              value="#{vs.filterCriteria.Employe
    <af:clientListener method="suppressCharacterInput"
                       type="keyDown"/>
</af:inputText>

The JavaScript code that you can use to either filter character inputs or numeric inputs is shown below. Just store this code in an external JavaScript (.js) file and reference it from the af:resource tag. 
// Allow numbers, cursor-control keys and delete keys
function suppressCharacterInput(evt) {
    var _keyCode = evt.getKeyCode();
    var _filterField = evt.getCurrentTarget();
    var _oldValue = _filterField.getValue();
    // Key codes 48-57 are the top-row digits; 96-105 are the numeric keypad.
    // The bounds are inclusive so that '9' (57), numpad 0 (96) and
    // numpad 9 (105) are not suppressed.
    if (!((_keyCode <= 57) || (_keyCode >= 96 && _keyCode <= 105))) {
        _filterField.setValue(_oldValue);
        evt.cancel();
    }
}

// Allow characters, cursor-control keys and delete keys
function suppressNumericInput(evt) {
    var _keyCode = evt.getKeyCode();
    var _filterField = evt.getCurrentTarget();
    var _oldValue = _filterField.getValue();
    // Check for numbers (top-row digits 48-57 and numeric keypad 96-105)
    if ((_keyCode >= 48 && _keyCode <= 57) || (_keyCode >= 96 && _keyCode <= 105)) {
        _filterField.setValue(_oldValue);
        evt.cancel();
    }
}

But what if browsers don't allow JavaScript? Don't worry about this. If browsers did not support JavaScript then ADF Faces as a whole would not work, and you would have a different problem.

    Read the article

  • Auto DOP and Concurrency

    - by jean-pierre.dijcks
    After spending some time in the cloud, I figured it is time to come down to earth and start discussing some of the new Auto DOP features some more. As Database Machines (the v2 machine runs Oracle Database 11.2) are effectively selling like hotcakes, it makes sense to talk about the new parallel features in more detail. For a basic understanding make sure you have read the initial post. The focus there is on Auto DOP and queuing, which is to some extent the focus here as well. But now I want to discuss concurrency a little and explain some of the relevant parameters and their impact, specifically in a situation with concurrency on the system.

The goal of Auto DOP

The idea behind calculating the Automatic Degree of Parallelism is to find the highest possible DOP (the ideal DOP) that still scales. In other words, if we were to increase the DOP above a certain point, we would see the performance curve tail off and the resource cost per unit of performance would become less optimal. The ideal DOP is therefore the best resource/performance point for that statement.

The goal of Queuing

On a normal production system we should see statements running concurrently. On a Database Machine we typically see high concurrency rates, so we need to find a way to deal with both high DOPs and high concurrency. Queuing is intended to make sure we:
- Don't throttle down a DOP because other statements are running on the system
- Stay within the physical limits of a system's processing power

Instead of making statements run at a lower DOP, we queue them to make sure they will get all the resources they want to run efficiently without thrashing the system. The theory - and hopefully the practice - is that by giving each statement its optimal DOP, the sum of all statements runs faster with queuing than without queuing.

Increasing the Number of Potential Parallel Statements

To determine how many statements we will consider running in parallel, a single parameter should be looked at: PARALLEL_MIN_TIME_THRESHOLD. The default value is set to 10 seconds. So far there is nothing new here... but do realize that anything serial (i.e. anything that stays under the threshold) goes straight into processing and is not considered in the rest of this post.

Now, if you have a system with two groups of queries - serial short-running ones and potentially parallel long-running ones - you may want to worry only about the long-running ones with this parallel statement threshold. As an example, let's assume the short-running stuff runs on average between 1 and 15 seconds in serial (and the business is quite happy with that), while the long-running stuff is in the realm of 1 to 5 minutes. It might be a good choice to set the threshold somewhere north of 30 seconds. That way the short-running queries all run serial as they do today (if it ain't broken, don't fix it), while the long-running ones are evaluated for (higher degrees of) parallelism. This makes sense because the longer-running ones are (at least in theory) more interesting to unleash a parallel processing model on, and the benefits of running these in parallel are much more significant (again, that is mostly the case).

Setting a Maximum DOP for a Statement

Now that you know how to control how many of your statements are considered to run in parallel, let's talk about the specific degree at which any given statement will be evaluated. As the initial post describes, this is controlled by PARALLEL_DEGREE_LIMIT. 
This parameter controls the degree on the entire cluster, and by default it is set to CPU (meaning it equals the Default DOP). For the sake of an example, let's say our Default DOP is 32. Looking at our 5-minute queries from the previous paragraph, the limit of 32 means that none of the statements evaluated for Auto DOP ever runs at more than a DOP of 32.

Concurrently Running a High DOP

A basic assumption about running high-DOP statements at high concurrency is that at some point in time (and this is true on any parallel processing platform!) you will run into a resource limitation. And yes, you can then buy more hardware (e.g. expand the Database Machine in Oracle's case), but that is not the point of this post... The goal is to find a balance between the highest possible DOP for each statement and the number of statements running concurrently, but with an emphasis on running each statement at its highest-efficiency DOP.

The PARALLEL_SERVERS_TARGET parameter is the all-important concurrency slider here. Setting this parameter to a higher number means more statements get to run at their maximum parallel degree before queuing kicks in. PARALLEL_SERVERS_TARGET is set per instance (so it needs to be set to the same value on all 8 nodes in a full-rack Database Machine). Just as a side note, this parameter is set in processes, not in DOP: each unit of DOP typically consumes 2 parallel server processes, and the default value corresponds to a DOP capacity of 2 * Default DOP, hence a default of 4 * Default DOP processes.

Let's say we have PARALLEL_SERVERS_TARGET set to 128. With our limit set to 32 (the default) we are able to run 4 statements concurrently at the highest DOP possible on this system before we start queuing. If these 4 statements are running, the next statement will be queued. To run a system at high concurrency, PARALLEL_SERVERS_TARGET should be raised from its default to be much closer (start with 60% or so) to PARALLEL_MAX_SERVERS.

By using both PARALLEL_SERVERS_TARGET and PARALLEL_DEGREE_LIMIT you can easily control how many statements run concurrently at good DOPs without excessive queuing. Because each workload is a little different, it makes sense to plan ahead, look at these parameters, and set them based on your requirements.
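Pulling the worked example together, here is a minimal sketch of how these three parameters might be set, using the illustrative values discussed above (a 30-second threshold, a degree limit of 32, and a target of 128 parallel server processes). The right values for your system depend entirely on your own workload:

-- Only statements with an estimated serial runtime above 30 seconds
-- are considered for Auto DOP; shorter ones run serial as before.
ALTER SYSTEM SET parallel_min_time_threshold = 30;

-- No statement evaluated for Auto DOP gets a DOP above 32.
ALTER SYSTEM SET parallel_degree_limit = 32;

-- Queuing kicks in once 128 parallel server processes are in use.
-- Set the same value on every instance of the cluster.
ALTER SYSTEM SET parallel_servers_target = 128;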

    Read the article

  • Windows Azure End to End Examples

    - by BuckWoody
    I’m fascinated by the way people learn. I’m told there are several methods people use to understand new information, from reading to watching, from experiencing to exploring. Personally, I use multiple methods of learning when I encounter a new topic, usually starting with reading a bit about the concepts. I quickly want to put those into practice, however, especially in the technical realm. I immediately look for examples where I can start trying out the concepts. But I often want a “real” example – not just something that represents the concept, but something that is real-world, showing some feature I could actually use. And it’s no different with the Windows Azure platform – I like finding things I can do now, and actually use. So when I started learning Windows Azure, I of course began with the Windows Azure Training Kit – which has lots of examples and labs, presentations and so on. But from there, I wanted more examples I could learn from, and eventually teach others with. I was asked if I would write a few of those up, so here are the ones I use.

CodePlex

CodePlex is Microsoft’s version of an “Open Source” repository. Anyone can start a project, add code, documentation and more to it and make it available to the world, free of charge, using various licenses as they wish. Microsoft also uses this location for most of the examples we publish, and sample databases for SQL Server. If you search in CodePlex for “Azure”, you’ll come back with a list of projects that folks have posted, including those of us at Microsoft. The source code and documentation are there, so you can learn using actual examples of code that will do what you need. There’s everything from a simple table query to a full project that is sort of a “Corporate Dropbox” that uses Windows Azure Storage. The advantage is that this code is immediately usable. It’s searchable, and you can often find a complete solution to meet your needs. The disadvantage is that the code is pretty specific – it may not cover a huge project like you’re looking for. Also, depending on the author(s), you might not find the documentation level you want.
Link: http://azureexamples.codeplex.com/site/search?query=Azure&ac=8

Tailspin

Microsoft Patterns and Practices is a group here that does an amazing job at sharing standard ways of doing IT – from operations to coding. If you’re not familiar with this resource, make sure you read up on it. Long before I joined Microsoft I used their work in my daily job – saved a ton of time. It has resources not only for Windows Azure but other Microsoft software as well. The Patterns and Practices group also publishes full books – you can buy these, but many are also online for free. There’s an end-to-end example for Windows Azure using a company called “Tailspin”, and the work covers not only the code but the design of the full solution. If you really want to understand the thought that goes into a Platform-as-a-Service solution, this is an excellent resource. The advantages are that this is a book, it’s complete, and it includes a discussion of design decisions. The disadvantage is that it’s a little over a year old – and in “Cloud” years that’s a lot. So many things have changed, improved, and have been added that you need to treat this as a resource, but not the only one. Still, highly recommended.
Link: http://msdn.microsoft.com/en-us/library/ff728592.aspx

Azure Stock Trader

Sometimes you need a mix of a CodePlex-style application, and a little more detail on how it was put together. And it would be great if you could actually play with the completed application, to see how it really functions on the actual platform. That’s the Azure Stock Trader application. There’s a place where you can read about the application, and then it’s been published to Windows Azure – the production platform – and you can use it, explore, and see how it performs. I use this application all the time to demonstrate Windows Azure, or a particular part of Windows Azure. The advantage is that this is an end-to-end application, and online as well. The disadvantage is that it takes a bit of self-learning to work through.
Links:
Learn it: http://msdn.microsoft.com/en-us/netframework/bb499684
Use it: https://azurestocktrader.cloudapp.net/

    Read the article

  • TFS 2010 SDK: Integrating Twitter with TFS Programmatically

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010, TFS API, Integrate Twitter TFS, TFS Programming, ALM, TweetSharp

Friends at 'TweetSharp' have created a wonderful .NET API for Twitter. With this blog post I will try to show you a basic TFS-Twitter integration scenario, where I will retrieve the Team Project details programmatically and then publish these details on my Twitter page. In future blogs I will demonstrate how to create a Windows service to capture the events raised by TFS and then publish them in your social ecosystem. Download working demo: Integrate Twitter - TFS Programmatically

1. Setting up the Twitter API

Download TweetSharp from => https://github.com/danielcrenna/tweetsharp. Before you can start playing around with this, you will need to register an application on Twitter. This is because Twitter uses the OAuth authentication protocol and will not issue an access token unless your application is registered with them. Go to https://dev.twitter.com/ and register your application.

Once you have registered your application, you will need the 'Consumer Key', 'Consumer Secret', 'Access Token' and 'Access Token Secret'.

2. Connecting to Twitter using the TweetSharp API

Create a new C# Windows Forms project and add references to 'Hammock.ClientProfile', 'Newtonsoft.Json' and 'TweetSharp'. Add the following keys to the App.config (note - the values for the keys below are incorrect; if you try to connect using them you will get an authorization failure error). Add a new class 'TwitterProxy' and use the following code to connect to the TwitterService (read more about OAuth authentication at http://dev.twitter.com/pages/auth):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Configuration;
using TweetSharp;

namespace WindowsFormsApplication2
{
    public class TwitterProxy
    {
        private static string _hero;
        private static string _consumerKey;
        private static string _consumerSecret;
        private static string _accessToken;
        private static string _accessTokenSecret;

        public static TwitterService ConnectToTwitter()
        {
            _consumerKey = ConfigurationManager.AppSettings["ConsumerKey"];
            _consumerSecret = ConfigurationManager.AppSettings["ConsumerSecret"];
            _accessToken = ConfigurationManager.AppSettings["AccessToken"];
            _accessTokenSecret = ConfigurationManager.AppSettings["AccessTokenSecret"];

            return new TwitterService(_consumerKey, _consumerSecret, _accessToken, _accessTokenSecret);
        }
    }
}

Time to tweet!

_twitterService = Proxy.TwitterProxy.ConnectToTwitter();
_twitterService.SendTweet("Hello World");

SendTweet will return the TweetStatus. If you do not get a 200 OK status, that means you have failed authentication; please revisit the access tokens.

--RESPONSE: https://api.twitter.com/1/statuses/update.json HTTP/1.1 200 OK
X-Transaction: 1308476106-69292-41752
X-Frame-Options: SAMEORIGIN
X-Runtime: 0.03040
X-Transaction-Mask: a6183ffa5f44ef11425211f25
Pragma: no-cache
X-Access-Level: read-write
X-Revision: DEV
X-MID: bd8aa0abeccb6efba38bc0a391a73fab98e983ea
Cache-Control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0
Content-Type: application/json; charset=utf-8
Date: Sun, 19 Jun 2011 09:35:06 GMT
Expires: Tue, 31 Mar 1981 05:00:00 GMT
Last-Modified: Sun, 19 Jun 2011 09:35:06 GMT
Server: hi
Vary: Accept-Encoding
Content-Encoding:
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
3. Integrate with TFS

In my blog post "Connect to TFS Programmatically" I have demonstrated in depth how to connect to TFS using the TFS API.

// Update the App.config with the URI of the Team Foundation Server you want to connect to.
// Make sure you have View Team Project Collection Details permissions on the server.
private static string _myUri = ConfigurationManager.AppSettings["TfsUri"];
private static TwitterService _twitterService = null;

private void button1_Click(object sender, EventArgs e)
{
    lblNotes.Text = string.Empty;

    try
    {
        StringBuilder notes = new StringBuilder();

        _twitterService = Proxy.TwitterProxy.ConnectToTwitter();
        _twitterService.SendTweet("Hello World");

        TfsConfigurationServer configurationServer =
            TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri));

        CatalogNode catalogNode = configurationServer.CatalogNode;

        ReadOnlyCollection<CatalogNode> tpcNodes = catalogNode.QueryChildren(
            new Guid[] { CatalogResourceTypes.ProjectCollection },
            false, CatalogQueryOptions.None);

        // tpc = Team Project Collection
        foreach (CatalogNode tpcNode in tpcNodes)
        {
            Guid tpcId = new Guid(tpcNode.Resource.Properties["InstanceId"]);
            TfsTeamProjectCollection tpc = configurationServer.GetTeamProjectCollection(tpcId);

            notes.AppendFormat("{0} Team Project Collection : {1}{0}", Environment.NewLine, tpc.Name);
            _twitterService.SendTweet(String.Format("http://Lunartech.codeplex.com - Connecting to Team Project Collection : {0} ", tpc.Name));

            // Get the catalog of tp = 'Team Projects' for the tpc = 'Team Project Collection'
            var tpNodes = tpcNode.QueryChildren(
                new Guid[] { CatalogResourceTypes.TeamProject },
                false, CatalogQueryOptions.None);

            foreach (var p in tpNodes)
            {
                notes.AppendFormat("{0} Team Project : {1} - {2}{0}", Environment.NewLine, p.Resource.DisplayName, "This is an open source project hosted on codeplex");
                _twitterService.SendTweet(String.Format(" Connected to Team Project: '{0}' – '{1}' ", p.Resource.DisplayName, "This is an open source project hosted on codeplex"));
            }
        }
        notes.AppendFormat("{0} Updates posted on Twitter : {1} {0}", Environment.NewLine, @"http://twitter.com/lunartech1");
        lblNotes.Text = notes.ToString();
    }
    catch (Exception ex)
    {
        lblError.Text = " Message : " + ex.Message + (ex.InnerException != null ? " Inner Exception : " + ex.InnerException : string.Empty);
    }
}

The extensions you can build integrating TFS and Twitter are incredible!

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactics; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument, rather than FUD fight, there are a few areas in his comparison that should be discussed.

Scaling is not free

For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users.

Choosing the denominator radically changes the picture

When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator. Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is:
- arbitrary: not necessarily comparable across vendors;
- fluid: modern chips shift chip resources in response to load; and
- invisible: unless you have a microscope, you can't see it.

By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get:
- 13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM, vs.
- 57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle.

The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because:
- I can see chips and count them;
- I can accurately compare the number of chips in my system to the count in some other vendor's system; and
- The probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores".

SPEC Fair Use requirements

Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule, www.spec.org/fairuse.html section I.D.2. For example, Mr. Kharkovski's footnote (1) begins:

Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result)

The questionable tactic, from a fair-use point of view, is that there is no such metric at the designated location. At www.spec.org, you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM. 
Despite the implication of the footnote, you will not find any mention of 449, nor anything that says 823. SPEC says that you can, under its fair use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value."

Substantiation and transparency

Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, and should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing:
- Were like units compared to like (e.g. list price to list price)?
- Were all components (hardware, software, support) included?
- Were all fees included?

Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable.

T5 claim for "Fastest Processor"

Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing: "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads).

The industry-standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+:
- for 1 chip vs. 1 chip,
- for 8 chips vs. 8 chips,
- for integer (SPECint_rate2006) and floating point (SPECfp_rate2006),
- for Peak tuning and for Base tuning.

For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC and 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page.

SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.

    Read the article

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Summit (GIDS) is one of the most popular annual events, held in Bangalore. This year GIDS ran from April 22 to 25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions I presented.

(Photo: Pluralsight Shades)

This was a great event and I had fantastic fun presenting technology here. I was indeed very excited that, along with me, many of my friends were presenting at the event as well. I want to thank all of you for attending my sessions and leaving standing room only every single time. I have already sent resources in my newsletter. You can sign up for the newsletter over here.

(Photo: Indexing is an Art)

I was amazed by the crowd present in the sessions at GIDS. There was great interest in the subject of SQL Server and performance tuning.

(Photo: Audience at GIDS)

I believe events like this provide a great platform to meet and share knowledge.

(Photo: Pinal at the Pluralsight booth)

Here are the abstracts of the sessions I presented. They were recorded, so at some point in time they will be available; but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight.

Indexes, the Unsung Hero (relevant Pluralsight course)

Slow-running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it comes to performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning one has to understand the fundamentals of indexes and the best practices associated with them. We will cover various aspects of indexing such as duplicate indexes, redundant indexes and missing indexes, as well as best practices around indexes.

SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (relevant Pluralsight course)

Many believe performance tuning and troubleshooting is an art which has been lost in time. However, the truth is that the art has evolved with time, and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources that, when bottlenecked, create performance problems: CPU, IO, and memory. In this session we will focus on high-CPU scenarios, their detection and their resolutions. If time permits we will cover other performance-related tips and tricks. At the end of this session, attendees will have a clear idea as well as action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance, tuning and the best practices associated with them. We will discuss performance tuning in this session with the help of demos. 
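As a small taste of the missing-index analysis mentioned in the indexing abstract above, here is an illustrative query against SQL Server's missing-index DMVs. This is a simplified sketch: the suggestions these DMVs produce are hints to evaluate against your workload, not indexes to create blindly.

-- Surface the most promising missing-index suggestions,
-- ranked by how often they would have been used and their estimated impact.
SELECT TOP 10
       d.statement AS [table],
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
  FROM sys.dm_db_missing_index_details d
  JOIN sys.dm_db_missing_index_groups g
    ON g.index_handle = d.index_handle
  JOIN sys.dm_db_missing_index_group_stats s
    ON s.group_handle = g.index_group_handle
 ORDER BY s.user_seeks * s.avg_user_impact DESC;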
(Photo: Pinal Dave at GIDS)

MySQL Performance Tuning – Unexplored Territory (relevant Pluralsight course)

Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and performance tuning, as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL performance. We will also try to cover other performance-related tips and tricks. At the end of this session, attendees will not only have a clear idea, but also carry home action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning one has to understand the fundamentals of performance, tuning and the best practices associated with them. You will also witness some impressive performance tuning demos in this session.

Hidden Secrets and Gems of SQL Server We Bet You Never Knew (relevant Pluralsight course)

SQL trio session! It really amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done amazing work in making working with a complex relational database a breeze for developers and administrators alike. Though it looks like child's play to some, the reality is far from this notion. Though the basics and fundamentals are simple and uniform across databases, understanding the behavior and the nuts and bolts of SQL Server is something we need to master over a period of time. With a collective experience of more than 30 years amongst the speakers on databases, we will try to take a unique tour of various aspects of SQL Server and bring to you life lessons learnt from working with SQL Server. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, and productivity tips on tools and more. This is a highly demo-filled session for practical use if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server.

I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar and my favorite, Govind Kanshi.

Summary

If you missed this event, here are two action items: 1) sign up for the resource newsletter, and 2) watch my video courses on Pluralsight.

Reference: Pinal Dave (http://blog.sqlauthority.com)
Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL
Tagged: GIDS

    Read the article

  • Availability Best Practices on Oracle VM Server for SPARC

    - by jsavit
    This is the first of a series of blog posts on configuring Oracle VM Server for SPARC (also called Logical Domains) for availability. This series will show how to plan for availability, improve serviceability, avoid single points of failure, and provide resiliency against hardware and software failures. Availability is a broad topic that has filled entire books, so these posts will focus on aspects specifically related to Oracle VM Server for SPARC. The goal is to improve Reliability, Availability and Serviceability (RAS); an article defining RAS can be found here.

Oracle VM Server for SPARC Principles for Availability

Let's state some guiding principles for availability that apply to Oracle VM Server for SPARC:

- Avoid Single Points Of Failure (SPOFs). Systems should be configured so a component failure does not result in a loss of application service. The general method to avoid SPOFs is to provide redundancy so service can continue without interruption if a component fails. For a critical application there may be multiple levels of redundancy so multiple failures can be tolerated. Oracle VM Server for SPARC makes it possible to configure systems that avoid SPOFs.

- Configure for availability at a level of resource and effort consistent with business needs. Production has different availability requirements than test/development, so it's worth expending resources to provide higher availability where the business requires it. Even within the category of production there may be different levels of criticality, outage tolerances, and recovery and repair time requirements. Keep in mind that a simple design may be more understandable and effective than a complex design that attempts to "do everything".

- Design for availability at the appropriate tier or level of the platform stack. Availability can be provided in the application, in the database, or in the virtualization, hardware and network layers they depend on - or using a combination of all of them. It may not be necessary to engineer resilient virtualization for stateless web applications where availability is provided by a network load balancer, or for enterprise applications like Oracle Real Application Clusters (RAC) and WebLogic that provide their own resiliency.

- It's (often) the same architecture whether virtual or not. For example, providing resiliency against a lost device path or failing disk media is done for the same reasons, and may use the same design, whether in a domain or not.

- It's (often) the same technique whether using domains or not. Many configuration steps are the same. For example, configuring IPMP or creating a redundant ZFS pool is pretty much the same within the guest whether you're in a guest domain or not. There are configuration steps and choices for provisioning the guest with the virtual network and disk devices, which we will discuss.

- Sometimes it is different using domains: there are new resources to configure. Most notable is the use of alternate service domains, which provides resiliency in case of a domain failure, and also permits improved serviceability via "rolling upgrades". This is an important differentiator between Oracle VM Server for SPARC and traditional virtual machine environments where all virtual I/O is provided by a monolithic infrastructure that is itself a SPOF. Alternate service domains are widely used to provide resiliency in production logical domains environments. 
- Some things are done via logical domains commands, and some are done in the guest. For example, with Oracle VM Server for SPARC we provide multiple network connections to the guest, and then configure network resiliency in the guest via IP Multipathing (IPMP) - essentially the same as for non-virtual systems. On the other hand, we configure virtual disk availability in the virtualization layer, and the guest sees an already-resilient disk without being aware of the details. These blogs will discuss configuration details like this.

- Live migration is not "high availability" in the sense of "continuous availability": if the server is down, then you don't live migrate from it! (A cluster or VM restart elsewhere would be used.) However, live migration can be part of the RAS (Reliability, Availability, Serviceability) picture by improving serviceability - you can move running domains off of a box before planned service or maintenance. The blog Best Practices - Live Migration on Oracle VM Server for SPARC discusses this.

Topics

Here are some of the topics that will be covered:
- Network availability using IP Multipathing and aggregates
- Disk path availability using virtual disks defined with multipath groups ("mpgroup")
- Disk media resiliency, configuring for redundant disks that can tolerate media loss
- Multiple service domains - this is probably the most significant item and the one most specific to Oracle VM Server for SPARC. It is very widely deployed in production environments as the means to provide network and disk availability, but it can be confusing. Subsequent articles will describe why and how to configure multiple service domains.

Note, for the sake of precision: an I/O domain is any domain that has a physical I/O resource (such as a PCIe bus root complex). A service domain is a domain providing virtual device services to other domains; it is almost always an I/O domain too (so it can have something to serve).

Resources

Here are some important links; we'll be drawing on their content in the next several articles:
- Oracle VM Server for SPARC Documentation
- Maximizing Application Reliability and Availability with SPARC T5 Servers, whitepaper by Gary Combs
- Maximizing Application Reliability and Availability with the SPARC M5-32 Server, whitepaper by Gary Combs

Summary

Oracle VM Server for SPARC offers features that can be used to provide highly-available environments. This and the following blog entries will describe how to plan and deploy them.

    Read the article

  • Jersey 2 in GlassFish 4 - First Java EE 7 Implementation Now Integrated (TOTD #182)

    - by arungupta
    The JAX-RS 2.0 specification released its Early Draft 3 recently. One of my earlier blogs explained the features as they were first introduced in the very first draft of the JAX-RS 2.0 specification. Last week marked another milestone, when the first Java EE 7 specification implementation was added to GlassFish 4 builds. Jakub blogged about the Jersey 2 integration in GlassFish 4 builds. Most of the basic functionality is working, but EJB, CDI, and Validation are still TBD. Here is a simple Tip Of The Day (TOTD) sample to get you started with using that functionality.

Create a Java EE 6-style Maven project:

mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=example -DartifactId=jersey2-helloworld -DarchetypeVersion=1.5 -DinteractiveMode=false

Note, this is still a Java EE 6 archetype, at least for now. Open the project in NetBeans IDE as it makes it much easier to edit/add the files. Add the following <repositories>:

<repositories>
  <repository>
    <id>snapshot-repository.java.net</id>
    <name>Java.net Snapshot Repository for Maven</name>
    <url>https://maven.java.net/content/repositories/snapshots/</url>
    <layout>default</layout>
  </repository>
</repositories>

Add the following <dependency>s:

<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.10</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>javax.ws.rs</groupId>
  <artifactId>javax.ws.rs-api</artifactId>
  <version>2.0-m09</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.glassfish.jersey.core</groupId>
  <artifactId>jersey-client</artifactId>
  <version>2.0-m05</version>
  <scope>test</scope>
</dependency>

The complete list of Maven coordinates for Jersey 2 is available here. An up-to-date status of Jersey 2 can always be obtained from here. Here is a simple resource class:

@Path("movies")
public class MoviesResource {

    @GET
    @Path("list")
    public List<Movie> getMovies() {
        List<Movie> movies = new ArrayList<Movie>();
        movies.add(new Movie("Million Dollar Baby", "Hillary Swank"));
        movies.add(new Movie("Toy Story", "Buzz Light Year"));
        movies.add(new Movie("Hunger Games", "Jennifer Lawrence"));
        return movies;
    }
}

This resource publishes a list of movies and is accessible at the "movies/list" path with HTTP GET. The project uses the standard JAX-RS APIs. Of course, you need the trivial "Movie" and "Application" classes as well; they are available in the downloadable project anyway.

Build the project:

mvn package

And deploy to GlassFish 4.0 promoted build 43 (download, unzip, and start as "bin/asadmin start-domain") as:

asadmin deploy --force=true target/jersey2-helloworld.war

Add a simple test case by right-clicking on the MoviesResource class, selecting "Tools", "Create Tests", and taking the defaults. Replace the function "testGetMovies" with:

@Test
public void testGetMovies() {
    System.out.println("getMovies");
    Client client = ClientFactory.newClient();
    List<Movie> movieList = client.target("http://localhost:8080/jersey2-helloworld/webresources/movies/list")
            .request()
            .get(new GenericType<List<Movie>>() {});
    assertEquals(3, movieList.size());
}

This test uses the newly defined JAX-RS 2 client APIs to access the RESTful resource. 
    Run the test by giving the command "mvn test" and see output like:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running example.MoviesResourceTest
        getMovies
        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.561 sec

        Results :

        Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

    GlassFish 4 contains Jersey 2 as the JAX-RS implementation. If you want to use Jersey 1.1 functionality, then Martin's blog provides more details on that. All JAX-RS 1.x functionality will be supported using standard APIs anyway; this workaround is only required if Jersey 1.x-specific functionality needs to be accessed. The complete source code explained in this project can be downloaded from here.

    Here are some pointers to follow:
    - JAX-RS 2 Specification Early Draft 3
    - Latest status on the specification (jax-rs-spec.java.net)
    - Latest JAX-RS 2.0 Javadocs
    - Latest status on Jersey (Reference Implementation of JAX-RS 2 - jersey.java.net)
    - Latest Jersey API Javadocs
    - Latest GlassFish 4.0 Promoted Build
    - Follow @gf_jersey

    Provide feedback on Jersey 2 to [email protected] and on the JAX-RS specification to [email protected].
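    The post does not reproduce the "Movie" and "Application" classes, so here is a minimal sketch of what they might look like. The package, class names, and the JAXB-based mapping are assumptions for illustration, not code from the downloadable project; the "webresources" application path is taken from the URL used in the test above.

        package example;

        import javax.xml.bind.annotation.XmlRootElement;

        // JAXB-annotated POJO so JAX-RS can marshal it to XML/JSON (assumed mapping)
        @XmlRootElement
        public class Movie {
            private String name;
            private String actor;

            public Movie() { }  // no-arg constructor required by JAXB

            public Movie(String name, String actor) {
                this.name = name;
                this.actor = actor;
            }

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getActor() { return actor; }
            public void setActor(String actor) { this.actor = actor; }
        }

        package example;

        import javax.ws.rs.ApplicationPath;
        import javax.ws.rs.core.Application;

        // Registers the JAX-RS application at the /webresources path used by the test URL
        @ApplicationPath("webresources")
        public class MyApplication extends Application { }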

    Read the article

  • BizTalk 2009 - Custom Functoid Wizard

    - by StuartBrierley
    When creating BizTalk maps you may find that there are times when you need to perform tasks that the standard functoids do not cover.  At other times you may find yourself repeating a pattern of standard functoids over and over again, adding visual complexity to an otherwise simple process.  In these cases you may find it preferable to create your own custom functoids.  In the past I have created a number of custom functoids from scratch, but recently I decided to try out the Custom Functoid Wizard for BizTalk 2009.

    After downloading and installing the wizard you should start Visual Studio and select to create a new BizTalk Server Functoid Project. Following the splash screen you will be presented with the General Properties screen, where you can set the class name, namespace, assembly name and strong name key file.

    The next screen is the first set of properties for the functoid.  First of all is the functoid ID; this must be a value above 6000. You should also then set the name, tooltip and description of the functoid.  The name will appear in the Visual Studio toolbox and the tooltip on hover over in the toolbox.  The description will be shown when you configure the functoid inputs when using it in a map; as such it should provide a decent level of information to allow the functoid to be used.

    Next you must set the category, exception message, icon and implementation language.  The category will affect the positioning of the functoid within the toolbox and also some of the behaviours of the functoid.

    We must then define the parameters and connections for our new functoid.  Here you can define the names and types of your input parameters along with the minimum and maximum number of input connections.  You will also need to define the types of connections accepted and the output type of the functoid.

    Finally you can click finish and your custom functoid project will be created. The results of this process can be seen in the Solution Explorer, where you will see that a project, a functoid class file and a resource file have been created for you.

    If you open the class file you will see that the following code has been created for you. The "base" function sets all the properties that you previously detailed in the Custom Functoid Wizard.

        public TestFunctoids() : base()
        {
            int functoidID;
            // This has to be a number greater than 6000
            functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
            this.ID = functoidID;

            // Set Resource strings, bitmaps
            SetupResourceAssembly(ResourceName, Assembly.GetExecutingAssembly());
            SetName("FunctoidName");
            SetTooltip("FunctoidToolTip");
            SetDescription("FunctoidDescription");
            SetBitmap("FunctoidBitmap");

            // Minimum and maximum parameters that the functoid accepts
            this.SetMinParams(2);
            this.SetMaxParams(2);

            // Function name that needs to be called when this Functoid is invoked.
            // Put this in GAC.
            SetExternalFunctionName(GetType().Assembly.FullName,
                "MyCompany.BizTalk.Functoids.TestFuntoids.TestFunctoids", "Execute");

            // Category for this functoid.
            this.Category = FunctoidCategory.String;

            // Input and output Connection type
            this.OutputConnectionType = ConnectionType.AllExceptRecord;
            AddInputConnectionType(ConnectionType.AllExceptRecord);
        }

    The "Execute" function provides a skeleton function that contains the code to be executed by your new functoid.  The inputs and outputs should match those you defined in the Custom Functoid Wizard.
        public System.Int32 Execute(System.Int32 Cool)
        {
            ResourceManager resmgr = new ResourceManager(ResourceName, Assembly.GetExecutingAssembly());
            try
            {
                // TODO: Implement Functoid Logic
                return Cool; // placeholder return so the skeleton compiles; replace with real logic
            }
            catch (Exception e)
            {
                throw new Exception(resmgr.GetString("FunctoidException"), e);
            }
        }

    Opening the resource file you will see some of the various string values that you defined in the Custom Functoid Wizard - Name, Tooltip, Description and Exception. You can also select to look at the image resources; this will display the embedded icon image for the functoid.  To change this, right click the icon and select "Import from File".

    Once you have completed the skeleton code you can then look at trying out your functoid. To do this you will need to build the project, copy the compiled DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions and then refresh the toolbox in Visual Studio.

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution, of course, is to mitigate each of those factors, but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
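To make the partition-size arithmetic concrete, here is a back-of-the-envelope sketch in Java; all figures are hypothetical, not measurements. With a fixed amount of primary data, fewer partitions mean more data per partition, and therefore a longer single-threaded pause whenever one partition must be recovered.

    public class PartitionSizing {
        public static void main(String[] args) {
            // Hypothetical cluster: 1 TB of primary cache data spread across
            // the partition count configured for the distributed service.
            double cacheDataGB = 1024.0;
            int[] partitionCounts = {257, 1021, 8191}; // partition counts are typically prime

            for (int partitions : partitionCounts) {
                double perPartitionMB = cacheDataGB * 1024.0 / partitions;
                // Recovery promotes backup entries one partition at a time, so a
                // larger per-partition payload means a longer pause on the service thread.
                System.out.printf("partitions=%5d  ~%.0f MB per partition%n",
                                  partitions, perPartitionMB);
            }
        }
    }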

    Read the article

  • Microsoft Sql Server 2008 R2 System Databases

    For a majority of software developers little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications.  I personally place myself in this grouping. In my case, I have used various versions of Microsoft’s SQL Server (2000, 2005, and 2008 R2) and just recently learned how valuable they really are when I was preparing to deliver a lecture on "SQL Server 2008 R2, System Databases".

    Microsoft Sql Server 2008 R2 System Databases
    So what are system databases in MS SQL Server, and why should I know them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. These system databases individually provide specific functionality that allows MS SQL Server to function.

        Name          Database File              Log File
        Master        master.mdf                 mastlog.ldf
        Resource      mssqlsystemresource.mdf    mssqlsystemresource.ldf
        Model         model.mdf                  modellog.ldf
        MSDB          msdbdata.mdf               msdblog.ldf
        Distribution  distmdl.mdf                distmdl.ldf
        TempDB        tempdb.mdf                 templog.ldf

    Master Database
    If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user created database. MS SQL Server requires the Master database in order for the DBMS to start, due to the information that it stores. Examples of data stored in the Master database:
    - User Logins
    - Linked Servers
    - Configuration information
    - Information on User Databases

    Resource Database
    Honestly, until recently I never knew this database even existed; I only learned of it when I started to research SQL Server system databases. The reason for this is due largely to the fact that the Resource database is hidden from users. In fact, the database files are stored within the Binn folder instead of the standard MS SQL Server database folder path. This database contains all system objects that can be accessed by all other databases.  In short, this database contains all the system views and stored procedures that appear in all other user databases regarding system information. One of the many benefits of storing system views and stored procedures in a single hidden database is that it improves upgrading a SQL Server database; not to mention that maintenance is decreased since only one code base has to be maintained for all of the system views and procedures.

    Model Database
    The Model database, as the name implies, is the model for all new databases created by users. This allows for predefining default database objects for all new databases within a MS SQL Server instance. For example, if every database created by a user needs to have an “Audit” table when it is created, then defining the “Audit” table in the model guarantees that the table will be located in every new database created after the model is altered.

    MSDB Database
    The MSDB database is used by SQL Server Agent, SQL Server Database Mail, and SQL Server Service Broker, along with SQL Server. The SQL Server Agent uses this database to store job configurations and SQL job schedules, along with SQL Alerts and Operators. In addition, this database also stores all SQL job parameters along with each job’s execution history.  Finally, this database is also used to store database backup and maintenance plans, as well as details pertaining to SQL log shipping if it is being used.

    Distribution Database
    The Distribution database is only used during replication and stores metadata and history information pertaining to the act of replicating data.
    Furthermore, when transactional replication is used this database also stores information regarding each transaction. It is important to note that replication is not turned on by default in MS SQL Server and that the Distribution database is hidden from SSMS.

    Tempdb Database
    The Tempdb, as the name implies, is used to store temporary data and data objects. Examples of this include temp tables and temporary stored procedures. It is important to note that all data and data objects are cleared from this database when SQL Server restarts. This database is also used by SQL Server when it is performing some internal operations. Typically, SQL Server uses this database for the purpose of large sort and index operations. Finally, this database is used to store row versions if row versioning or snapshot isolation transactions are being used by SQL Server.

    Additionally, I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.
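    As a quick way to see most of these databases for yourself, you can query the sys.databases catalog view. The following is a minimal JDBC sketch (the connection string and credentials are placeholders); note that the hidden Resource database does not appear in sys.databases at all, which is consistent with the description above.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ListSystemDatabases {
            public static void main(String[] args) throws Exception {
                // Placeholder connection string - adjust server, port, and credentials.
                String url = "jdbc:sqlserver://localhost:1433;user=sa;password=secret";
                try (Connection con = DriverManager.getConnection(url);
                     Statement stmt = con.createStatement();
                     // database_id 1-4 are master, tempdb, model, and msdb;
                     // the hidden Resource database is not listed here.
                     ResultSet rs = stmt.executeQuery(
                         "SELECT name, database_id FROM sys.databases WHERE database_id <= 4")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("database_id") + ": " + rs.getString("name"));
                    }
                }
            }
        }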

    Read the article

  • OIM 11g - Multi Valued attribute reconciliation of a child form

    - by user604275
    This topic gives a brief description of how we can do reconciliation of a child form attribute, which is also multi-valued, from a flat file. The format of the flat file is (an example):

        ManagementDomain1|Entitlement1|DIRECTORY SERVER,EMAIL
        ManagementDomain2|Entitlement2|EMAIL PROVIDER INSTANCE - UMS,EMAIL VERIFICATION

    In OIM there will be a parent form for the fields Management Domain and Entitlement. Reconciliation will assign Servers (which are multi-valued) to the corresponding Management Domain and Entitlement. In the flat file, multi-valued fields are separated by a comma (,).

    In the Design Console, create a form with 'Server Name' as a field and make it a child form. Open the corresponding Resource Object and add this field for reconciliation. While adding, choose the 'Multivalued' check box. (Please find the attached screen shot on how to add it, Child Table.docx.) Open the process definition and add the child form fields for reconciliation. Then click the 'Create Reconciliation Profile' button on the Resource Object tab.

    The API methods used for child form reconciliation are:

    1. reconEventKey = reconOpsIntf.createReconciliationEvent(resObjName, reconData, false);
       ('false' here tells that we are creating the recon event for a child table.)

    2. reconOpsIntf.providingAllMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, true);
       (RECON_FIELD_IN_RO is the field that we added in the Resource Object while adding it for reconciliation; please refer to the screen shot.)

    3. reconOpsIntf.addDirectBulkMultiAttributeData(reconEventKey, RECON_FIELD_IN_RO, bulkChildDataMapList);
       bulkChildDataMapList is coded as below:

           List<Map> bulkChildDataMapList = new ArrayList<Map>();
           for (int i = 0; i < stokens.length; i++) {
               Map<String, String> attributeMap = new HashMap<String, String>();
               attributeMap.put("Server Name", stokens[i]);
               bulkChildDataMapList.add(attributeMap);
           }

    4. reconOpsIntf.finishReconciliationEvent(reconEventKey);

    5. reconOpsIntf.processReconciliationEvent(reconEventKey);

    Now, we have to register the plug-in, import the metadata into MDS and then create a scheduled job whose execution will run the reconciliation.
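    Putting the steps together, here is a hedged end-to-end sketch. The resource object name, recon field name, and parent field names are assumptions for illustration; reconOpsIntf is assumed to be an already-obtained Thor.API.Operations.tcReconciliationOperationsIntf handle, and the recon event key is assumed to be a long.

        import Thor.API.Operations.tcReconciliationOperationsIntf;

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Drives one child-table recon event per flat-file line.
        public class ChildTableReconDriver {

            private static final String RESOURCE_OBJECT   = "MyResourceObject"; // assumed name
            private static final String RECON_FIELD_IN_RO = "ServerList";       // assumed name

            public void processFile(String path, tcReconciliationOperationsIntf reconOpsIntf)
                    throws Exception {
                try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] fields = line.split("\\|"); // domain|entitlement|servers
                        Map<String, String> reconData = new HashMap<String, String>();
                        reconData.put("Management Domain", fields[0]); // assumed field names
                        reconData.put("Entitlement", fields[1]);

                        // 'false' -> child-table data will be supplied before finishing
                        long reconEventKey = reconOpsIntf.createReconciliationEvent(
                                RESOURCE_OBJECT, reconData, false);
                        reconOpsIntf.providingAllMultiAttributeData(
                                reconEventKey, RECON_FIELD_IN_RO, true);

                        String[] stokens = fields[2].split(","); // multi-valued servers
                        List<Map> bulkChildDataMapList = new ArrayList<Map>();
                        for (String token : stokens) {
                            Map<String, String> attributeMap = new HashMap<String, String>();
                            attributeMap.put("Server Name", token);
                            bulkChildDataMapList.add(attributeMap);
                        }
                        reconOpsIntf.addDirectBulkMultiAttributeData(
                                reconEventKey, RECON_FIELD_IN_RO, bulkChildDataMapList);

                        reconOpsIntf.finishReconciliationEvent(reconEventKey);
                        reconOpsIntf.processReconciliationEvent(reconEventKey);
                    }
                }
            }
        }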

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via an application server's transaction manager:

    - Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache.
    - Use of the TransactionMap interface via the included resource adapter.
    - Use of the new ACID transaction framework, introduced in Coherence 3.6.

    Each of these may have significant drawbacks for certain workloads.

    Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency and a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions), since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries), since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit is that it avoids the limitations of Coherence's write-through caching implementation.

    TransactionMap is generally used when Coherence acts as the system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be used either to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being:

    - The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing, which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects).
    - If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound, since the commit phase is usually very short (all locks having been previously acquired). Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity).
    - The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove().

    The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used as either a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect is that this feature has not seen significant adoption, meaning that any use of it is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature.

    In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.
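    For reference, here is a minimal sketch of the local TransactionMap usage discussed above. The cache name and keys are invented, and this shows the pre-3.6 local transaction API rather than an XA-coordinated flow (in the XA case, the resource adapter and the application server's transaction manager would drive begin/prepare/commit instead).

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.TransactionMap;

        public class TransactionMapExample {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("example-cache"); // assumed cache name

                // Wrap the cache in a local transaction context.
                TransactionMap tmap = CacheFactory.getLocalTransaction(cache);
                tmap.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                tmap.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);

                tmap.begin();
                try {
                    // Only a few methods are transactional - primarily get(), put(), remove().
                    tmap.put("account-1", Integer.valueOf(100));
                    tmap.put("account-2", Integer.valueOf(200));
                    tmap.prepare();  // acquire and validate before commit
                    tmap.commit();
                } catch (RuntimeException e) {
                    tmap.rollback();
                    throw e;
                } finally {
                    CacheFactory.shutdown();
                }
            }
        }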

    Read the article

  • CodePlex Daily Summary for Thursday, June 23, 2011

    CodePlex Daily Summary for Thursday, June 23, 2011Popular ReleasesMiniTwitter: 1.70: MiniTwitter 1.70 ???? ?? ????? xAuth ?? OAuth ??????? 1.70 ??????????????????????????。 ???????????????? Twitter ? Web ??????????、PIN ????????????????????。??????????????????、???????????????????????????。Total Commander SkyDrive File System Plugin (.wfx): Total Commander SkyDrive File System Plugin 0.8.7b: Total Commander SkyDrive File System Plugin version 0.8.7b. Bug fixes: - BROKEN PLUGIN by upgrading SkyDriveServiceClient version 2.0.1b. Please do not forget to express your opinion of the plugin by rating it! Donate (EUR)SkyDrive .Net API Client: SkyDrive .Net API Client 2.0.1b (RELOADED): SkyDrive .Net API Client assembly has been RELOADED in version 2.0.1b as a REAL API. It supports the followings: - Creating root and sub folders - Uploading and downloading files - Renaming and deleting folders and files Bug fixes: - BROKEN API (issue 6834) Please do not forget to express your opinion of the assembly by rating it! Donate (EUR)Mini SQL Query: Mini SQL Query v1.0.0.59794: This release includes the following enhancements: Added a Most Recently Used file list Added Row counts to the query (per tab) and table view windows Added the Command Timeout option, only valid for MSSQL for now - see options If you have no idea what this thing is make sure you check out http://pksoftware.net/MiniSqlQuery/Help/MiniSqlQueryQuickStart.docx for an introduction. PK :-]HydroDesktop - CUAHSI Hydrologic Information System Desktop Application: 1.2.591 Beta Release: 1.2.591 Beta ReleaseJavaScript Web Resource Manager for Microsoft Dynamics CRM 2011: JavaScript Web Resource Manager (v1.0.521.54): Initial releasepatterns & practices: Project Silk: Project Silk Community Drop 12 - June 22, 2011: Changes from previous drop: Minor code changes. New "Introduction" chapter. New "Modularity" chapter. Updated "Architecture" chapter. Updated "Server-Side Implementation" chapter. Updated "Client Data Management and Caching" chapter. Guidance Chapters Ready for Review The Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separate download for your convenience. Installation Overview To ins...MapWinGIS ActiveX Map and GIS Component: Second Release Candidate for v4.8: This is the second release candidate for v4.8. This is the ocx installer only, with the new dependancies like GDAL v1.8, GEOS v3.3 and updated libraries for MrSid and ECW.Epinova.CRMFramework: Epinova.CRMFramework 0.5: Beta Release Candidate. Issues solved in this release: http://crmframework.codeplex.com/workitem/593SQL Server HowTo: Version 1.0: Initial ReleaseDropBox Linker: DropBox Linker 1.3: Added "Get links..." dialog, that provides selective public files links copying Get links link added to tray menu as the default option Fixed URL encoding .NET Framework 4.0 Client Profile requiredDotNetNuke® Community Edition: 06.00.00 Beta: Beta 1 (Build 2300) includes many important enhancements to the user experience. The control panel has been updated for easier access to the most important features and additional forms have been adapted to the new pattern. This release also includes many bug fixes that make it more stable than previous CTP releases. Beta ForumsTerraria World Viewer: Version 1.4: Update June 21st World file will be stored in memory to minimize chances of Terraria writing to it while we read it. Different set of APIs allow the program to draw the world much quicker. 
Loading world information (world variables, chest list) won't cause the GUI to freeze at all anymore. Re-introduced the "Filter chests" checkbox: Allow disabling of chest filter/finder so all chest symbos are always drawn. First-time users will have a default world path suggested to them: C:\Users\U...BlogEngine.NET: BlogEngine.NET 2.5 RC: BlogEngine.NET Hosting - Click Here! 3 Months FREE – BlogEngine.NET Hosting – Click Here! This is a Release Candidate version for BlogEngine.NET 2.5. The most current, stable version of BlogEngine.NET is version 2.0. Find out more about the BlogEngine.NET 2.5 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. To get started, be sure to check out our installation documentation. If you are upgrading from a previous version, please take a look at ...Microsoft All-In-One Code Framework - a centralized code sample library: All-In-One Code Framework 2011-06-19: Alternatively, you can install Sample Browser or Sample Browser VS extension, and download the code samples from Sample Browser. Improved and Newly Added Examples:For an up-to-date code sample index, please refer to All-In-One Code Framework Sample Catalog. NEW Samples for Windows Azure Sample Description Owner CSAzureStartupTask The sample demonstrates using the startup tasks to install the prerequisites or to modify configuration settings for your environment in Windows Azure Rafe Wu ...IronPython: 2.7.1 Beta 1: This is the first beta release of IronPython 2.7. Like IronPython 54498, this release requires .NET 4 or Silverlight 4. This release will replace any existing IronPython installation. The highlights of this release are: Updated the standard library to match CPython 2.7.2. Add the ast, csv, and unicodedata modules. Fixed several bugs. IronPython Tools for Visual Studio are disabled by default. See http://pytools.codeplex.com for the next generation of Python Visual Studio support. See...Facebook C# SDK: 5.0.40: This is a RTW release which adds new features to v5.0.26 RTW. Support for multiple FacebookMediaObjects in one request. Allow FacebookMediaObjects in batch requests. Removes support for Cassini WebServer (visual studio inbuilt web server). Better support for unit testing and mocking. updated SimpleJson to v0.6 Refer to CHANGES.txt for details. For more information about this release see the following blog posts: Facebook C# SDK - Multiple file uploads in Batch Requests Faceb...Candescent NUI: Candescent NUI (7774): This is the binary version of the source code in change set 7774.EffectControls-Silverlight controls with animation effects: EffectControls: EffectControlsNLog - Advanced .NET Logging: NLog 2.0 Release Candidate: Release notes for NLog 2.0 RC can be found at http://nlog-project.org/nlog-2-rc-release-notesNew ProjectsA SharePoint Client Object Model Demo in Silverlight: This demo shows how to utilize the SharePoint 2010 Foundation's Client Object Model within a Silverlight application.Azure SDK Extentions (ja): Azure SDK Tools ????????VS??????????????????。 ??????????????????、????????????????。Bango Apple iOS Application Analytics SDK: Bango application analytics is an analytics solution for mobile applications. This SDK provides a framework you can use in your application to add analytics capabilities to your mobile applications. It's developed in Object C and targets the Apple iOS operating system. 
Binsembler: Binsembler allows assembler programmers to easily import files into their source code, so that they'll no longer have to use external flash memory just for a few bytes of data. The project is developed in C#.DataTool: Liver.DTEMICHAG: This project contains the .NET Microframework solution for the EMIC Home Automation Gateway (EMICHAG). This project created within the co-funded research project called HOMEPLANE (http://homeplane.org/). html5_stady: html5 DemoJavaScript Web Resource Manager for Microsoft Dynamics CRM 2011: JavaScript Web Resource Manager for Microsoft Dynamics CRM 2011 helps CRM developers to extract javascript web resources to disk, maintain them and import changes back to CRM database. You will no longer have to perform mutliple clicks operation to update you js web resourcesKiefer SharePoint 2010 Templates: The Kiefer Consulting SharePoint 2010 Site Templates allow you to quickly create SharePoint sites pre-configured to meet specific business needs. These sites use only out of the box SharePoint Foundation 2010 features and will run on SharePoint Server 2010 and Office 365.KxFrame: KxFrame is intended to be free, easy-to-use, rapid business application framework. How many times were you starting to develop application from scratch? And every time you say to yourself: "Just this time, next time I'll make some framework"? It's over now! Lets make business application framework together and enjoy creating applications faster than ever.Made In Hell: Small console game. Just a training in code writings.Merula SmartPages: When you want to remember your data in an ASP.NET site after a postback, you will have to use Session or ViewState to remember your data. It’s a lot work to use Session and ViewStates, I just want to declare variables like in a desktop application. So I created a library called Merula SmartPages. This library will remember the properties and variables of your page. MVC ASPX to Razor View Converter: So this project will convert your MVC ASPX pages you worked so hard to make into cshtml files. You simply drop the exe into the folder and it works. I hope you enjoy, thanks! Netduino Utilities: Netduino classes I use with various odds of hardware I have. Hope it's useful ;-) NetEssentialsBugtrackerProject: Bug tracker for telerikPin My Site: Site to help demonstrate pinning capabilities of IE9Python3 ????: Python3 ????RDI Open CTe: RDI Open CTeRDI Open SPED (Contábil): RDI Open SPED (Contábil)RDI Open SPED (Fiscal): RDI Open SPED (Fiscal)sbc: sbcSimply Helpdesk UserControl: Simply Helpdesk is a user control that allows people to embed a helpdesk directly into their active projects. It is design to be as simple as possible, Minimal configuration is required to get it up and running.SQL Server HowTo: Short T-SQL scripts for difficult tasks from everyday job of programmers and database administrators. Also contains links to external resources with such useful information.Stashy: Stash data simply. Stashy is a Micro-NoSql framework. Stashy is a tiny interface, plus four reference implementations, for adding data storage and retrieval to all your .net types. Drop a single stashy class into your application then you can save and load the lot. storeBags01: hacer pedido de bolsos maletas y morrales compra de bolsos y maletas storebags UMTS Net: Program optimizes deployment of UMTS devicesvAddins - A little different addins framework: vAddins transforms C# and VB into powerful scripting languages for making addins or scripts. 
With this, you no longer need to open Visual Studio or other IDEs, reference assemblies, write the code and then built, move and test... Now you only have to write the code and test!Visual Studio Tags: A Visual Studio extension for tagging your source code files, and then explore your files using the tags.Window Phone Private Media Library: Windows Phone application to protect your media files from unauthorized eyes using your phone.Windows Touch/Mouse/TUIO Multi-touch Gesture recognizer & Trainer: This projects based on Single point Gesture recognizer paper of http://faculty.washington.edu/wobbrock/pubs/uist-07.1.pdf of unistrokes alphabets. The project expand capabilities to allow multi-touch gestures. C# library for rapid use. Mono Client. And C++ Client.WPF, Silverlight PDF viewer: WPF, Silverlight PDF viewer

    Read the article

  • Silverlight 4 MediaElement play sound

    - by NealWalters
    I converted a local sound file to a resource, which built this in my XAML: <UserControl.Resources> <my:Uri x:Key="SoundFiles">file:///c:/Audio/HebrewDemo/Shalom.wav</my:Uri> </UserControl.Resources> I did this by pasting a local disk mp3 filename into source, then clicked on the "dot" by source and chose "Extract Value to Resource". When I run, it tells me that "Uri" is not valid, and sure enough, in the Intellisense, I see other elements that start with "uri" but not just URI by itself. In the real world I want to specify a dynamic mp3 file name. For example, I might have a database of foreign language words used for flashcards, I want to play a sound file on a URL. But I thought I would try to walk before running...

    Read the article

< Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >