Search Results

Search found 22641 results on 906 pages for 'use case'.


  • can I use the IOC when using a 3rd party library

    - by Greg
    Hi, Q1: If I am using a reusable library that exposes interfaces and classes with a getInstance-style factory for creating the concrete implementations, does it still make sense on the client side to use an IoC container to create instances of these classes, or is that just applying a double layer of abstraction? Q2: In the case where I'm building the reusable library myself and want the client to use an IoC container, can I then dispense in my reusable library with the overhead of factories or "getInstance" methods for instantiating the classes in the client (i.e., since the IoC container would do this anyway)?
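
    One possible arrangement for Q1 (a sketch only; the container, the ILogger interface and the ThirdPartyLoggers.GetInstance factory below are made-up stand-ins, not types from the question) is to register the library's own factory with the container as a delegate. The container then stays the single composition root for the client while object creation is still delegated to the library, so no second abstraction layer is built on top of it:

        using System;
        using System.Collections.Generic;

        // Minimal illustrative container; real containers (Autofac, Unity, ...) offer an
        // equivalent "register a factory delegate" feature.
        class TinyContainer
        {
            private readonly Dictionary<Type, Func<object>> factories = new Dictionary<Type, Func<object>>();

            public void Register<T>(Func<T> factory) where T : class
            {
                factories[typeof(T)] = () => factory();
            }

            public T Resolve<T>() where T : class
            {
                return (T)factories[typeof(T)]();
            }
        }

        // Hypothetical third-party types standing in for the reusable library.
        interface ILogger { void Write(string message); }
        static class ThirdPartyLoggers
        {
            public static ILogger GetInstance() { return new ConsoleLogger(); }
            private class ConsoleLogger : ILogger
            {
                public void Write(string message) { Console.WriteLine(message); }
            }
        }

        class Program
        {
            static void Main()
            {
                var container = new TinyContainer();
                // The library's getInstance becomes the registered factory, so client code
                // resolves through the container but creation stays inside the library.
                container.Register<ILogger>(ThirdPartyLoggers.GetInstance);

                var logger = container.Resolve<ILogger>();
                logger.Write("resolved via the container, created by the library factory");
            }
        }

    For Q2, the same idea suggests the library can expose plain constructors and interfaces and leave wiring and lifetime decisions to whatever container the client chooses, keeping factory methods only where construction logic genuinely needs to be hidden.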

    Read the article

  • Consuming "Event Tracing for Windows" events

    - by Paul Baker
    An answer to this question has led me to look into using "Event Tracing for Windows" for our tracing needs. I have come across NTrace, which seems to be a good way to produce ETW events from C# code (using the XP-compatible "classic provider" model). However, I am unable to find an easy way to consume these events - to see them in real-time and/or log them to a file. The only way I have found is that described in the NTrace documentation: using a tool which is only available as part of the Windows DDK. In the case of a complex problem in the field, we may need to ask the user to produce a file containing a trace. We can't ask users to download the DDK or carry out a number of complex operations in order to do this. Is there a straightforward, user-friendly way to log ETW events to a file? Also, is it possible for someone to consume ETW events on Windows Vista/7 if they are not running as administrator?
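
    One user-friendly option worth noting (a sketch, not the workflow from the NTrace documentation) is to have the application itself start an ETW file session with the built-in logman tool, so a customer only has to click a button and send back the resulting .etl file; no DDK download is needed. The session name, provider GUID and file path below are placeholders, and on Vista/7 the account generally needs to be an administrator or a member of the Performance Log Users group to control trace sessions:

        using System.Diagnostics;

        static class EtwCapture
        {
            // Placeholder: use the control GUID your classic provider registers.
            const string ProviderGuid = "{00000000-0000-0000-0000-000000000000}";

            static void RunLogman(string arguments)
            {
                var psi = new ProcessStartInfo("logman", arguments)
                {
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                using (var p = Process.Start(psi)) { p.WaitForExit(); }
            }

            // -ets creates and starts the session immediately and writes events to the .etl file.
            public static void StartTrace(string etlPath)
            {
                RunLogman("start MyAppTrace -p " + ProviderGuid + " -o \"" + etlPath + "\" -ets");
            }

            public static void StopTrace()
            {
                RunLogman("stop MyAppTrace -ets");
            }
        }

    The captured .etl file can then be inspected on a developer machine, for example by converting it with tracerpt.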

    Read the article

  • Element is already the child of another element.

    - by Erica
    I get the following error in my Silverlight application, but I can't figure out which control is causing it. When I debug, it doesn't break anywhere in my code; it just fails with this call stack containing only framework code. Is there any way to get more information about which part of a Silverlight app is the problem in a case like this?
    Message: Sys.InvalidOperationException: ManagedRuntimeError error #4004 in control 'Xaml1': System.InvalidOperationException: Element is already the child of another element.
    at MS.Internal.XcpImports.CheckHResult(UInt32 hr)
    at MS.Internal.XcpImports.Collection_AddValue[T](PresentationFrameworkCollection`1 collection, CValue value)
    at MS.Internal.XcpImports.Collection_AddDependencyObject[T](PresentationFrameworkCollection`1 collection, DependencyObject value)
    at System.Windows.PresentationFrameworkCollection`1.AddDependencyObject(DependencyObject value)
    at System.Windows.Controls.UIElementCollection.AddInternal(UIElement value)
    at System.Windows.PresentationFrameworkCollection`1.Add(T value)
    at System.Windows.Controls.AutoCompleteBox.OnApplyTemplate()
    at System.Windows.FrameworkElement.OnApplyTemplate(IntPtr nativeTarget)

    Read the article

  • Know more about Enqueue Deadlock Detection

    - by Liu Maclean
    A question came up in the ORACLE ALLSTAR group about how quickly Oracle notices an enqueue deadlock. With the default settings, deadlock detection on enqueue locks takes about 3 seconds, i.e. the process that eventually receives the ORA-00060 deadlock error has waited roughly 3s:

    SQL> select * from v$version;
    BANNER
    ----------------------------------------------------------------
    Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi
    PL/SQL Release 10.2.0.5.0 - Production
    CORE 10.2.0.5.0 Production
    TNS for Linux: Version 10.2.0.5.0 - Production
    NLSRTL Version 10.2.0.5.0 - Production

    SQL> select * from global_name;
    GLOBAL_NAME
    --------------------------------------------------------------------------------
    www.oracledatabase12g.com

    PROCESS A:
    set timing on;
    update maclean1 set t1=t1+1;

    PROCESS B:
    update maclean2 set t1=t1+1;

    PROCESS A:
    update maclean2 set t1=t1+1;

    PROCESS B:
    update maclean1 set t1=t1+1;

    After about 3s PROCESS A reports:
    ERROR at line 1:
    ORA-00060: deadlock detected while waiting for resource
    Elapsed: 00:00:03.02

    Process A detected the deadlock and raised ORA-00060 after roughly 3 seconds. Where does this 3s come from? It is tied to the following hidden parameter:

    SQL> col name for a30
    SQL> col value for a5
    SQL> col DESCRIB for a50
    SQL> set linesize 140 pagesize 1400
    SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
      2  FROM SYS.x$ksppi x, SYS.x$ksppcv y
      3  WHERE x.inst_id = USERENV ('Instance')
      4  AND y.inst_id = USERENV ('Instance')
      5  AND x.indx = y.indx
      6  AND x.ksppinm='_enqueue_deadlock_scan_secs';

    NAME                           VALUE DESCRIB
    ------------------------------ ----- --------------------------------------------------
    _enqueue_deadlock_scan_secs    0     deadlock scan interval

    SQL> alter system set "_enqueue_deadlock_scan_secs"=18 scope=spfile;
    System altered.
    Elapsed: 00:00:00.01

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area  851443712 bytes
    Fixed Size                  2100040 bytes
    Variable Size             738198712 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                6287360 bytes
    Database mounted.
    Database opened.

    PROCESS A:
    SQL> set timing on;
    SQL> update maclean1 set t1=t1+1;
    1 row updated.
    Elapsed: 00:00:00.06

    Process B:
    SQL> update maclean2 set t1=t1+1;
    1 row updated.
    SQL> update maclean1 set t1=t1+1;

    Process A:
    SQL> alter session set events '10704 trace name context forever,level 10:10046 trace name context forever,level 8';
    Session altered.
    SQL> update maclean2 set t1=t1+1;
    update maclean2 set t1=t1+1
    *
    ERROR at line 1:
    ORA-00060: deadlock detected while waiting for resource
    Elapsed: 00:00:18.05

    ksqcmi: TX,90011,4a9 mode=6 timeout=21474836
    WAIT #12: nam='enq: TX - row lock contention' ela= 2930070 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114759849120
    WAIT #12: nam='enq: TX - row lock contention' ela= 2930636 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114762779801
    WAIT #12: nam='enq: TX - row lock contention' ela= 2930439 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114765710430
    *** 2012-06-12 09:58:43.089
    WAIT #12: nam='enq: TX - row lock contention' ela= 2931698 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114768642192
    WAIT #12: nam='enq: TX - row lock contention' ela= 2930428 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114771572755
    WAIT #12: nam='enq: TX - row lock contention' ela= 2931408 name|mode=1415053318 usn<<16 | slot=589841 sequence=1193 obj#=56810 tim=1308114774504207
    DEADLOCK DETECTED ( ORA-00060 )
    [Transaction Deadlock]
    The following deadlock is not an ORACLE error. It is a deadlock due to user error in the design of an application or from issuing incorrect ad-hoc SQL.
    The following information may aid in determining the deadlock:

    This time Process A waited on 'enq: TX - row lock contention' and the ORA-00060 deadlock detected error came after 18s instead of 3s, matching the value given to the hidden parameter "_enqueue_deadlock_scan_secs" (whose default is 0).

    Next test:

    SQL> alter system set "_enqueue_deadlock_scan_secs"=4 scope=spfile;
    System altered.
    Elapsed: 00:00:00.01

    SQL> alter system set "_enqueue_deadlock_time_sec"=9 scope=spfile;
    System altered.
    Elapsed: 00:00:00.00

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area  851443712 bytes
    Fixed Size                  2100040 bytes
    Variable Size             738198712 bytes
    Database Buffers          104857600 bytes
    Redo Buffers                6287360 bytes
    Database mounted.
    Database opened.

    SQL> set linesize 140 pagesize 1400
    SQL> show parameter dead

    NAME                                 TYPE                             VALUE
    ------------------------------------ -------------------------------- ------------------------------
    _enqueue_deadlock_scan_secs          integer                          4
    _enqueue_deadlock_time_sec           integer                          9

    SQL> set timing on
    SQL> select * from maclean1 for update wait 8;
            T1
    ----------
            11
    Elapsed: 00:00:00.01

    PROCESS B:
    SQL> select * from maclean2 for update wait 8;
            T1
    ----------
             3
    SQL> select * from maclean1 for update wait 8;
    select * from maclean1 for update wait 8

    PROCESS A:
    SQL> select * from maclean2 for update wait 8;
    select * from maclean2 for update wait 8
    *
    ERROR at line 1:
    ORA-30006: resource busy; acquire with WAIT timeout expired
    Elapsed: 00:00:08.00

    Here select ... for update wait 8 set the enqueue request timeout to 8s. Although "_enqueue_deadlock_scan_secs"=4 (deadlock scan interval) suggests a deadlock scan at 4s, Process A never reported a deadlock: after the full 8s it raised "ORA-30006: resource busy; acquire with WAIT timeout expired" rather than ORA-00060, i.e. no deadlock detection was performed for Process A at all.

    The reason is the other hidden parameter "_enqueue_deadlock_time_sec" (requests with timeout <= this will not have deadlock detection): when the enqueue request timeout does not exceed "_enqueue_deadlock_time_sec", the server process skips deadlock detection for that request. The idea is that a request with a short timeout gives up quickly anyway (the default of _enqueue_deadlock_time_sec is 5, i.e. timeouts below 5s), so scanning for a deadlock is not worth the cost; only when the timeout exceeds "_enqueue_deadlock_time_sec" does Oracle run deadlock detection for the request.

    The following test confirms this:

    SQL> show parameter dead

    NAME                                 TYPE                             VALUE
    ------------------------------------ -------------------------------- ------------------------------
    _enqueue_deadlock_scan_secs          integer                          4
    _enqueue_deadlock_time_sec           integer                          9

    Process A:
    SQL> set timing on;
    SQL> select * from maclean1 for update wait 10;
            T1
    ----------
            11

    Process B:
    SQL> select * from maclean2 for update wait 10;
            T1
    ----------
             3
    SQL> select * from maclean1 for update wait 10;

    PROCESS A:
    SQL> select * from maclean2 for update wait 10;
    select * from maclean2 for update wait 10
    *
    ERROR at line 1:
    ORA-00060: deadlock detected while waiting for resource
    Elapsed: 00:00:06.02

    This time select ... for update wait 10 uses a 10s timeout, which is larger than _enqueue_deadlock_time_sec (9s), so Process A does run deadlock detection. Note that the deadlock was reported after about 6s rather than the 4s configured in _enqueue_deadlock_scan_secs: the effective scan interval appears to be _enqueue_deadlock_scan_secs rounded up to the next multiple of 3 seconds.

    To summarize enqueue deadlock detection:

    1. With the default settings a deadlock is detected after about 3s. The interval is controlled by the hidden parameter _enqueue_deadlock_scan_secs (deadlock scan interval), default 0, and the effective interval is _enqueue_deadlock_scan_secs rounded up to a multiple of 3: with _enqueue_deadlock_scan_secs=0 detection happens at 3s, with _enqueue_deadlock_scan_secs=4 at 6s, as shown above.
    2. Whether detection happens at all is governed by _enqueue_deadlock_time_sec (requests with timeout <= this will not have deadlock detection): if the enqueue request timeout does not exceed _enqueue_deadlock_time_sec (default 5), the server process performs no deadlock detection for that request; only when the request timeout is larger than _enqueue_deadlock_time_sec is detection done at the _enqueue_deadlock_scan_secs interval. The request timeout here is typically the one given in select ... for update wait [TIMEOUT].

    Note: neither hidden parameter exists in 10.2.0.1; _enqueue_deadlock_time_sec was introduced in patchset 10.2.0.3 and _enqueue_deadlock_scan_secs in patchset 10.2.0.5.

    In RAC, deadlock detection usually takes about 10s and is driven by _lm_dd_interval (dd time interval in seconds), a parameter that has existed since 8.0.6. Its default has changed over the releases: roughly 60s in 10g and 10s in 11g. The lower default in 11g is possible because the LMD process was reworked; in earlier releases LMD's deadlock detection could consume a noticeable amount of CPU in RAC, and in 11g Oracle optimized that code path to cut the CPU cost.

    To see how the 11g LMD process performs its deadlock detection, we can enable UTS tracing on it and watch a DD cycle:

    SQL> select * from v$version;
    BANNER
    --------------------------------------------------------------------------------
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production

    SQL> select * from global_name;
    GLOBAL_NAME
    --------------------------------------------------------------------------------
    www.oracledatabase12g.com

    SQL> alter system set "_lm_dd_interval"=20 scope=spfile;
    System altered.

    SQL> startup force;
    ORACLE instance started.
    Total System Global Area 1570009088 bytes
    Fixed Size                  2228704 bytes
    Variable Size            1325403680 bytes
    Database Buffers          234881024 bytes
    Redo Buffers                7495680 bytes
    Database mounted.
    Database opened.

    SQL> set linesize 140 pagesize 1400
    SQL> show parameter lm_dd

    NAME                                 TYPE                             VALUE
    ------------------------------------ -------------------------------- ------------------------------
    _lm_dd_interval                      integer                          20

    SQL> select count(*) from gv$instance;
      COUNT(*)
    ----------
             2

    instance 1:
    SQL> oradebug setorapid 12
    Oracle pid: 12, Unix process pid: 8608, image: [email protected] (LMD0)

    Enable UTS tracing on LMD0, the process that does global deadlock detection in RAC:

    SQL> oradebug event 10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high;
    Statement processed.
    Elapsed: 00:00:00.00

    SQL> update maclean1 set t1=t1+1;
    1 row updated.

    instance 2:
    SQL> update maclean2 set t1=t1+1;
    1 row updated.
    SQL> update maclean1 set t1=t1+1;

    Instance 1:
    SQL> update maclean2 set t1=t1+1;
    update maclean2 set t1=t1+1
    *
    ERROR at line 1:
    ORA-00060: deadlock detected while waiting for resource
    Elapsed: 00:00:20.51

    LMD0's UTS trace shows:

    2012-06-12 22:27:00.929284 : [kjmpbmsg:process][type 22][msg 0x7fa620ac85a8][from 1][seq 8148.0][len 192]
    2012-06-12 22:27:00.929346 : [kjmxmpm][type 22][seq 0.0][msg 0x7fa620ac85a8][from 1]
    *** 2012-06-12 22:27:00.929
    * kjddind: received DDIND msg with subtype x6
    * reqp->dd_master_inst_kjxmddi == 1
    * kjddind: dump sgh:
    2012-06-12 22:27:00.929346*: kjddind: req->timestamp [0.15], kjddt [0.13]
    2012-06-12 22:27:00.929346*: >> DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
    2012-06-12 22:27:00.929346*: lock [0x95023930,829], op = [mast]
    2012-06-12 22:27:00.929346*: reqp->timestamp [0.15], kjddt [0.13]
    2012-06-12 22:27:00.929346*: kjddind: updated local timestamp [0.15]
    * kjddind: case KJX_DD_REMOTE
    2012-06-12 22:27:00.929346*: ADD IO NODE WFG: 0 frame pointer
    2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929346*: POP: type=res, enqueue(0xffffffff.0xffffffff)=0xbbb9af40, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929346*: kjddopr[TX 0xe000c.0x32][ext 0x5,0x0]: blocking lock 0xbbb9a800, owner 2097154 of inst 2
    2012-06-12 22:27:00.929346*: PUSH: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929346*: PROCESS: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929346*: POP: type=txn, enqueue(0xffffffff.0xffffffff)=0xbbb9a800, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929346*: kjddopt: converting lock 0xbbce92f8 on 'TX' 0x80016.0x5d4,txid [2097154,34]of inst 2
    2012-06-12 22:27:00.929346*: PUSH: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929346*: PROCESS: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929346*: ADD NODE TO WFG: type=res, enqueue(0xffffffff.0xffffffff)=0xbbce92f8, block=KJUSEREX, snode=1
    2012-06-12 22:27:00.929855 : GSIPC:AMBUF: rcv buff 0x7fa620aa8cd8, pool rcvbuf, rqlen 1102
    2012-06-12 22:27:00.929878 : GSIPC:GPBMSG: new bmsg 0x7fa620aa8d48 mb 0x7fa620aa8cd8 msg 0x7fa620aa8d68 mlen 192 dest x100 flushsz -1
    2012-06-12 22:27:00.929878*: << DDmsg:KJX_DD_REMOTE,TS[0.15],Inst 2->1,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
    2012-06-12 22:27:00.929878*: lock [0xbbce92f8,287], op = [mast]
    2012-06-12 22:27:00.929878*: ADD IO NODE WFG: 0 frame pointer
    2012-06-12 22:27:00.929923 : [kjmpbmsg:compl][msg 0x7fa620ac8588][typ p][nmsgs 1][qtime 0][ptime 0]
    2012-06-12 22:27:00.929947 : GSIPC:PBAT: flush start. flag 0x79 end 0 inc 4.4
    2012-06-12 22:27:00.929963 : GSIPC:PBAT: send bmsg 0x7fa620aa8d48 blen 224 dest 1.0
    2012-06-12 22:27:00.929979 : GSIPC:SNDQ: enq msg 0x7fa620aa8d48, type 65521 seq 8325, inst 1, receiver 0, queued 1
    2012-06-12 22:27:00.929996 : GSIPC:BSEND: flushing sndq 0xb491dd28, id 0, dcx 0xbc517770, inst 1, rcvr 0 qlen 0 1
    2012-06-12 22:27:00.930014 : GSIPC:BSEND: no batch1 msg 0x7fa620aa8d48 type 65521 len 224 dest (1:0)
    2012-06-12 22:27:00.930088 : kjbsentscn[0x0.3f72dc][to 1]
    2012-06-12 22:27:00.930144 : GSIPC:SENDM: send msg 0x7fa620aa8d48 dest x10000 seq 8325 type 65521 tkts x1 mlen xe00110
    2012-06-12 22:27:00.930531 : GSIPC:KSXPCB: msg 0x7fa620aa8d48 status 30, type 65521, dest 1, rcvr 0
    WAIT #0: nam='ges remote message' ela= 1372 waittime=80 loop=0 p3=74 obj#=-1 tim=1339554420931640
    2012-06-12 22:27:00.931728 : GSIPC:RCVD: ksxp msg 0x7fa620af6490 sndr 1 seq 0.8149 type 65521 tkts 1
    2012-06-12 22:27:00.931746 : GSIPC:RCVD: watq msg 0x7fa620af6490 sndr 1, seq 8149, type 65521, tkts 1
    2012-06-12 22:27:00.931763 : GSIPC:RCVD: seq update (0.8148)->(0.8149) tp -15 fg 0x4 from 1 pbattr 0x0
    2012-06-12 22:27:00.931779 : GSIPC:TKT: collect msg 0x7fa620af6490 from 1 for rcvr 0, tickets 1
    2012-06-12 22:27:00.931794 : kjbrcvdscn[0x0.3f72dc][from 1][idx
    2012-06-12 22:27:00.931810 : kjbrcvdscn[no bscn
    * reqp->dd_master_inst_kjxmddi == 1
    * kjddind: dump sgh:
    NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
    BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
    BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
    NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
    2012-06-12 22:27:00.932058*: kjddind: req->timestamp [0.15], kjddt [0.15]
    2012-06-12 22:27:00.932058*: >> DDmsg:KJX_DD_VALIDATE,TS[0.15],Inst 1->2,ddxid[id1,id2,inst:2097153,31,1],ddlock[0x95023930,829],ddMasterInst 1
    2012-06-12 22:27:00.932058*: lock [(nil),0], op = [vald_dd]
    2012-06-12 22:27:00.932058*: kjddind: updated local timestamp [0.15]
    * kjddind: case KJX_DD_VALIDATE
    *** 2012-06-12 22:27:00.932
    * kjddvald called: kjxmddi stuff:
    * cont_lockp (nil)
    * dd_lockp 0x95023930
    * dd_inst 1
    * dd_master_inst 1
    * sgh graph:
    NXTIN (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
    BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
    BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
    NXTOUT (nil) 0 wq 0 cvtops x0 0x0.0x0(ext 0x0,0x0)[0000-0000-00000000] inst 1
    POP WFG NODE: lock=(nil)
    * kjddvald: dump the PRQ:
    BLOCKER 0xbbb9a800 5 wq 1 cvtops x28 TX 0xe000c.0x32(ext 0x5,0x0)[20000-0002-00000022] inst 2
    BLOCKED 0xbbce92f8 5 wq 2 cvtops x1 TX 0x80016.0x5d4(ext 0x2,0x0)[20000-0002-00000022] inst 2
    * kjddvald: KJDD_NXTONOD ->node_kjddsg.dinst_kjddnd =1
    * kjddvald: ... which is not my node, my subgraph is validated but the cycle is not complete
    Global blockers dump start:---------------------------------
    DUMP LOCAL BLOCKER/HOLDER: block level 5 res [0x80016][0x5d4],[TX][ext 0x2,0x0]

    The deadlock is found. As the trace shows, in this 11.2.0.3 RAC the LMD deadlock detection honoured the "_lm_dd_interval" setting and the deadlock was reported after roughly 20s. Because _lm_dd_interval defaulted to 60s in 10g, the processes involved in a global enqueue deadlock could wait close to a minute before ORA-00060 was raised; in 11g, with the optimized LMD code, the default was lowered so that global Enqueue Deadlock Detection normally completes in about 10s.
    In 11g RAC, LMD's deadlock detection is influenced by more hidden parameters than just "_lm_dd_interval"; they can be listed as follows:

    SQL> col name for a50
    SQL> col describ for a60
    SQL> col value for a20
    SQL> set linesize 140 pagesize 1400
    SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc describ
      2  FROM SYS.x$ksppi x, SYS.x$ksppcv y
      3  WHERE x.inst_id = USERENV ('Instance')
      4  AND y.inst_id = USERENV ('Instance')
      5  AND x.indx = y.indx
      6  AND x.ksppinm like '_lm_dd%';

    NAME                                               VALUE                DESCRIB
    -------------------------------------------------- -------------------- ------------------------------------------------------------
    _lm_dd_interval                                    20                   dd time interval in seconds
    _lm_dd_scan_interval                               5                    dd scan interval in seconds
    _lm_dd_search_cnt                                  3                    number of dd search per token get
    _lm_dd_max_search_time                             180                  max dd search time per token
    _lm_dd_maxdump                                     50                   max number of locks to be dumped during dd validation
    _lm_dd_ignore_nodd                                 FALSE                if TRUE nodeadlockwait/nodeadlockblock options are ignored

    6 rows selected.

    Read the article

  • LINQ to Entities, Include less

    - by Freddy
    Hi, if you are writing a LINQ to Entities expression for ClassA, where A has a relation to ClassB, like this: var temp = from p in myEntities.ClassA.Include("ClassB") where ... select p; you will get a set of ClassA entities with the reference to ClassB loaded. The thing in my case is that I don't really need to load ALL the ClassB references, just a few of them. But I don't want to loop through the list of ClassA entities and load them individually; I want my database operations to be fewer and bigger instead of reading small chunks here and there. Is it possible to put some kind of restriction on which references to include, or do you have to accept this all-or-nothing style?
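
    Include cannot be filtered in this version of the Entity Framework, so the usual workaround (a sketch only; the ClassBs navigation property and the IsActive filter are made-up placeholders) is to project the parent together with just the related rows you want, which still executes as a single query:

        // One round trip: each ClassA comes back with only the ClassB rows that match the filter.
        var query = from p in myEntities.ClassA
                    // original where-clause goes here
                    select new
                    {
                        A = p,
                        Bs = p.ClassBs.Where(b => b.IsActive)   // hypothetical restriction on ClassB
                    };

        foreach (var item in query)
        {
            // item.A is the ClassA entity; item.Bs holds only the selected ClassB rows.
        }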

    Read the article

  • How to catch all exceptions in Flex?

    - by Yaba
    When I run a Flex application in the debug Flash Player, I get an exception pop-up as soon as something unexpected happens. However, when a customer uses the application he does not use the debug Flash Player. In that case he does not get an exception pop-up, but the UI is not working. So for supportability reasons, I would like to catch any exception that can happen anywhere in the Flex UI and present an error message in a Flex internal popup. In Java I would just wrap the whole UI code in a try/catch block, but with MXML applications in Flex I do not know where I could perform such a general try/catch.

    Read the article

  • SQL Server 2005 sp_send_dbmail

    - by Jit
    Hi friends, when we use sp_send_dbmail to send email with an attachment, the attachment gets copied into a folder inside C:\Windows\Temp. As we have many emails to send every day, the temp folder grows rapidly. This is the case with SQL Server 2005. We noticed that with SQL Server 2008 we don't see these files under the temp folder. Is there any setting to turn the above behavior off? Does SQL Server 2008 store the files in some other folder rather than temp? Appreciate your help and time. Thanks.

    Read the article

  • Find largest rectangle containing all zeros in an N x N binary matrix

    - by Rajendra
    Given an N x N binary matrix (containing only 0's or 1's), how can we go about finding the largest rectangle containing all 0's? Example:

         I
         0 0 0 0 1 0
         0 0 1 0 0 1
    II-> 0 0 0 0 0 0
         1 0 0 0 0 0
         0 0 0 0 0 1 <-- IV
         0 0 1 0 0 0

    The above is a 6 x 6 binary matrix; the return value in this case will be Cell 1: (2, 1) and Cell 2: (4, 4). The resulting sub-matrix can be square or rectangular. The return value can also be the size of the largest sub-matrix of all 0's, in this example 3 x 4.
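
    One standard O(N^2) approach (a sketch, not taken from the question; it returns the area, and carrying the corner cells along is a straightforward extension) treats each row as the base of a histogram of consecutive zeros above it and finds the largest rectangle in each histogram with a stack:

        using System;
        using System.Collections.Generic;

        static class MaxZeroRectangle
        {
            // Area of the largest rectangle that contains only 0's.
            public static int Largest(int[,] m)
            {
                int rows = m.GetLength(0), cols = m.GetLength(1);
                var heights = new int[cols];           // consecutive zeros ending at the current row
                int best = 0;

                for (int r = 0; r < rows; r++)
                {
                    for (int c = 0; c < cols; c++)
                        heights[c] = m[r, c] == 0 ? heights[c] + 1 : 0;
                    best = Math.Max(best, LargestInHistogram(heights));
                }
                return best;
            }

            // Classic stack-based largest-rectangle-in-a-histogram.
            private static int LargestInHistogram(int[] h)
            {
                var stack = new Stack<int>();
                int best = 0;
                for (int i = 0; i <= h.Length; i++)
                {
                    int cur = i == h.Length ? 0 : h[i];
                    while (stack.Count > 0 && h[stack.Peek()] >= cur)
                    {
                        int height = h[stack.Pop()];
                        int width = stack.Count == 0 ? i : i - stack.Peek() - 1;
                        best = Math.Max(best, height * width);
                    }
                    stack.Push(i);
                }
                return best;
            }

            static void Main()
            {
                int[,] grid =
                {
                    { 0, 0, 0, 0, 1, 0 },
                    { 0, 0, 1, 0, 0, 1 },
                    { 0, 0, 0, 0, 0, 0 },
                    { 1, 0, 0, 0, 0, 0 },
                    { 0, 0, 0, 0, 0, 1 },
                    { 0, 0, 1, 0, 0, 0 }
                };
                Console.WriteLine(Largest(grid));   // prints 12, the 3 x 4 block of zeros
            }
        }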

    Read the article

  • Inspecting Lucene.NET index with Luke want to replicate NHibernate.Search view

    - by Tim Peel
    Hi, I am trying to put together an index using terms, which I specify as a comma-separated list. I want to replicate the display in Luke as seen here: http://ayende.com/Blog/archive/2009/05/03/nhibernate-search-again.aspx But my index just shows a single field whose value is the comma-separated list. For example: Tags term,anotherterm When I search my index, it will return results if I search with "term" but will not return anything if I search with "anotherterm". I thought the indexing process would break the comma-separated list apart into separate values, but this does not seem to be the case. Anyone got any ideas? Thanks
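
    One fix to consider (a sketch assuming the Lucene.Net Field API with Field.Index.NOT_ANALYZED; older builds call the same option UN_TOKENIZED) is to split the comma-separated list yourself and add one Tags field per term. Lucene treats repeated fields with the same name as a multi-valued field, so Luke then shows the individual terms and a search for Tags:anotherterm matches:

        using Lucene.Net.Documents;

        static class TagIndexing
        {
            // tags is the comma-separated list, e.g. "term,anotherterm".
            public static Document BuildDocument(string tags)
            {
                var doc = new Document();
                foreach (var tag in tags.Split(','))
                {
                    // One Field instance per value instead of one field holding the whole list.
                    doc.Add(new Field("Tags", tag.Trim(), Field.Store.YES, Field.Index.NOT_ANALYZED));
                }
                return doc;
            }
        }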

    Read the article

  • Flash CS4 <b> tag with htmlText

    - by Hanpan
    Wow, this one is really weird. I have the following setup: two text fields on the stage with Arial normal and Arial bold, both embedded. I then have another text field which I am setting like so: tb.htmlText = "Test <b>Test</b>"; For some reason, the bold text is not displaying as bold, but at regular weight. I have tried embedding the fonts in the library, using the [Embed] meta tag, and even resorted to using CSS to force the fontFamily. Weirdly, I can use Font.enumerateFonts and see that both fonts are embedded, but the text field refuses to use the bold version inside the <b> tags. I've been told this is a problem with Flash CS4 on a Mac and that it will work on a PC. I refuse to believe this is the case, however. Surely Adobe would have fixed this by now? Any help would be appreciated.

    Read the article

  • How to (pre)start xamlx workflow service

    - by rwwilden
    Related to this question. I have a xamlx workflow service that loads part of its definition from a database when it runs (using ActivityXamlServices.Load). Reason for this is that I need versioning, see the related question. I'll use WCF routing to direct calls to the right service. The part that I load dynamically contains a Receive activity. However, this activity is 'invisible' as long as the workflow doesn't start because the part of the workflow I load from the database is only loaded when the workflow starts. So from the outside it appears as if there is no Receive activity in the workflow. Apart from not being able to generate a contract for the workflow service, I can't call the service either. My first attempt was to do a soap call with the right contract on the workflow service. However, the runtime doesn't automagically activate my workflow in that case. So the question is, how do I start a workflow that is hosted inside IIS? Regards, Ronald

    Read the article

  • Creating a Wiki site in SharePoint 2010

    - by Ben
    I have a SharePoint 2010 site set up and would like to add a Wiki site to it. Normally, to add a subsite to a parent site you would click "Site Actions" - "New Site", choose the site type (e.g. Team Site), and it would work fine. In the case of adding a Wiki site, I choose "Enterprise Wiki" and get "Error: An unexpected error has occurred. Correlation Id:...". So I took the Correlation Id and checked the ULS files to see what the error is, and the first bad sign seems to be: SharePoint Foundation Feature Infrastructure "The element of type 'ContentTypeBinding' for feature 'EnterpriseWiki' (id: 76d688ad-c16e-4cec-9b71-7b7f0d79b9cd) threw an exception during activation: Specified argument was out of the range of valid values. Parameter name: Content type not found (Id: '0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39004C1F8B46085B4D22B1CDC3DE08CFFB9C')." There are more errors in the log, but this seems to be the first one. Any ideas on what's going on? Thanks

    Read the article

  • Zend - combobox value depending on another combobox value

    - by Sorin Adrian Carbunaru
    Is there a way in Zend Framework to fill a combobox with values depending on the value chosen in a previous combobox, but on the same page? In my case I have a combobox for domain and one for specialization. If I choose Informatics in the first combobox (domain), I want to fill the second one with a single specialization: "Informatics". But if I choose Math in the first, I want to fill the second one with two specializations: "Mathematics" and "Mathematics & Informatics". Thank you! Sorin

    Read the article

  • PostSharp OnMethodBoundaryAspect

    - by José F. Romaniello
    I'm working on an aspect with PostSharp 1.5 and OnMethodBoundaryAspect. I want my aspect to have the following behavior by default: 1. If the attribute is used at class level, the aspect is applied only on PUBLIC methods. 2. The user of the aspect can put the aspect on a private or protected method. If I use this [MulticastAttributeUsage(MulticastTargets.Method, TargetMemberAttributes = MulticastAttributes.Public)] then point 1 works, but case 2 doesn't even build because it is incompatible. Then I tried to use AttributeTargetTypeAttributes = MulticastAttributes.Public; in the constructor of the aspect, but that doesn't work. Thank you very much in advance.

    Read the article

  • Batch select with SQLAlchemy

    - by muckabout
    I have a large set of values V, some of which are likely to exist in a table T. I would like to insert into the table those which are not yet inserted. So far I have the code: for value in values: s = self.conn.execute(mytable.__table__.select(mytable.value == value)).first() if not s: to_insert.append(value) I feel like this is running slower than it should. I have a few related questions: Is there a way to construct a select statement such that you provide a list (in this case, 'values') to which sqlalchemy responds with records which match that list? Is this code overly expensive in constructing select objects? Is there a way to construct a single select statement, then parameterize at execution time?

    Read the article

  • SFML SetFramerateLimit Not Limiting Frame Rate

    - by Person
    Compiler: Visual C++ OS: Windows 7 Enterprise For some reason, Window::SetFramerateLimit isn't limiting the frame rate in the app I'm working on, but it works fine for others. The framerate is capped to 60, but mine jumps around at 100-99 and then goes down to 50 sometimes. It actually causes serious issues. For example, if I create many objects on screen, I'll see a heavy performance hit, whereas others report no change. Any ideas regarding why this is happening? If you need more information, I'd be happy to oblige. Thanks. P.S. I have strong reasons to believe that it is not simply a case of "their hardware is just more powerful than yours."

    Read the article

  • WCF REST vs. ADO.NET Data Services

    - by ray247
    Hi there, could someone compare and contrast WCF REST services and ADO.NET Data Services? What is the difference, and when should you use which? Thanks, Ray. Edit: Thanks to the first answer; just to give a bit of background on what I'm looking to do: I have a web app I plan to put in the cloud (someday). The DAL is built with the ADO.NET Entity Framework, and I need to figure out which web service data access technology would best fit my case.

    Read the article

  • JDBC: What is the correct JDBC URL to connect to a RAC database

    - by Vinnie
    Hi, we were connecting to Oracle from our code with a simple (custom) JDBC connector class. This class reads the connection properties from a resource file and tries to make a thin connection to Oracle. However, the database has recently moved to a RAC and now the application is unable to connect to the DB. Here is the TNSPING output: Used LDAP adapter to resolve the alias Attempting to contact (DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON) (ADDRESS=(PROTOCOL=TCP)(HOST=tst-db1.myco.com)(PORT=1604)) (ADDRESS=(PROTOCOL=TCP)(HOST=tst-db2.myco.com)(PORT=1604)))(CONNECT_DATA=(SERVICE_NAME=mydb1.myco.com)(SERVER=DEDICATED))) OK (80 msec) What would be the correct URL to specify in this case? Regards, Ashish
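
    For the thin driver, one option that generally works for RAC is to put the full connect descriptor after the @ instead of host:port:SID. The descriptor below is simply assembled from the TNSPING output above, so the hosts, port and service name should be verified against your environment (in the properties file it would be a single line):

        jdbc:oracle:thin:@(DESCRIPTION=
          (ADDRESS_LIST=(LOAD_BALANCE=OFF)(FAILOVER=ON)
            (ADDRESS=(PROTOCOL=TCP)(HOST=tst-db1.myco.com)(PORT=1604))
            (ADDRESS=(PROTOCOL=TCP)(HOST=tst-db2.myco.com)(PORT=1604)))
          (CONNECT_DATA=(SERVICE_NAME=mydb1.myco.com)(SERVER=DEDICATED)))

    Alternatively, jdbc:oracle:thin:@//tst-db1.myco.com:1604/mydb1.myco.com connects via the service name to a single node, but it loses the failover address list.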

    Read the article

  • What is the purpose of both target API and minSDK

    - by Scott Ferguson
    Can somebody explain to me the difference between the project build target and the minimum SDK? I want my app to run on Donut devices, and the APK I built with a target of 7 worked just fine. When I set an explicit minimum SDK of 4 (1.6) in the Android manifest, the compiler complained that the target exceeded the minimum. I reset the target to 4 only to see what would happen, and now I've got compiler errors. An example is the START_NOT_STICKY constant in android.app.Service. It doesn't exist in API level 4, but does exist in API level 7. This is also the case with Service.onStartCommand(). In API level 7 you need to explicitly override this method, whereas in API level 4 you don't. So why does the app work on 1.6 despite all this? How could 1.6 know how to use START_NOT_STICKY when the associated API level doesn't know about it?
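
    For reference, the usual arrangement (hedged here, since project setups differ) is to build against the highest API you use while declaring the lowest one you support in the manifest, e.g. <uses-sdk android:minSdkVersion="4" android:targetSdkVersion="7" />: the build target only controls which android.jar the compiler sees, while minSdkVersion is what devices enforce at install time. The 1.6 behaviour is then explainable: START_NOT_STICKY is a compile-time constant that gets inlined into your classes, and onStartCommand() is simply never called on pre-2.0 platforms (the framework calls onStart() instead), so the extra override is ignored rather than causing a failure.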

    Read the article

  • Difference between Thread.Sleep(0) and Thread.Yield()

    - by Xose Lluis
    As Java has had sleep and yield for a long time, I've found answers for that platform, but not for .NET. .NET 4 includes the new Thread.Yield() static method. Previously the common way to hand the CPU over to another thread was Thread.Sleep(0). Apart from Thread.Yield() returning a boolean, are there other differences in performance or OS internals? For example, I'm not sure whether Thread.Sleep(0) checks if another thread is ready to run before changing the current thread to the waiting state... if that's not the case, then when no other threads are ready, Thread.Sleep(0) would seem rather worse than Thread.Yield().
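
    As a practical illustration (a sketch, not a benchmark), the two are often combined in spin-wait loops: Thread.Yield() only offers the remainder of the time slice to a thread that is ready to run on the current processor and reports via its return value whether such a thread existed, while Thread.Sleep(0) gives up the slice only to ready threads of equal priority and returns immediately if there are none, so a fallback to Sleep(1) is what actually blocks:

        using System;
        using System.Threading;

        static class Spin
        {
            // Cooperative wait: prefer yielding to a ready thread on this core,
            // then fall back to Sleep so the loop does not burn a full core.
            public static void WaitUntil(Func<bool> condition)
            {
                int spins = 0;
                while (!condition())
                {
                    if (!Thread.Yield())                 // false: nothing was ready on this processor
                    {
                        // Sleep(0) cedes only to equal-priority ready threads;
                        // Sleep(1) really blocks and lets lower-priority work run.
                        Thread.Sleep(++spins < 100 ? 0 : 1);
                    }
                }
            }
        }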

    Read the article

  • Working with XML & XSL

    - by gsvirdi
    I'm planning to create a news item page which uses XML as its backend, and the display should look like:

    Date: 08/Mar/2010
    ------------------------------
    Title     | News
    ------------------------------
    News 4    | Some news
    News 3    | Some news
    News 2    | Some news
    News 1    | Some news
    ------------------------------
    Date: 07/Mar/2010
    ------------------------------
    Title     | News
    ------------------------------
    News 5    | Some news
    News 4    | Some news
    News 3    | Some news
    News 2    | Some news
    News 1    | Some news

    The display should be sorted on date (descending), and within a date the news items should be sorted on time (descending). Today's news items should be at the top, with titles sorted descending (time-wise), followed by the previous day's items. I'm not able to come up with the XML structure that should be used in this case. Moreover, I can't figure out how I should check for "today's date" in the XSL "if" statement. Could I get some code sample to understand this kind of logic?

    Read the article

  • Upload images problem: IO error. (Error #2038)

    - by ile
    I'm using a script which uploads files to the server via a Flash component. Sometimes, very rarely, when trying to upload images via Firefox I get the following error: IO error #2038. Searching the net, I couldn't find the reason why it happens to me. But I found a workaround for my case: I open IE6, do the same thing there (photos are always uploaded without a problem), and then, when I try again in Firefox, the problem disappears. If someone has had similar problems, maybe this could help, or maybe this hint could help someone discover the cause of the problem :)

    Read the article

  • How to develop screen resolution independent smart device application in C#?

    - by Shailesh Jaiswal
    I am developing a smart device application in C#. I am developing this application for a 240*320 screen resolution. I want to make this application screen-resolution independent so that it can run on different mobile devices with different screen resolutions. Currently I am testing my application on different emulators for the Pocket PC 2003, Windows Mobile 6 Standard SDK and Windows Mobile 6 Professional SDK platforms. When I run the application on an emulator with a 240*320 screen resolution or less, it works well (it only shows horizontal and vertical scroll bars when the resolution is less than 240*320). If I run my application on an emulator with more than 240*320 screen resolution, its user interface gets badly distorted. How do I make the smart device application screen-resolution independent? Can you provide the code or a link through which I can resolve the above issue? Is there any setting for making the application screen-resolution independent?
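
    One direction to look at (a sketch only, assuming .NET Compact Framework 2.0 or later, where Dock and Anchor are supported) is to stop relying on fixed pixel positions designed for 240*320 and let the layout engine stretch the controls to whatever client area the device has:

        using System.Windows.Forms;

        public class MainForm : Form
        {
            public MainForm()
            {
                var header = new Label { Text = "Items", Dock = DockStyle.Top, Height = 20 };
                var ok = new Button { Text = "OK", Dock = DockStyle.Bottom, Height = 28 };
                var list = new ListView { Dock = DockStyle.Fill };

                // Add the Fill control first so the Top/Bottom strips keep their height
                // and the list takes whatever space is left on QVGA, VGA or square screens.
                Controls.Add(list);
                Controls.Add(header);
                Controls.Add(ok);
            }
        }

    Designer-placed controls can get a similar effect by setting their Anchor property (for example AnchorStyles.Bottom | AnchorStyles.Right for a button that should stay in the corner), and the form's AutoScaleMode can help with DPI differences between devices.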

    Read the article

  • Debugging with Visual Studio 2010 and VB.NET: Immediate window fails due to protection level

    - by marco.ragogna
    It happens quite frequently, several times per day, that with Visual Studio 2010, during debugging, when I use Immediate window commands like: ? NamedVariable I receive the following error: 'NamedVariable' is not declared. It may be inaccessible due to its protection level. In this case other debug features also seem broken, although I can still set breakpoints, step into, step over, etc. The workaround is to stop debugging, clean and rebuild the project, and retry. I am developing a VB.NET Windows Forms application, but it has happened with VB.NET WPF projects too. I never had this behavior with VS 2008. Is this a known bug, or could it be a problem with my environment/installation? Do you have any idea how to solve this little but annoying issue?

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today's market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence in the BSS layer, product co-creation, and increasing audit and security concerns for service providers. The document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps to launch next generation products rapidly in the context of the telecommunications industry.

    1.0 Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe and effecting changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is in the area of product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards.

    The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0 An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

    In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. It is a roadblock in the journey towards becoming an agile organization.
    Figure 2: Complexity in Product Management

    'Network Technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e., from siloed networks to a converged network. Consequently, the product solution is different.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0 Transformation of Next Generation Product Management

    Companies must focus on their Product Management Transformation Journey in the areas of:

    ·       Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    ·       Management of the Intellectual Property (IP) on the product concept and partnership in the design of discrete components to integrate into the system
    ·       Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    ·       Management of effective operational separation to comply with regulatory bodies
    ·       Reuse of existing designs and addition of relevant features such as value-added services to enable effective product bundling

    Figure 4: Next generation needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

    4.0 PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product attributes and data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    ·       Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules
    ·       PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    ·       They maintain relationships/links between the different elements of the entire product definition
    ·       Telco PLM packages specialize in next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)
    ·       They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional
    ·       The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM. They are interoperable and support integration frameworks such as subscription and notification.
    ·       Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Various Architectures/Approaches for Product Mastering using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Master) Approach

    This approach is implemented across verticals such as aerospace and automotive.
    It focuses on a physically centralized product master on which other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, the billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

    Architectural changes must be made in the existing business landscape of applications to create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the centrally managed and published rules.

    Complete product configuration validation will be done in the enterprise/central product catalogue, and the final configuration will be sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) is centrally created and maintained, while the advanced product definition (product bundling, promotions, offers and discount plans) is created in the respective downstream OSS systems.

    For example, basic product definitions such as attributes, product hierarchy and basic price plans are created and maintained in the enterprise/central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition and master the advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

    5.0 Customer Scenario - Before EPC Deployment

    A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering Broadband and VoIP services to customers. The company wanted to reuse a majority of the Broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
    The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    ·       Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    ·       Product managers spent a lot of time performing tasks involving duplication or re-keying of data. Manual effort caused errors, cost and time over-runs.
    ·       There was no effective product and price data governance mechanism. Price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue.
    ·       Product data had re-usability issues and was not in a structured format. This resulted in uncontrolled product portfolio creation and product management issues.
    ·       The lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch.
    ·       Designers were constrained by the existing legacy product management solutions in modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross selling.

    5.1 Customer Scenario - After EPC Deployment

    Figure 9: SOA-based end-to-end EPC Solution

    The company deployed a PLM-based Enterprise Product Catalogue solution to launch the quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchies for the new offerings were created using the entities defined in the Enterprise Product Model. Supplier product catalogue data, such as routers and set-top boxes, was loaded onto the new solution through SOA-based web services. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled by the transformation layer that is part of the solution.

    6.0 How PLM enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based Product Catalogue solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of Product Management for next generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and the increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, the flexibility to cross-sell and up-sell, the introduction of new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product Management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.

    References:

    1. Preston G. Smith and Donald G. Reinertsen, "Developing Products in Half the Time", Van Nostrand Reinhold.
    2. John G. Innes, "Achieving Successful Product Change", Pitman Publishing.
    3. D. T. Pham and R. M. Setchi (16 January 2001), "Authoring environment for documentation development", University of Wales Cardiff, U.K., Proceedings of the Institution of Mechanical Engineers, Vol. 215, Part B.
    4. Oracle Product Hub for Communications: http://www.oracle.com/us/products/applications/master-data-management/product-hub-082059.html

    Read the article
