Search Results

Search found 25005 results on 1001 pages for 'sequential number'.


  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means that each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (copying the packet bytes, processing an interrupt, etc.).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many: announcing the arrival or departure of a member; updating partition assignment maps across the cluster; creating or destroying a NamedCache; invalidating a cache entry from a large number of client-side near caches; distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors); and invoking clear() on a NamedCache.

    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc.). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc.), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
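    A quick back-of-the-envelope check of the numbers above (a sketch only; the ~70,000 packets-per-second figure is the article's own assumption, and the model considers nothing but message fan-out and NIC sharing):

        def cluster_wide_msgs_per_sec(pps_per_nic, members, members_per_machine=1):
            # Upper bound on cluster-wide messages one member can send per second
            # when every such message must be unicast to every other member.
            fanout = members - 1                           # packets per cluster-wide message
            nic_share = pps_per_nic / members_per_machine  # NIC is shared by co-located members
            return nic_share / fanout

        print(cluster_wide_msgs_per_sec(70_000, 500))      # ~140
        print(cluster_wide_msgs_per_sec(70_000, 500, 10))  # ~14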

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, Multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied.

    Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At that point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors and most applications really need to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program.

    At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!·n!) possible "interleavings" of those instructions. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it also results in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads.

    So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs but still has performance issues if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.
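    To put a number on that growth: interleaving two threads of n instructions means choosing which n of the 2n execution slots belong to the first thread, so the count is the central binomial coefficient. A small Python check (my own illustration, not from the original editorial):

        from math import comb

        def interleavings(n: int) -> int:
            # choose which n of the 2n instruction slots belong to thread 1
            return comb(2 * n, n)

        for n in (2, 5, 10, 20):
            print(n, interleavings(n))
        # 2 -> 6, 5 -> 252, 10 -> 184756, 20 -> 137846528820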

    Read the article

  • I thought the new AUTO_SAMPLE_SIZE in Oracle Database 11g looked at all the rows in a table so why do I see a very small sample size on some tables?

    - by Maria Colgan
    I recently got asked this question and thought it was worth a quick blog post to explain in a little more detail what is going on with the new AUTO_SAMPLE_SIZE in Oracle Database 11g and what you should expect to see in the dictionary views.

    Let's take the SH.CUSTOMERS table as an example. There are 55,500 rows in the SH.CUSTOMERS table. If we gather statistics on SH.CUSTOMERS using the new AUTO_SAMPLE_SIZE, but without collecting histograms, we can check what sample size was used by looking in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views. The sample size shown in USER_TABLES is 55,500 rows, or the entire table, as expected. In USER_TAB_COL_STATISTICS most columns show 55,500 rows as the sample size, except for four columns (CUST_SRC_ID, CUST_EFF_TO, CUST_MARITAL_STATUS, CUST_INCOME_LEVEL). The CUST_SRC_ID and CUST_EFF_TO columns have no sample size listed because there are only NULL values in these columns and the statistics gathering procedure skips NULL values. The CUST_MARITAL_STATUS (38,072) and CUST_INCOME_LEVEL (55,459) columns show less than 55,500 rows as their sample size because of the presence of NULL values in these columns. In the SH.CUSTOMERS table 17,428 rows have NULL as the value of the CUST_MARITAL_STATUS column (17,428 + 38,072 = 55,500), while 41 rows have a NULL value in the CUST_INCOME_LEVEL column (41 + 55,459 = 55,500). So we can confirm that the new AUTO_SAMPLE_SIZE algorithm uses all non-NULL values when gathering basic table and column level statistics.

    Now that we have a clear understanding of what sample size to expect, let's include histogram creation as part of the statistics gathering. Again we can look in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views to find the sample size used. The sample size seen in USER_TABLES is 55,500 rows, and the column statistics are the same as in the previous case except for the columns CUST_POSTAL_CODE and CUST_CITY_ID. You will also notice that these columns now have histograms created on them. The sample size shown for these columns is not the sample size used to gather the basic column statistics; AUTO_SAMPLE_SIZE still uses all of the rows in the table, minus the NULLs, for the basic column statistics (55,500 rows in this case). The size shown is the sample size used to create the histogram on the column. When we create a histogram we try to build it on a sample that has approximately 5,500 non-null values for the column. Typically all of the histograms required for a table are built from the same sample. In our example the histograms created on CUST_POSTAL_CODE and CUST_CITY_ID were built on a single sample of ~5,500 (5,450 rows), as these columns contain only non-null values. However, if one or more of the columns that requires a histogram has null values, then the sample size may be increased in order to achieve a sample of 5,500 non-null values for those columns. In addition, if the difference between the number of nulls in the columns varies greatly, we may create multiple samples: one for the columns that have a low number of null values and one for the columns with a high number of null values. This scheme enables us to get close to 5,500 non-null values for each column. +Maria Colgan

    Read the article

  • Oracle Database 12c new feature: Adaptive Execution Plans

    - by Liu Maclean
    Oracle Database 12c Release 1 introduces Adaptive Execution Plans, a new optimizer feature that lets a plan adjust itself at run time based on the statistics actually observed while the statement executes. When the optimizer's estimates turn out to be badly wrong, an adaptive plan can switch to a better-suited subplan instead of finishing the execution with a poor plan; this addresses the classic problem that missing or stale statistics only reveal themselves once the statement is already running. During the first execution, statistics collectors inserted into the plan buffer a portion of the rows flowing through it; once enough rows have been seen to make the decision, the optimizer picks the final subplan, the collectors are switched off, and subsequent executions use the chosen plan directly.

    Adaptive Execution Plans consist of two mechanisms:

    Dynamic Plans: the optimizer compiles a default plan together with alternative subplans and defers the final choice between them until run time, using the row counts gathered by the statistics collectors during the first execution.

    Reoptimization: unlike a dynamic plan, reoptimization does not change the plan in mid-execution; it uses the statistics gathered during one execution to build a better plan for the executions that follow.

    The parameter OPTIMIZER_ADAPTIVE_REPORTING_ONLY controls report-only mode. When it is set to TRUE, the adaptive machinery only computes and reports what it would have done; the statement itself runs with the default plan.

    Dynamic Plans

    The plan that is ultimately executed is called the final plan, while the plan based purely on the optimizer's initial estimates is the default plan. If the final plan and the default plan turn out to be the same, the adaptive mechanism adds essentially no overhead. A subplan is the portion of the plan that can be swapped at run time, for example the join method and the corresponding access path of one of the tables. While the statement runs, the statistics collector buffers the rows feeding the adaptive portion of the plan. If the row count stays below the inflection point, the default subplan is kept; if it exceeds it, the optimizer switches to the alternative subplan, the buffered rows are passed on, and the collector stops collecting. Once the final plan has been resolved, later executions of the cursor use it directly and the plan no longer changes. The V$SQL column IS_RESOLVED_DYNAMIC_PLAN shows whether the plan of a cursor has been resolved to its final plan.

    To see how a dynamic plan interacts with SQL plan directives, first drop any existing directives:

    declare
      cursor PLAN_DIRECTIVE_IDS is select directive_id from DBA_SQL_PLAN_DIRECTIVES;
    begin
      for z in PLAN_DIRECTIVE_IDS loop
        DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(z.directive_id);
      end loop;
    end;
    /

    explain plan for
      select /*MALCEAN*/ product_name
        from oe.order_items o, oe.product_information p
       where o.unit_price=15 and quantity>1 and p.product_id=o.product_id;

    select * from table(dbms_xplan.display());

    Plan hash value: 1255158658                                               www.askmaclean.com
    -------------------------------------------------------------------------------------------------------
    | Id | Operation                     | Name                   | Rows | Bytes | Cost (%CPU)| Time     |
    -------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT              |                        |    4 |   128 |    7   (0) | 00:00:01 |
    |  1 |  NESTED LOOPS                 |                        |      |       |            |          |
    |  2 |   NESTED LOOPS                |                        |    4 |   128 |    7   (0) | 00:00:01 |
    |* 3 |    TABLE ACCESS FULL          | ORDER_ITEMS            |    4 |    48 |    3   (0) | 00:00:01 |
    |* 4 |    INDEX UNIQUE SCAN          | PRODUCT_INFORMATION_PK |    1 |       |    0   (0) | 00:00:01 |
    |  5 |   TABLE ACCESS BY INDEX ROWID | PRODUCT_INFORMATION    |    1 |    20 |    1   (0) | 00:00:01 |
    -------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter("O"."UNIT_PRICE"=15 AND "QUANTITY">1)
       4 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")

    alter session set events '10053 trace name context forever,level 1';
    OR
    alter session set events 'trace[SQL_Plan_Directive] disk highest';

    select /*MALCEAN*/ product_name
      from oe.order_items o, oe.product_information p
     where o.unit_price=15 and quantity>1 and p.product_id=o.product_id;

    ---------------------------------------------------------------+-----------------------------------+
    | Id | Operation                      | Name                   | Rows | Bytes | Cost | Time       |
    ---------------------------------------------------------------+-----------------------------------+
    |  0 | SELECT STATEMENT               |                        |      |       |    7 |            |
    |  1 |  HASH JOIN                     |                        |    4 |   128 |    7 | 00:00:01   |
    |  2 |   NESTED LOOPS                 |                        |      |       |      |            |
    |  3 |    NESTED LOOPS                |                        |    4 |   128 |    7 | 00:00:01   |
    |  4 |     STATISTICS COLLECTOR       |                        |      |       |      |            |
    |  5 |      TABLE ACCESS FULL         | ORDER_ITEMS            |    4 |    48 |    3 | 00:00:01   |
    |  6 |     INDEX UNIQUE SCAN          | PRODUCT_INFORMATION_PK |    1 |       |    0 |            |
    |  7 |    TABLE ACCESS BY INDEX ROWID | PRODUCT_INFORMATION    |    1 |    20 |    1 | 00:00:01   |
    |  8 |   TABLE ACCESS FULL            | PRODUCT_INFORMATION    |    1 |    20 |    1 | 00:00:01   |
    ---------------------------------------------------------------+-----------------------------------+

    Predicate Information:
    ----------------------
    1 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")
    5 - filter(("O"."UNIT_PRICE"=15 AND "QUANTITY">1))
    6 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")

    The SQL_Plan_Directive trace shows the directives being generated:

    =====================================
    SPD: BEGIN context at statement level
    =====================================
    Stmt: ******* UNPARSED QUERY IS *******
    SELECT /*+ OPT_ESTIMATE (@"SEL$1" JOIN ("P"@"SEL$1" "O"@"SEL$1") ROWS=13.000000 )
               OPT_ESTIMATE (@"SEL$1" TABLE "O"@"SEL$1" ROWS=13.000000 ) */
           "P"."PRODUCT_NAME" "PRODUCT_NAME"
      FROM "OE"."ORDER_ITEMS" "O","OE"."PRODUCT_INFORMATION" "P"
     WHERE "O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1 AND "P"."PRODUCT_ID"="O"."PRODUCT_ID"
    Objects referenced in the statement
      PRODUCT_INFORMATION[P] 92194, type = 1
      ORDER_ITEMS[O] 92197, type = 1
    Objects in the hash table
      Hash table Object 92197, type = 1, ownerid = 6573730143572393221: No Dynamic Sampling Directives for the object
      Hash table Object 92194, type = 1, ownerid = 17822962561575639002: No Dynamic Sampling Directives for the object
    Return code in qosdInitDirCtx: ENBLD
    ===================================
    SPD: END context at statement level
    ===================================
    =======================================
    SPD: BEGIN context at query block level
    =======================================
    Query Block SEL$1 (#0)
    Return code in qosdSetupDirCtx4QB: NOCTX
    =====================================
    SPD: END context at query block level
    =====================================
    SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
    SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267
    SPD: Inserted felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES
    SPD: qosdCreateFindingSingTab retCode = CREATED, fid = 2896834833840853267
    SPD: qosdCreateDirCmp retCode = CREATED, fid = 2896834833840853267
    SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN
    SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER
    SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN
    ... (the INDEX_SCAN / INDEX_FILTER / JOIN lines repeat for every access path that is costed) ...
    SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 5618517328604016300
    SPD: Modified felem, fid=5618517328604016300, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO
    SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 1142802697078608149
    SPD: Modified felem, fid=1142802697078608149, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO
    SPD: Generating finding id: type = 1, reason = 2, objcnt = 2, obItr = 0, objid = 92194, objtyp = 1, vecsize = 0, obItr = 1, objid = 92197, objtyp = 1, vecsize = 0, fid = 1437680122701058051
    SPD: Modified felem, fid=1437680122701058051, ftype = 1, freason = 2, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO

    With reporting mode enabled, the report format of DBMS_XPLAN shows the adaptive plan:

    select * from table(dbms_xplan.display_cursor(format=>'report'));

    Adaptive plan:
    -------------
    This cursor has an adaptive plan, but adaptive plans are enabled for reporting mode only.
    The plan that would be executed if adaptive plans were enabled is displayed below.

    ------------------------------------------------------------------------------------------
    | Id  | Operation          | Name                | Rows  | Bytes | Cost (%CPU)| Time     |
    ------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT   |                     |       |       |     7 (100)|          |
    |*  1 |  HASH JOIN         |                     |     4 |   128 |     7   (0)| 00:00:01 |
    |*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |     4 |    48 |     3   (0)| 00:00:01 |
    |   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |     1 |    20 |     1   (0)| 00:00:01 |
    ------------------------------------------------------------------------------------------

    SQL> select SQL_ID,IS_RESOLVED_DYNAMIC_PLAN,sql_text from v$SQL
      2   WHERE SQL_TEXT like '%MALCEAN%' and sql_text not like '%like%';

    SQL_ID        IS SQL_TEXT
    ------------- -- --------------------------------------------------------------------------------
    6ydj1bn1bng17 Y  select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p
                     where o.unit_price=15 and quantity>1 and p.product_id=o.product_id

    Note that EXPLAIN PLAN FOR shows only the default plan; it is at execution time that the optimizer resolves the final plan, and V$SQL.IS_RESOLVED_DYNAMIC_PLAN is Y for this cursor.

    DBA_SQL_PLAN_DIRECTIVES lists the SQL plan directives created in the database; SQL plan directives are new in 12c. The directive information is buffered in the SGA and periodically written to the dictionary by MMON (much like DML and column usage tracking); DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE flushes it manually.

    select directive_id,type,reason from DBA_SQL_PLAN_DIRECTIVES
    /

    DIRECTIVE_ID                        TYPE                             REASON
    ----------------------------------- -------------------------------- -----------------------------
    10321283028317893030               DYNAMIC_SAMPLING                 JOIN CARDINALITY MISESTIMATE
    4757086536465754886                DYNAMIC_SAMPLING                 JOIN CARDINALITY MISESTIMATE
    16085268038103121260               DYNAMIC_SAMPLING                 JOIN CARDINALITY MISESTIMATE

    SQL> set pages 9999
    SQL> set lines 300
    SQL> col state format a5
    SQL> col subobject_name format a11
    SQL> col col_name format a11
    SQL> col object_name format a13
    SQL> select d.directive_id, o.object_type, o.object_name, o.subobject_name col_name, d.type, d.state, d.reason
      2    from dba_sql_plan_directives d, dba_sql_plan_dir_objects o
      3   where d.DIRECTIVE_ID=o.DIRECTIVE_ID
      4     and o.object_name in ('ORDER_ITEMS')
      5   order by d.directive_id;

    DIRECTIVE_ID OBJECT_TYPE OBJECT_NAME COL_NAME    TYPE             STATE REASON
    ------------ ----------- ----------- ----------- ---------------- ----- -------------------------------------
      1.8156E+19 COLUMN      ORDER_ITEMS UNIT_PRICE  DYNAMIC_SAMPLING NEW   SINGLE TABLE CARDINALITY MISESTIMATE
      1.8156E+19 TABLE       ORDER_ITEMS             DYNAMIC_SAMPLING NEW   SINGLE TABLE CARDINALITY MISESTIMATE
      1.8156E+19 COLUMN      ORDER_ITEMS QUANTITY    DYNAMIC_SAMPLING NEW   SINGLE TABLE CARDINALITY MISESTIMATE

    DBA_SQL_PLAN_DIRECTIVES is built on the internal objects _BASE_OPT_DIRECTIVE and _BASE_OPT_FINDING (ultimately sys.opt_directive$):

    SELECT d.dir_own#, d.dir_id, d.f_id,
           decode(type, 1, 'DYNAMIC_SAMPLING', 'UNKNOWN'),
           decode(state, 1, 'NEW', 2, 'MISSING_STATS', 3, 'HAS_STATS', 4, 'CANDIDATE', 5, 'PERMANENT', 6, 'DISABLED', 'UNKNOWN'),
           decode(bitand(flags, 1), 1, 'YES', 'NO'),
           cast(d.created as timestamp),
           cast(d.last_modified as timestamp),
           -- Please see QOSD_DAYS_TO_UPDATE and QOSD_PLUS_SECONDS for more details about 6.5
           cast(d.last_used as timestamp) - NUMTODSINTERVAL(6.5, 'day')
      FROM sys.opt_directive$ d

    SQL plan directives are managed with the DBMS_SPD package; their default retention is 53 weeks:

    Package: DBMS_SPD
    This package provides subprograms for managing Sql Plan Directives (SPD). SPD are objects generated automatically by the Oracle server. For example, if the server detects that the single table cardinality estimated by the optimizer is off from the actual number of rows returned when accessing the table, it will automatically create a directive to do dynamic sampling for the table. When any Sql statement referencing the table is compiled, the optimizer will perform dynamic sampling for the table to get a more accurate estimate.
    Notes: DBMS_SPD is an invoker-rights package. The invoker requires the ADMINISTER SQL MANAGEMENT OBJECT privilege for executing most of the subprograms of this package. Also the subprograms commit the current transaction (if any), perform the operation and commit it again.
    The DBA view dba_sql_plan_directives shows all the directives created in the system and the view dba_sql_plan_dir_objects displays the objects that are included in the directives.

    -- Default value for SPD_RETENTION_WEEKS
    SPD_RETENTION_WEEKS_DEFAULT CONSTANT varchar2(4) := '53';

    | STATE     : NEW           : Newly created directive.
    |           : MISSING_STATS : The directive objects do not have relevant stats.
    |           : HAS_STATS     : The objects have stats.
    |           : PERMANENT     : A permanent directive. Server evaluated effectiveness and these directives are useful.
    |
    | AUTO_DROP : YES : Directive will be dropped automatically if not used for SPD_RETENTION_WEEKS. This is the default behavior.
    |             NO  : Directive will not be dropped automatically.

    Procedure: flush_sql_plan_directive
    This procedure allows manually flushing the Sql Plan directives that are automatically recorded in SGA memory while executing sql statements. The information recorded in SGA is periodically flushed by oracle background processes. This procedure just provides a way to flush the information manually.

    The hidden parameter "_optimizer_dynamic_plans" (enable dynamic plans) controls the feature; it defaults to TRUE, so dynamic plans are enabled, and setting it to FALSE disables them.

    At present the typical dynamic-plan case is the choice between a nested loops join and a hash join. The default plan may start out as a nested loop; if the statistics collector counts more rows than the inflection point allows, the join is switched to a hash join, and the access path used by the subplan changes with it. For example, if a scan of the sales table returns far more rows than estimated, the final plan uses a hash join and reads the customers table with a full table scan; if only a few rows come back, the nested loop is kept and customers is accessed through a range scan on the cust_id index followed by table access by rowid.

    Cardinality feedback

    Cardinality feedback, introduced in 11.2, is an earlier form of re-optimization. During a monitored execution the actual cardinalities are compared with the optimizer's estimates; if they differ significantly, the corrected values are stored with the cursor and used to re-optimize the statement on its next execution. Typical situations in which cardinality feedback is used are tables with missing or stale statistics, multiple correlated (conjunctive or disjunctive) filter predicates on the same table, and predicates containing complex expressions whose selectivity the optimizer cannot estimate well. It can only correct the cardinalities that were actually observed during execution, so not every misestimate can be repaired, and the feedback is tied to the cursor: once the cursor is aged out of the shared pool, the information is lost.

    SELECT /*+ gather_plan_statistics */ product_name
      FROM order_items o, product_information p
     WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id

    Plan hash value: 1553478007
    ----------------------------------------------------------------------------------------------------------------------------
    | Id | Operation          | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |  OMem |  1Mem | Used-Mem |
    ----------------------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT   |                     |      1 |        |     13 |00:00:00.01 |      24 |    20 |       |       |          |
    |* 1 |  HASH JOIN         |                     |      1 |      4 |     13 |00:00:00.01 |      24 |    20 | 2061K | 2061K | 429K (0) |
    |* 2 |   TABLE ACCESS FULL| ORDER_ITEMS         |      1 |      4 |     13 |00:00:00.01 |       7 |     6 |       |       |          |
    |  3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |      1 |      1 |    288 |00:00:00.01 |      17 |    14 |       |       |          |
    ----------------------------------------------------------------------------------------------------------------------------

    SELECT /*+ gather_plan_statistics */ product_name
      FROM order_items o, product_information p
     WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id

    Plan hash value: 1553478007
    ----------------------------------------------------------------------------------------------------------------
    | Id | Operation          | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
    ----------------------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT   |                     |      1 |        |     13 |00:00:00.01 |      24 |       |       |          |
    |* 1 |  HASH JOIN         |                     |      1 |     13 |     13 |00:00:00.01 |      24 | 2061K | 2061K | 413K (0) |
    |* 2 |   TABLE ACCESS FULL| ORDER_ITEMS         |      1 |     13 |     13 |00:00:00.01 |       7 |       |       |          |
    |  3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |      1 |    288 |    288 |00:00:00.01 |      17 |       |       |          |
    ----------------------------------------------------------------------------------------------------------------

    Note
    -----
    - statistics feedback used for this statement

    SQL> select count(*) from v$SQL where SQL_ID='cz0hg2zkvd10y';

      COUNT(*)
    ----------
             2

    SQL> select sql_ID,USE_FEEDBACK_STATS FROM V$SQL_SHARED_CURSOR where USE_FEEDBACK_STATS ='Y';

    SQL_ID        U
    ------------- -
    cz0hg2zkvd10y Y

    The example above shows cardinality feedback at work. On the first execution the optimizer estimated 4 rows for ORDER_ITEMS but 13 rows came back (and 288 instead of 1 for PRODUCT_INFORMATION), so the statement was re-optimized and the second execution carries the corrected estimates. Both executions have the same plan hash value, but there are two child cursors for the statement; with the gather_plan_statistics hint the actual rows (A-Rows) can be compared with the estimates (E-Rows).

    Automatic Re-optimization

    Unlike a dynamic plan, re-optimization changes the plan only for the executions that follow the current one. A dynamic plan can adapt only certain parts of a plan, such as the join method and the related access path, while the statement is running; decisions such as the join order cannot be changed in mid-flight, and for those the optimizer relies on re-optimization. In Oracle Database 12c join statistics are monitored as well (statistics feedback extends the cardinality feedback of 11.2), and re-optimization works together with adaptive cursor sharing for statements that use bind variables. At the end of an execution the optimizer compares the statistics gathered by the collectors with its original estimates; if they differ by enough, it stores the corrected statistics, marks the cursor for re-optimization, and the next execution is parsed with the better information. The V$SQL column IS_REOPTIMIZABLE shows whether the next execution matching a child cursor will trigger re-optimization; a cursor compiled in reporting mode records the re-optimization information but does not trigger it. From the V$SQL documentation:
    IS_REOPTIMIZABLE VARCHAR2(1)
    This column shows whether the next execution matching this child cursor will trigger a reoptimization. The values are:
      Y: If the next execution will trigger a reoptimization
      R: If the child cursor contains reoptimization information, but will not trigger reoptimization because the cursor was compiled in reporting mode
      N: If the child cursor has no reoptimization information

    Example 1:

    select plan_table_output from table (dbms_xplan.display_cursor('gwf99gfnm0t7g',NULL,'ALLSTATS LAST'));

    SQL_ID  gwf99gfnm0t7g, child number 0
    -------------------------------------
    SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name
    FROM  orders o,
      ( SELECT order_id, product_name FROM order_items o, product_information p
        WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v
    WHERE o.order_id = v.order_id

    Plan hash value: 1906736282
    -------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation             | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    -------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT      |                     |      1 |        |    269 |00:00:00.02 |    1336 |     18 |       |       |          |
    |   1 |  NESTED LOOPS         |                     |      1 |      1 |    269 |00:00:00.02 |    1336 |     18 |       |       |          |
    |   2 |   MERGE JOIN CARTESIAN|                     |      1 |      4 |   9135 |00:00:00.02 |      34 |     15 |       |       |          |
    |*  3 |    TABLE ACCESS FULL  | PRODUCT_INFORMATION |      1 |      1 |     87 |00:00:00.01 |      33 |     14 |       |       |          |
    |   4 |    BUFFER SORT        |                     |     87 |    105 |   9135 |00:00:00.01 |       1 |      1 |  4096 |  4096 | 4096  (0)|
    |   5 |     INDEX FULL SCAN   | ORDER_PK            |      1 |    105 |    105 |00:00:00.01 |       1 |      1 |       |       |          |
    |*  6 |   INDEX UNIQUE SCAN   | ORDER_ITEMS_UK      |   9135 |      1 |    269 |00:00:00.01 |    1302 |      3 |       |       |          |
    -------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))
       6 - access("O"."ORDER_ID"="ORDER_ID" AND "P"."PRODUCT_ID"="O"."PRODUCT_ID")

    SQL_ID  gwf99gfnm0t7g, child number 1
    -------------------------------------
    SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name
    FROM  orders o,
      ( SELECT order_id, product_name FROM order_items o, product_information p
        WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v
    WHERE o.order_id = v.order_id

    Plan hash value: 35479787
    --------------------------------------------------------------------------------------------------------------------------------------------
    | Id  | Operation              | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
    --------------------------------------------------------------------------------------------------------------------------------------------
    |   0 | SELECT STATEMENT       |                     |      1 |        |    269 |00:00:00.01 |      63 |      3 |       |       |          |
    |   1 |  NESTED LOOPS          |                     |      1 |    269 |    269 |00:00:00.01 |      63 |      3 |       |       |          |
    |*  2 |   HASH JOIN            |                     |      1 |    313 |    269 |00:00:00.01 |      42 |      3 |  1321K|  1321K| 1234K (0)|
    |*  3 |    TABLE ACCESS FULL   | PRODUCT_INFORMATION |      1 |     87 |     87 |00:00:00.01 |      16 |      0 |       |       |          |
    |   4 |    INDEX FAST FULL SCAN| ORDER_ITEMS_UK      |      1 |    665 |    665 |00:00:00.01 |      26 |      3 |       |       |          |
    |*  5 |   INDEX UNIQUE SCAN    | ORDER_PK            |    269 |      1 |    269 |00:00:00.01 |      21 |      0 |       |       |          |
    --------------------------------------------------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       2 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")
       3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))
       5 - access("O"."ORDER_ID"="ORDER_ID")

    Note
    -----
       - statistics feedback used for this statement

    SQL> select IS_REOPTIMIZABLE,child_number FROM V$SQL A where A.SQL_ID='gwf99gfnm0t7g';

    IS CHILD_NUMBER
    -- ------------
    Y             0
    N             1

    SQL> select child_number,other_xml From v$SQL_PLAN where SQL_ID='gwf99gfnm0t7g' and other_xml is not null;

    CHILD_NUMBER OTHER_XML
    ------------ --------------------------------------------------------------------------------
               1 <other_xml><info type="cardinality_feedback">yes</info><info type="db_version">12.1.0.1</info>
                 <info type="parse_schema"><![CDATA["OE"]]></info><info type="plan_hash">35479787</info>
                 <info type="plan_hash_2">3382491761</info><outline_data><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint>
                 <hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('12.1.0.1')]]></hint><hint><![CDATA[DB_VERSION('12.1.0.1')]]></hint>
                 <hint><![CDATA[ALL_ROWS]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></hint>
                 <hint><![CDATA[MERGE(@"SEL$2")]]></hint><hint><![CDATA[OUTLINE(@"SEL$1")]]></hint><hint><![CDATA[OUTLINE(@"SEL$2")]]></hint>
                 <hint><![CDATA[FULL(@"SEL$F5BB74E1" "P"@"SEL$2")]]></hint>
                 <hint><![CDATA[INDEX_FFS(@"SEL$F5BB74E1" "O"@"SEL$2" ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PRODUCT_ID"))]]></hint>
                 <hint><![CDATA[INDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint>
                 <hint><![CDATA[LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$2" "O"@"SEL$1")]]></hint>
                 <hint><![CDATA[USE_HASH(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint>
                 <hint><![CDATA[USE_NL(@"SEL$F5BB74E1" "O"@"SEL$1")]]></hint></outline_data></other_xml>
               0 <other_xml><info type="db_version">12.1.0.1</info><info type="parse_schema"><![CDATA["OE"]]></info>
                 <info type="plan_hash">1906736282</info><info type="plan_hash_2">2579473118</info>
                 <outline_data><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint><hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('12.1.0.1')]]></hint>
                 <hint><![CDATA[DB_VERSION('12.1.0.1')]]></hint><hint><![CDATA[ALL_ROWS]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></hint>
                 <hint><![CDATA[MERGE(@"SEL$2")]]></hint><hint><![CDATA[OUTLINE(@"SEL$1")]]></hint><hint><![CDATA[OUTLINE(@"SEL$2")]]></hint>
                 <hint><![CDATA[FULL(@"SEL$F5BB74E1" "P"@"SEL$2")]]></hint>
                 <hint><![CDATA[INDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint>
                 <hint><![CDATA[INDEX(@"SEL$F5BB74E1" "O"@"SEL$2" ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PRODUCT_ID"))]]></hint>
                 <hint><![CDATA[LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$1" "O"@"SEL$2")]]></hint>
                 <hint><![CDATA[USE_MERGE_CARTESIAN(@"SEL$F5BB74E1" "O"@"SEL$1")]]></hint>
                 <hint><![CDATA[USE_NL(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint></outline_data></other_xml>

    Example 2:

    SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST'));

    PLAN_TABLE_OUTPUT
    -------------------------------------
    SQL_ID b74nw722wjvy3, child number 0
    -------------------------------------
    select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US'

    Plan hash value: 1683234692
    --------------------------------------------------------------------------------------------------
    | Id | Operation         | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |
    --------------------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT  |           |      1 |        |     29 |00:00:00.01 |      17 |    14 |
    |* 1 |  TABLE ACCESS FULL| CUSTOMERS |      1 |      8 |     29 |00:00:00.01 |      17 |    14 |
    --------------------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US'))

    SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%';

    SQL_ID        CHILD_NUMBER SQL_TEXT                                                   I
    ------------- ------------ ---------------------------------------------------------- -
    b74nw722wjvy3            0 select /*+gather_plan_statistics*/ * from customers where  Y
                                CUST_STATE_PROVINCE='CA' and country_id='US'

    EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE;

    SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON
      FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o
     WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID
       AND o.OWNER IN ('SH')
     ORDER BY 1,2,3,4,5;

    DIR_ID              OWNER OBJECT_NAME COL_NAME            OBJECT TYPE             STATE REASON
    ------------------- ----- ----------- ------------------- ------ ---------------- ----- ------------------------------------
    1484026771529551585 SH    CUSTOMERS   COUNTRY_ID          COLUMN DYNAMIC_SAMPLING NEW   SINGLE TABLE CARDINALITY MISESTIMATE
    1484026771529551585 SH    CUSTOMERS   CUST_STATE_PROVINCE COLUMN DYNAMIC_SAMPLING NEW   SINGLE TABLE CARDINALITY MISESTIMATE
    1484026771529551585 SH    CUSTOMERS                       TABLE  DYNAMIC_SAMPLING NEW   SINGLE TABLE CARDINALITY MISESTIMATE

    SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST'));

    PLAN_TABLE_OUTPUT
    -------------------------------------
    SQL_ID b74nw722wjvy3, child number 1
    -------------------------------------
    select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US'

    Plan hash value: 1683234692
    -----------------------------------------------------------------------------------------
    | Id | Operation         | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -----------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT  |           |      1 |        |     29 |00:00:00.01 |      17 |
    |* 1 |  TABLE ACCESS FULL| CUSTOMERS |      1 |     29 |     29 |00:00:00.01 |      17 |
    -----------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US'))

    Note
    -----
       - cardinality feedback used for this statement

    SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%';

    SQL_ID        CHILD_NUMBER SQL_TEXT                                                   I
    ------------- ------------ ---------------------------------------------------------- -
    b74nw722wjvy3            0 select /*+gather_plan_statistics*/ * from customers where  Y
                                CUST_STATE_PROVINCE='CA' and country_id='US'
    b74nw722wjvy3            1 select /*+gather_plan_statistics*/ * from customers where  N
                                CUST_STATE_PROVINCE='CA' and country_id='US'

    SELECT /*+gather_plan_statistics*/ CUST_EMAIL FROM CUSTOMERS WHERE CUST_STATE_PROVINCE='MA' AND COUNTRY_ID='US';
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST'));

    PLAN_TABLE_OUTPUT
    -------------------------------------
    SQL_ID 3tk6hj3nkcs2u, child number 0
    -------------------------------------
    Select /*+gather_plan_statistics*/ cust_email From customers Where cust_state_province='MA' And country_id='US'

    Plan hash value: 1683234692
    -----------------------------------------------------------------------------------------
    | Id | Operation         | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    -----------------------------------------------------------------------------------------
    |  0 | SELECT STATEMENT  |           |      1 |        |      2 |00:00:00.01 |      16 |
    |* 1 |  TABLE ACCESS FULL| CUSTOMERS |      1 |      2 |      2 |00:00:00.01 |      16 |
    -----------------------------------------------------------------------------------------

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - filter(("CUST_STATE_PROVINCE"='MA' AND "COUNTRY_ID"='US'))

    Note
    -----
       - dynamic sampling used for this statement (level=2)
       - 1 Sql Plan Directive used for this statement

    EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE;

    SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON
      FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o
     WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID
       AND o.OWNER IN ('SH')
     ORDER BY 1,2,3,4,5;

    DIR_ID              OW OBJECT_NA COL_NAME            OBJECT TYPE             STATE         REASON
    ------------------- -- --------- ------------------- ------ ---------------- ------------- ------------------------------------
    1484026771529551585 SH CUSTOMERS COUNTRY_ID          COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE
    1484026771529551585 SH CUSTOMERS CUST_STATE_PROVINCE COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE
    1484026771529551585 SH CUSTOMERS                     TABLE  DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE

    Read the article

  • Algorithm for dynamic combinations

    - by sOltan
    My code has a list called INPUTS, which contains a dynamic number of lists; let's call them A, B, C, .., N. These lists contain a dynamic number of Events. I would like to call a function with each combination of Events. To illustrate with an example: INPUTS: A(0,1,2), B(0,1), C(0,1,2,3). I need to call my function once for each combination (the input count is dynamic; in this example it is three parameters, but it can be more or less): function(A[0],B[0],C[0]) function(A[0],B[1],C[0]) function(A[0],B[0],C[1]) function(A[0],B[1],C[1]) function(A[0],B[0],C[2]) function(A[0],B[1],C[2]) function(A[0],B[0],C[3]) function(A[0],B[1],C[3]) function(A[1],B[0],C[0]) function(A[1],B[1],C[0]) function(A[1],B[0],C[1]) function(A[1],B[1],C[1]) function(A[1],B[0],C[2]) function(A[1],B[1],C[2]) function(A[1],B[0],C[3]) function(A[1],B[1],C[3]) function(A[2],B[0],C[0]) function(A[2],B[1],C[0]) function(A[2],B[0],C[1]) function(A[2],B[1],C[1]) function(A[2],B[0],C[2]) function(A[2],B[1],C[2]) function(A[2],B[0],C[3]) function(A[2],B[1],C[3]) This is what I have thought of so far: my approach is to build a list of combinations, where each combination is itself a list of "indexes" into the input arrays A, B and C. For our example, my list iCOMBINATIONS contains the following iCOMBO lists: (0,0,0) (0,1,0) (0,0,1) (0,1,1) (0,0,2) (0,1,2) (0,0,3) (0,1,3) (1,0,0) (1,1,0) (1,0,1) (1,1,1) (1,0,2) (1,1,2) (1,0,3) (1,1,3) (2,0,0) (2,1,0) (2,0,1) (2,1,1) (2,0,2) (2,1,2) (2,0,3) (2,1,3) Then I would do this: foreach( iCOMBO in iCOMBINATIONS) { foreach ( P in INPUTS ) { COMBO.Clear() foreach ( i in iCOMBO ) { COMBO.Add( P[ iCOMBO[i] ] ) } function( COMBO ) --- (instead of passing the events separately) } } But I need to find a way to build the list iCOMBINATIONS for any given number of INPUTS and their events. Any ideas? Is there actually a better algorithm than this? Any pseudo code to help me would be great. C# (or VB). Thank you
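    The index-combination idea described above is essentially an "odometer" over the input sizes (a cartesian product). As a language-agnostic sketch of that algorithm, here is a Python version rather than the C#/VB the question asks for, so treat it as pseudo code; the helper names are mine:

        from itertools import product

        def call_on_combinations(inputs, func):
            # product(*inputs) yields one tuple per combination, taking one event from each list
            for combo in product(*inputs):
                func(*combo)

        def index_combinations(sizes):
            # hand-rolled odometer over index tuples: (0,0,0), (0,0,1), ...
            # useful when no library cartesian-product routine is available
            indices = [0] * len(sizes)
            while True:
                yield tuple(indices)
                pos = len(sizes) - 1
                while pos >= 0:
                    indices[pos] += 1              # advance the rightmost "digit"
                    if indices[pos] < sizes[pos]:
                        break
                    indices[pos] = 0               # carry into the next digit to the left
                    pos -= 1
                else:
                    return                         # every digit overflowed: done

        # the question's example: A(0,1,2), B(0,1), C(0,1,2,3)
        A, B, C = [0, 1, 2], [0, 1], [0, 1, 2, 3]
        call_on_combinations([A, B, C], lambda *events: print(events))
        print(list(index_combinations([3, 2, 4])))  # the 24 index tuples listed in the question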

    Read the article

  • Solving Diophantine Equations Using Python

    - by HARSHITH
    In mathematics, a Diophantine equation (named for Diophantus of Alexandria, a third-century Greek mathematician) is a polynomial equation where the variables can only take on integer values. Although you may not realize it, you have seen Diophantine equations before: one of the most famous Diophantine equations is: We are not certain that McDonald's knows about Diophantine equations (actually we doubt that they do), but they use them! McDonald's sells Chicken McNuggets in packages of 6, 9 or 20 McNuggets. Thus, it is possible, for example, to buy exactly 15 McNuggets (with one package of 6 and a second package of 9), but it is not possible to buy exactly 16 nuggets, since no non-negative integer combination of 6's, 9's and 20's adds up to 16. To determine if it is possible to buy exactly n McNuggets, one has to solve a Diophantine equation: find non-negative integer values of a, b, and c, such that 6a + 9b + 20c = n. Write an iterative program that finds the largest number of McNuggets that cannot be bought in exact quantity. Your program should print the answer in the following format (where the correct number is provided in place of n): "Largest number of McNuggets that cannot be bought in exact quantity: n"
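    A minimal iterative sketch of the assignment in Python (the helper names are mine; the stopping rule uses the fact that once six consecutive quantities are buyable, every larger quantity is too, because another package of 6 can always be added):

        def can_buy(n):
            # is n = 6a + 9b + 20c for some non-negative integers a, b, c?
            return any((n - 9 * b - 20 * c) >= 0 and (n - 9 * b - 20 * c) % 6 == 0
                       for c in range(n // 20 + 1)
                       for b in range(n // 9 + 1))

        def largest_unbuyable():
            n, run, best = 0, 0, 0
            while run < 6:                 # stop after six buyable quantities in a row
                n += 1
                if can_buy(n):
                    run += 1
                else:
                    run, best = 0, n
            return best

        print("Largest number of McNuggets that cannot be bought in exact quantity:",
              largest_unbuyable())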

    Read the article

  • Correctly calling setGridWidth on a jqGrid inside a jQueryUI Dialog

    - by Dan
    I have a jQueryUI dialog (#locDialog) which has a jqGrid ($grid) inside it. When the Dialog opens (initially, but it gets called whenever it opens), I want the $grid to resize to the size of the $locDialog. When I do this initially, I get scrollbars inside the grid (not inside the dialog). If I debug the code, I see the width of the $grid is 677. So, I call setGridWidth() again and check the width, and now I have 659, which is 18px less: the size of the scroll area for the jqGrid (Dun-dun-dun..). When I resize the dialog, I resize the grid as well, and everything is happy; no scrollbars, except where necessary. My dialog init code:

    $locDialog = $('#location-dialog').dialog({
        autoOpen: false,
        modal: true,
        position: ['center', 100],
        width: 700,
        height: 500,
        resizable: true,
        buttons: {
            "Show Selected": function() { alert($('#grid').jqGrid('getGridParam','selarrrow')); },
            "OK": function() { $(this).dialog('close'); },
            "Cancel": function() { $(this).dialog('close'); }
        },
        open: function(event, ui) {
            $grid.setGridHeight($(this).height()-54); // No idea why 54 is the magic number here
            $grid.setGridWidth($(this).width(), true);
        },
        close: function(event, ui) {
        },
        resizeStop: function(event, ui) {
            $grid.setGridWidth($locDialog.width(), true);
            $grid.setGridHeight($locDialog.height()-54);
        }
    });

    I am curious if anyone has seen this before. Really, it isn't the end of the world if I initially have unnecessary scrollbars, but it is just odd that when I call setGridWidth initially, it doesn't take into account the scroll area of 18px. As for the magic number 54, that is the number I had to subtract from the dialog height to get the grid to render without unnecessary scrollbars.

    Read the article

  • unable to get "ItemValue" of selected item using f:selectitems tag in ace:autocompleteEntry

    - by user1641976
    I want to get the value of the selected item (the ItemValue, which is an Integer; the ItemLabel is a String) in my backing bean using the autoCompleteEntry tag of ICEfaces 3.1.0, but I get an error. Here is the code:

    <tr>
      <td>Current City</td>
      <td>
        <ace:autoCompleteEntry value="#{service.cityId}" styleClass="select-field" rows="10" width="400" filterMatchMode="">
          <f:selectItems value="#{service.cities}"></f:selectItems>
        </ace:autoCompleteEntry>
      </td>
    </tr>

    Bean is:

    public class Service {
        private Integer cityId;
        public Integer getCityId() { return cityId; }
        public void setCityId(Integer cityId) { this.cityId = cityId; }

        private List<SelectItem> cities;
        public List<SelectItem> getCities() { return cities = Dao.getCityList(); }
        public void setCities(List<SelectItem> cities) { this.cities = cities; }
    }

    The cities list has item values that are numbers and item labels that are Strings. Autocomplete works fine and shows the list of matches if I store the value in a String property of the backing bean, but when storing it in the Integer property I get this error as soon as I type something in the autocomplete field: INFO: WARNING: FacesMessage(s) have been enqueued, but may not have been displayed. sourceId=frmmaster:j_idt205:txtcity[severity=(ERROR 2), summary=(frmmaster:j_idt205:txtcity: 'a' must be a number consisting of one or more digits.), detail=(frmmaster:j_idt205:txtcity: 'a' must be a number between -2147483648 and 2147483647 Example: 9346)] Kindly reply; I need to solve this issue as soon as possible.

    Read the article

  • Wrong figures numbering - Package caption Error: Continued 'figure' after 'table'

    - by Eduardo
    Hello, I am having a problem with the numbering of figures using LaTeX; I am getting this error message: Package caption Error: Continued 'figure' after 'table' This is my code: \begin{table} \centering \subfloat[Tabla1\label{tab:Tabla1}]{ \small \begin{tabular}{ | c | c | c | c | c |} \hline \multicolumn{5}{|c|}{\textbf{Tabla 1}} \\ \hline ... ... \end{tabular} } \qquad \subfloat[Tabla2\label{tab:Tabla2}]{ \small \begin{tabular}{ | c | c | c | c | c |} \hline \multicolumn{5}{|c|}{\textbf{Tabla 2}} \\ \hline ... ... \end{tabular} } \caption{These are tables} \label{tab:Tables} \end{table} \begin{figure} \centering \subfloat[][Figure 1]{\label{fig:fig1}\includegraphics[width = 14cm]{fig1}} \qquad \subfloat[][Figure 2]{\label{fig:fig2}\includegraphics[width = 14cm]{fig2}} \end{figure} \begin{figure}[t] \ContinuedFloat \subfloat[][Figure 3]{\label{fig:fig3}\includegraphics[width = 14cm]{fig3}} \caption{Those are figures} \label{fig:Figures} \end{figure} \newpage What I want to do is to have this configuration: Table Table Figure 1 Figure 2 Figure 3 Since Figure 1 and Figure 2 are too big to fit vertically, I want Figure 3 to be alone on another page; that's why I have the \ContinuedFloat. Externally it looks fine, but the problem is the numbering: I am getting the number 5.2 for the figures, which is the same number as a figure I have before (the correct number should be 5.3). However, if I try to reference the figures \ref{fig:fig1}, \ref{fig:fig2} and \ref{fig:fig3}, I get 5.3a, 5.3b and 5.2c. The first two are right, the last one is wrong. I have been stuck with this for hours; any ideas? Thanks a lot in advance

    Read the article

  • IndexOutOfBoundsException when updating a contact in contact list - Blackberry

    - by Taha
    Software and simulator versions I am using: BlackBerry Smartphone Simulator 2.13.0.65, BlackBerry software version 5.0.0_5.0.0.14. I am looking at modifying contacts. Below is the code snippet I am using. I am getting an IndexOutOfBoundsException at the line String wtel = blackBerryContact.getString(BlackBerryContact.TEL, supportedAttributes[i]); Can someone advise what is going wrong here? Following is the code snippet:

    .....
    // Load the address book and let the user choose from the list of contacts
    BlackBerryContactList contactList = (BlackBerryContactList) PIM.getInstance().openPIMList(PIM.CONTACT_LIST, PIM.READ_WRITE);
    PIMItem pimItem = contactList.choose();
    BlackBerryContact blackBerryContact = (BlackBerryContact) pimItem;
    PIMList pimList = blackBerryContact.getPIMList();
    // get the supported attributes for Contact.TEL
    int[] supportedAttributes = pimList.getSupportedAttributes(Contact.TEL);
    Dialog.alert("Supported Attributes " + supportedAttributes.length); // gives me 8
    for (int i = 0; i < supportedAttributes.length; i++) {
        if (blackBerryContact.ATTR_WORK == supportedAttributes[i]) {
            Dialog.alert("updating Work"); // This alert is shown
            Dialog.alert("is supported " + pimList.isSupportedAttribute(BlackBerryContact.TEL, supportedAttributes[i]) + " " + pimList.getAttributeLabel(supportedAttributes[i])); // shows true and work
            String wtel = blackBerryContact.getString(BlackBerryContact.TEL, supportedAttributes[i]); // I get an IndexOutOfBoundsException here
            if (wtel != "") {
                pimItem.removeValue(BlackBerryContact.TEL, supportedAttributes[i]);
            }
            pimItem.addString(Contact.TEL, BlackBerryContact.ATTR_WORK, number); // passing the number that has to be updated
            if (pimItem.isModified()) {
                pimItem.commit();
                Dialog.alert("Updated Work Number");
            }
        }
    }
    .....

    I want to update all the supported attributes for the Contact.TEL field http://www.blackberry.com/developers/docs/5.0.0api/net/rim/blackberry/api/pdap/BlackBerryContact.html

    Field        Values Per Field   Supported Attributes
    -----------------------------------------------------------------------------
    Contact.TEL  8                  Contact.ATTR_WORK, Contact.ATTR_HOME, Contact.ATTR_MOBILE, Contact.ATTR_PAGER,
                                    Contact.ATTR_FAX, Contact.ATTR_OTHER, Contact.ATTR_HOME2, Contact.ATTR_WORK2

    Read the article

  • how to rethrow same exception in sql server

    - by Shantanu Gupta
    I want to rethrow the same exception in SQL Server that occurred in my try block. I am able to throw the same message, but I want to throw the same error.

    BEGIN TRANSACTION
    BEGIN TRY
        INSERT INTO Tags.tblDomain (DomainName, SubDomainId, DomainCode, Description)
        VALUES(@DomainName, @SubDomainId, @DomainCode, @Description)
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        declare @severity int;
        declare @state int;
        select @severity=error_severity(), @state=error_state();
        RAISERROR(@@Error,@ErrorSeverity,@state);
        ROLLBACK TRANSACTION
    END CATCH

    RAISERROR(@@Error, @ErrorSeverity, @state); This line will show an error, but I want functionality something like that. It raises an error with error number 50000, but I want the error number that I am passing in @@ERROR to be thrown, so that I can capture the error number at the front end, i.e. catch (SqlException ex) { if (ex.Number == 2627) MessageBox.Show("Duplicate value cannot be inserted"); } I want this functionality, which can't be achieved using RAISERROR. I don't want to define a custom error message at the back end.

    Read the article

  • Javascript html5 database transaction problem in loops

    - by Marek
    I'm hitting my head on this and I will be glad for any help. I need to store in a database a context (a list of string ids) for another page. I open a page with a list of artworks, and this page saves those artwork ids into the database. When I click on an artwork I open its web page, which can access the database to know the context and refer to the next and previous artwork. This is my code to retrieve the context: updateContext = function () { alert("updating context"); try { mydb.transaction( function(transaction) { transaction.executeSql("select artworks.number from artworks, collections where collections.id = artworks.section_id and collections.short_name in ('cro', 'cra', 'crp', 'crm');", [], contextDataHandler, errorHandler); }); } catch(e) { alert(e.message); } } The contextDataHandler function then iterates through the results and refills the current_context table: contextDataHandler = function(transaction, results) { try { mydb.transaction( function(transaction) { transaction.executeSql("drop table current_context;", [], nullDataHandler, errorHandler); transaction.executeSql("create table current_context(id String);", [], nullDataHandler, errorHandler); } ) } catch(e) { alert(e.message); } var s = ""; for (var i=0; i < results.rows.length; i++) { var item = results.rows.item(0); s += item['number'] + " "; mydb.transaction( function(tx) { tx.executeSql("insert into current_context(id) values (?);", [item['number']]); } ) } alert(s); } The result is that the current_context table gets deleted, recreated, and filled, but all the rows are filled with the LAST artwork id. The query to retrieve the artwork ids works, so I think it's a transaction problem, but I can't figure out where. Thanks for any help.
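    One thing worth checking, independent of the transaction ordering: if the real loop reads results.rows.item(i) rather than item(0) as pasted, then every insert callback created inside the loop closes over the same item variable, and because the transactions run asynchronously after the loop has finished, they all see its final value, which matches the "all rows hold the LAST artwork id" symptom. The sketch below reproduces the same late-binding pitfall in Python (an analogous illustration only, not the original JavaScript):

        # Callbacks created in a loop see the variable's final value when they
        # eventually run, not the value it had when the callback was created.
        callbacks = []
        for n in [1, 2, 3]:
            callbacks.append(lambda: n)       # every lambda closes over the same n
        print([cb() for cb in callbacks])     # [3, 3, 3]

        # Fix: bind the current value per iteration (a default argument here;
        # in JavaScript, a helper function that takes the value as a parameter).
        fixed = []
        for n in [1, 2, 3]:
            fixed.append(lambda n=n: n)
        print([cb() for cb in fixed])         # [1, 2, 3]

    In the JavaScript version the equivalent fix is to capture the current row value in a function of its own before handing the callback to executeSql.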

    Read the article

  • Android - Launch Intent within ExpandableListView

    - by Ryan
    Hello, I'm trying to figure out if it is possible to launch an intent within an ExpandableListView. Basically one of the "Groups" is Phone Number and it's child is the number. I want the user to be able to click it and have it automatically call that number. Is this possible? How? Here is my code to populate the ExpandableListView using a Map called "data". ExpandableListView myList = (ExpandableListView) findViewById(R.id.myList); //ExpandableListAdapter adapter = new MyExpandableListAdapter(data); List<Map<String, String>> groupData = new ArrayList<Map<String, String>>(); List<List<Map<String, String>>> childData = new ArrayList<List<Map<String, String>>>(); Iterator it = data.entrySet().iterator(); while (it.hasNext()) { //Get the key name and value for it Map.Entry pair = (Map.Entry)it.next(); String keyName = (String) pair.getKey(); String value = pair.getValue().toString(); //Add the parents -- aka main categories Map<String, String> curGroupMap = new HashMap<String, String>(); groupData.add(curGroupMap); curGroupMap.put("NAME", keyName); //Add the child data List<Map<String, String>> children = new ArrayList<Map<String, String>>(); Map<String, String> curChildMap = new HashMap<String, String>(); children.add(curChildMap); curChildMap.put("NAME", value); childData.add(children); } // Set up our adapter mAdapter = new SimpleExpandableListAdapter( mContext, groupData, R.layout.exp_list_parent, new String[] { "NAME", "IS_EVEN" }, new int[] { R.id.rowText1, R.id.rowText2 }, childData, R.layout.exp_list_child, new String[] { "NAME", "IS_EVEN" }, new int[] { R.id.rowText3, R.id.rowText4} ); myList.setAdapter(mAdapter);

    Read the article

  • Dynamic Programming Recursion and a sprinkle of Memoization

    - by Auburnate
    I have this massive array of ints from 0-4 in this triangle. I am trying to learn dynamic programming with Ruby and would like some assistance in calculating the number of paths in the triangle that meet three criterion: You must start at one of the zero points in the row with 70 elements. Your path can be directly above you one row (if there is a number directly above) or one row up heading diagonal to the left. One of these options is always available The sum of the path you take to get to the zero on the first row must add up to 140. Example, start at the second zero in the bottom row. You can move directly up to the one or diagonal left to the 4. In either case, the number you arrive at must be added to the running count of all the numbers you have visited. From the 1 you can travel to a 2 (running sum = 3) directly above or to the 0 (running sum = 1) diagonal to the left. 0 41 302 2413 13024 024130 4130241 30241302 241302413 1302413024 02413024130 413024130241 3024130241302 24130241302413 130241302413024 0241302413024130 41302413024130241 302413024130241302 2413024130241302413 13024130241302413024 024130241302413024130 4130241302413024130241 30241302413024130241302 241302413024130241302413 1302413024130241302413024 02413024130241302413024130 413024130241302413024130241 3024130241302413024130241302 24130241302413024130241302413 130241302413024130241302413024 0241302413024130241302413024130 41302413024130241302413024130241 302413024130241302413024130241302 2413024130241302413024130241302413 13024130241302413024130241302413024 024130241302413024130241302413024130 4130241302413024130241302413024130241 30241302413024130241302413024130241302 241302413024130241302413024130241302413 1302413024130241302413024130241302413024 02413024130241302413024130241302413024130 413024130241302413024130241302413024130241 3024130241302413024130241302413024130241302 24130241302413024130241302413024130241302413 130241302413024130241302413024130241302413024 0241302413024130241302413024130241302413024130 41302413024130241302413024130241302413024130241 302413024130241302413024130241302413024130241302 2413024130241302413024130241302413024130241302413 13024130241302413024130241302413024130241302413024 024130241302413024130241302413024130241302413024130 4130241302413024130241302413024130241302413024130241 30241302413024130241302413024130241302413024130241302 241302413024130241302413024130241302413024130241302413 1302413024130241302413024130241302413024130241302413024 02413024130241302413024130241302413024130241302413024130 413024130241302413024130241302413024130241302413024130241 3024130241302413024130241302413024130241302413024130241302 24130241302413024130241302413024130241302413024130241302413 130241302413024130241302413024130241302413024130241302413024 0241302413024130241302413024130241302413024130241302413024130 41302413024130241302413024130241302413024130241302413024130241 302413024130241302413024130241302413024130241302413024130241302 2413024130241302413024130241302413024130241302413024130241302413 13024130241302413024130241302413024130241302413024130241302413024 024130241302413024130241302413024130241302413024130241302413024130 4130241302413024130241302413024130241302413024130241302413024130241 30241302413024130241302413024130241302413024130241302413024130241302 241302413024130241302413024130241302413024130241302413024130241302413 1302413024130241302413024130241302413024130241302413024130241302413024 02413024130241302413024130241302413024130241302413024130241302413024130
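    Counting the paths is a good fit for memoization over (row, column, remaining sum), which stays small even with 70 rows. The sketch below is in Python rather than Ruby; it assumes triangle is the list of digit strings from the question (the single 0 at index 0, the 70-element row at index 69) and that every visited cell, including the starting zero, counts toward the 140 total:

        from functools import lru_cache

        def count_paths(triangle, start_row=69, target=140):
            rows = [list(map(int, row)) for row in triangle]

            @lru_cache(maxsize=None)
            def ways(r, c, remaining):
                remaining -= rows[r][c]        # add the cell we just arrived at
                if remaining < 0:
                    return 0                   # overshot the target, dead end
                if r == 0:                     # reached the lone zero at the top
                    return 1 if remaining == 0 else 0
                total = 0
                if c <= r - 1:                 # straight up (a cell exists directly above)
                    total += ways(r - 1, c, remaining)
                if c >= 1:                     # diagonally up and to the left
                    total += ways(r - 1, c - 1, remaining)
                return total

            return sum(ways(start_row, c, target)
                       for c, v in enumerate(rows[start_row]) if v == 0)

    The memo table has at most 70 * 70 * 141 states, so the count is effectively instant, and the same shape translates directly to a Hash-backed memo in Ruby.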

    Read the article

  • django+uploadify - don't working

    - by Erico
    Hi, I'm trying to use an example posted on GitHub; the link is http://github.com/tstone/django-uploadify. I'm having trouble getting it to work, can you help me? I followed it step by step, but it does not work. Accessing the URL /upload/, the only thing it returns is "True". Part of settings.py: import os PROJECT_ROOT_PATH = os.path.dirname(os.path.abspath(__file__)) MEDIA_ROOT = os.path.join(PROJECT_ROOT_PATH, 'media') TEMPLATE_DIRS = ( os.path.join(PROJECT_ROOT_PATH, 'templates')) urls.py from django.conf.urls.defaults import * from django.conf import settings from teste.uploadify.views import * from django.contrib import admin admin.autodiscover() urlpatterns = patterns('', (r'^admin/', include(admin.site.urls)), url(r'upload/$', upload, name='uploadify_upload'), ) views.py from django.http import HttpResponse import django.dispatch upload_received = django.dispatch.Signal(providing_args=['data']) def upload(request, *args, **kwargs): if request.method == 'POST': if request.FILES: upload_received.send(sender='uploadify', data=request.FILES['Filedata']) return HttpResponse('True') models.py from django.db import models def upload_received_handler(sender, data, **kwargs): if file: new_media = Media.objects.create( file = data, new_upload = True, ) new_media.save() upload_received.connect(upload_received_handler, dispatch_uid='uploadify.media.upload_received') class Media(models.Model): file = models.FileField(upload_to='images/upload/', null=True, blank=True) new_upload = models.BooleanField() uploadify_tags.py from django import template from teste import settings register = template.Library() @register.inclusion_tag('uploadify/multi_file_upload.html', takes_context=True) def multi_file_upload(context, upload_complete_url): """ * filesUploaded - The total number of files uploaded * errors - The total number of errors while uploading * allBytesLoaded - The total number of bytes uploaded * speed - The average speed of all uploaded files """ return { 'upload_complete_url' : upload_complete_url, 'uploadify_path' : settings.UPLOADIFY_PATH, # check this line 'upload_path' : settings.UPLOADIFY_UPLOAD_PATH, } template - uploadify/multi_file_upload.html {% load uploadify_tags %}{% multi_file_upload '/media/images/upload/' %} <script type="text/javascript" src="{{ MEDIA_URL }}js/swfobject.js"></script> <script type="text/javascript" src="{{ MEDIA_URL }}js/jquery.uploadify.js"></script> <div id="uploadify" class="multi-file-upload"><input id="fileInput" name="fileInput" type="file" /></div> <script type="text/javascript">// <![CDATA[ $(document).ready(function() { $('#fileInput').uploadify({ 'uploader' : '/media/swf/uploadify.swf', 'script' : '{% url uploadify_upload %}', 'cancelImg' : '/media/images/uploadify-remove.png/', 'auto' : true, 'folder' : '/media/images/upload/', 'multi' : true, 'onAllComplete' : allComplete }); }); function allComplete(event, data) { $('#uploadify').load('{{ upload_complete_url }}', { 'filesUploaded' : data.filesUploaded, 'errorCount' : data.errors, 'allBytesLoaded' : data.allBytesLoaded, 'speed' : data.speed }); // raise custom event $('#uploadify') .trigger('allUploadsComplete', data); } // ]]></script>
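    One concrete thing to look at is models.py: as posted, it calls upload_received.connect(...) without ever importing the signal, and the handler tests the built-in file instead of the data argument, so the handler is either never registered or never saves anything. A minimal sketch of that module, assuming the signal really is defined in teste.uploadify.views as in the urls.py above:

        # models.py -- a sketch under the assumption that upload_received lives
        # in teste.uploadify.views; adjust the import to wherever upload() is.
        from django.db import models
        from teste.uploadify.views import upload_received  # without this, connect() raises NameError

        def upload_received_handler(sender, data, **kwargs):
            # `data` is the uploaded file sent by the view; the original code
            # tested the built-in `file`, which is always truthy and never the upload.
            if data:
                Media.objects.create(file=data, new_upload=True)

        class Media(models.Model):
            file = models.FileField(upload_to='images/upload/', null=True, blank=True)
            new_upload = models.BooleanField()

        upload_received.connect(upload_received_handler, dispatch_uid='uploadify.media.upload_received')

    Media.objects.create() already saves the row, so the extra save() call in the original is redundant rather than harmful.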

    Read the article

  • UrlRewriter.net Expression Examples

    - by Tarik
    Hello, I need some web.config examples for each expression types below : $number The last substring matched by group number number. $<name> The last substring matched by group named name matched by (?< name ). ${property} The value of the property when the expression is evaluated. ${transform(value)} The result of calling the transform on the specified value. ${map:value} The result of mapping the specified value using the map. Replaced with empty string if no mapping exists. ${map:value|default} The result of mapping the specified value using the map. Replaced with the default if no mapping exists. Sample: <rewriter> <if url="/tags/(.+)" rewrite="/tagcloud.aspx?tag=$1" /> <!-- same thing as <rewrite url="/tags/(.+)" to="/tagcloud.aspx?tag=$1" /> --> </rewriter> Thank you very much !

    Read the article

  • Notification CeSetUserNotificationEx with custom sound

    - by inTagger
    Hail all! I want to display notification and play custom sound on my Windows Mobile 5/6 device. I have tried something like that, but my custom sound does not play, though message is displayed with standart sound. If i edit Wave key in [HKEY_CURRENT_USER\ControlPanel\Notifications{15F11F90-8A5F-454c-89FC-BA9B7AAB0CAD}] to sound file i need then it plays okay. But why there are flag NotificationAction.Sound and property UserNotification.Sound? It doesn't work. Also Vibration and Led don't work, if i use such flags. (You can obtain full project sources from http://dl.dropbox.com/u/1758206/Code/Thunder.zip) var trigger = new UserNotificationTrigger { StartTime = DateTime.Now + TimeSpan.FromSeconds(1), Type = NotificationType.ClassicTime }; var userNotification = new UserNotification { Sound = @"\Windows\Alarm1.wma", Text = "Hail from Penza, Russia!", Action = NotificationAction.Dialog | NotificationAction.Sound, Title = string.Empty, MaxSound = 16384 }; NotificationTools.SetUserNotification(0, trigger, userNotification); UserNotificationTrigger.cs: using System; using System.Runtime.InteropServices; namespace Thunder.Lib.ThunderMethod1 { /// <summary> /// Specifies the type of notification. /// </summary> public enum NotificationType { /// <summary> /// Equivalent to using the SetUserNotification function. /// The standard command line is supplied. /// </summary> ClassicTime = 4, /// <summary> /// System event notification. /// </summary> Event = 1, /// <summary> /// Time-based notification that is active for the time period between StartTime and EndTime. /// </summary> Period = 3, /// <summary> /// Time-based notification. /// </summary> Time = 2 } /// <summary> /// System Event Flags /// </summary> public enum NotificationEvent { None, TimeChange, SyncEnd, OnACPower, OffACPower, NetConnect, NetDisconnect, DeviceChange, IRDiscovered, RS232Detected, RestoreEnd, Wakeup, TimeZoneChange, MachineNameChange, RndisFNDetected, InternetProxyChange } /// <summary> /// Defines what event activates a notification. /// </summary> [StructLayout(LayoutKind.Sequential)] public class UserNotificationTrigger { internal int dwSize = 52; private int dwType; private int dwEvent; [MarshalAs(UnmanagedType.LPWStr)] private string lpszApplication = string.Empty; [MarshalAs(UnmanagedType.LPWStr)] private string lpszArguments; internal SYSTEMTIME stStartTime; internal SYSTEMTIME stEndTime; /// <summary> /// Specifies the type of notification. /// </summary> public NotificationType Type { get { return (NotificationType) dwType; } set { dwType = (int) value; } } /// <summary> /// Specifies the type of event should Type = Event. /// </summary> public NotificationEvent Event { get { return (NotificationEvent) dwEvent; } set { dwEvent = (int) value; } } /// <summary> /// Name of the application to execute. /// </summary> public string Application { get { return lpszApplication; } set { lpszApplication = value; } } /// <summary> /// Command line (without the application name). /// </summary> public string Arguments { get { return lpszArguments; } set { lpszArguments = value; } } /// <summary> /// Specifies the beginning of the notification period. /// </summary> public DateTime StartTime { get { return stStartTime.ToDateTime(); } set { stStartTime = SYSTEMTIME.FromDateTime(value); } } /// <summary> /// Specifies the end of the notification period. 
/// </summary> public DateTime EndTime { get { return stEndTime.ToDateTime(); } set { stEndTime = SYSTEMTIME.FromDateTime(value); } } } } UserNotification.cs: using System.Runtime.InteropServices; namespace Thunder.Lib.ThunderMethod1 { /// <summary> /// Contains information used for a user notification. /// </summary> [StructLayout(LayoutKind.Sequential)] public class UserNotification { private int ActionFlags; [MarshalAs(UnmanagedType.LPWStr)] private string pwszDialogTitle; [MarshalAs(UnmanagedType.LPWStr)] private string pwszDialogText; [MarshalAs(UnmanagedType.LPWStr)] private string pwszSound; private int nMaxSound; private int dwReserved; /// <summary> /// Any combination of the <see cref="T:Thunder.Lib.NotificationAction" /> members. /// </summary> /// <value>Flags which specifies the action(s) to be taken when the notification is triggered.</value> /// <remarks>Flags not valid on a given hardware platform will be ignored.</remarks> public NotificationAction Action { get { return (NotificationAction) ActionFlags; } set { ActionFlags = (int) value; } } /// <summary> /// Required if NotificationAction.Dialog is set, ignored otherwise /// </summary> public string Title { get { return pwszDialogTitle; } set { pwszDialogTitle = value; } } /// <summary> /// Required if NotificationAction.Dialog is set, ignored otherwise. /// </summary> public string Text { get { return pwszDialogText; } set { pwszDialogText = value; } } /// <summary> /// Sound string as supplied to PlaySound. /// </summary> public string Sound { get { return pwszSound; } set { pwszSound = value; } } public int MaxSound { get { return nMaxSound; } set { nMaxSound = value; } } } } NativeMethods.cs: using System; using System.Runtime.InteropServices; namespace Thunder.Lib.ThunderMethod1 { [StructLayout(LayoutKind.Sequential)] public struct SYSTEMTIME { public short wYear; public short wMonth; public short wDayOfWeek; public short wDay; public short wHour; public short wMinute; public short wSecond; public short wMillisecond; public static SYSTEMTIME FromDateTime(DateTime dt) { return new SYSTEMTIME { wYear = (short) dt.Year, wMonth = (short) dt.Month, wDayOfWeek = (short) dt.DayOfWeek, wDay = (short) dt.Day, wHour = (short) dt.Hour, wMinute = (short) dt.Minute, wSecond = (short) dt.Second, wMillisecond = (short) dt.Millisecond }; } public DateTime ToDateTime() { if ((((wYear == 0) && (wMonth == 0)) && ((wDay == 0) && (wHour == 0))) && ((wMinute == 0) && (wSecond == 0))) return DateTime.MinValue; return new DateTime(wYear, wMonth, wDay, wHour, wMinute, wSecond, wMillisecond); } } /// <summary> /// Specifies the action to take when a notification event occurs. /// </summary> [Flags] public enum NotificationAction { /// <summary> /// Displays the user notification dialog box. /// </summary> Dialog = 4, /// <summary> /// Flashes the LED. /// </summary> Led = 1, /// <summary> /// Dialog box z-order flag. /// Set if the notification dialog box should come up behind the password. /// </summary> Private = 32, /// <summary> /// Repeats the sound for 10–15 seconds. /// </summary> Repeat = 16, /// <summary> /// Plays the sound specified. /// </summary> Sound = 8, /// <summary> /// Vibrates the device. 
/// </summary> Vibrate = 2 } internal class NativeMethods { [DllImport("coredll.dll", CallingConvention = CallingConvention.Winapi, CharSet = CharSet.Unicode, SetLastError = true)] internal static extern int CeSetUserNotificationEx(int hNotification, UserNotificationTrigger lpTrigger, UserNotification lpUserNotification); } } NotificationTools.cs: using System.ComponentModel; using System.Runtime.InteropServices; namespace Thunder.Lib.ThunderMethod1 { public static class NotificationTools { /// <summary> /// This function modifies an existing user notification. /// </summary> /// <param name="handle">Handle of the Notification to be modified</param> /// <param name="trigger">A UserNotificationTrigger that defines what event activates a notification.</param> /// <param name="notification">A UserNotification that defines how the system should respond when a notification occurs.</param> /// <returns>Handle to the notification event if successful.</returns> public static int SetUserNotification(int handle, UserNotificationTrigger trigger, UserNotification notification) { int num = NativeMethods.CeSetUserNotificationEx(handle, trigger, notification); if (num == 0) throw new Win32Exception(Marshal.GetLastWin32Error(), "Error setting UserNotification"); return num; } } }

    Read the article

  • Flex AdvancedDataGrid - ColumnOrder With Formatter and ItemRenderer Question For Experts

    - by robin1126
    I have an AdvancedDataGrid that has around 15 columns. Some are strings, some are numbers. I have shown 4 columns below. The number columns are formatted to zero-digit and two-digit precision. The itemRenderer just shows the number in blue if it is positive and in red if it is negative. It looks something like below ... I am trying to save the user's column-order settings when the application is closed and reload the same order when the user opens the application again. I am using SharedObjects and below is the code. for(var i:int=0; i< adgrid.columns.length;i++){ var columnObject:Object = new Object(); columnObject.columnDataField = adgrid.columns[i].dataField as String; columnObject.columnHeader =adgrid.columns[i].headerText as String; columnObject.width = adgrid.columns[i].width; columnArray.push(columnObject); } and then I save the columnArray to the SharedObject. I retrieve them using the code below: for(var i:int=0; i < columnArray.length; i++){ adgrid.columns[i].dataField =columnArray[i].columnDataField; adgrid.columns[i].headerText =columnArray[i].columnHeader; adgrid.columns[i].width = columnArray[i].width; } How can I save and reload the Formatter and ItemRenderer data? I am having trouble saving the formatter and item renderer and reloading them again. I would really appreciate it if you could show the code. How can I reshuffle the columns while preserving all of their properties in the SharedObject and recovering them again?

    Read the article

  • Dynamically created "CheckBoxList" in placeholder controls throwing null reference exception error d

    - by newName
    I have a web form that dynamically creates a number of CheckBoxList controls filled with data, based on the number of rows in the database. These are then added to a placeholder to be displayed. There is a button that, when clicked, should add the selected checkboxes' values into the database, but right now when the button is clicked, after the postback the page shows the error "System.NullReferenceException". The following code is written in Page_Load inside (!Page.IsNotPostBack), in a loop that dynamically creates a number of CheckBoxList controls: CheckBoxList chkContent = new CheckBoxList(); chkContent.ID = chkIDString; //chkIDString is an incremental int based on the row count chkContent.RepeatDirection = RepeatDirection.Horizontal; foreach (List<t> contentList in List<t>) //data retrieved as List<t> using LINQ { ListItem contents = new ListItem(); contents.Text = contentList.Title; contents.Value = contentList.contentID.ToString(); chkContent.Items.Add(contents); } plcSchool.Controls.Add(chkContent); //plcSchool is my placeholder plcSchool.Controls.Add(new LiteralControl("<br>")); protected void btnAdd_Click(object sender, EventArgs e) { CheckBoxList cbl = Page.FindControl("chkContent4") as CheckBoxList; Response.Write(cbl.SelectedValue.ToString()); // right now I'm just testing to get the value from one of the checkbox lists } Is anyone able to help? It seems the controls are not recreated after the postback, so FindControl can't find the control and the null reference exception is thrown.

    Read the article

  • Whats wrong with this HQL query?

    - by ManBugra
    did i encounter a hibernate bug or do i have an error i dont see: select enty.number from EntityAliasName enty where enty.myId in ( select cons.myId from Consens cons where cons.number in ( select ord.number from Orders ord where ord.customer = :customer and ord.creationDate < ( select max(ord.creationDate) from Orders ord where ord.customer = :customer ) ) ) what i do get is the following: org.hibernate.util.StringHelper.root(StringHelper.java:257) Caused by: java.lang.NullPointerException at org.hibernate.util.StringHelper.root(StringHelper.java:257) at org.hibernate.persister.entity.AbstractEntityPersister.getSubclassPropertyTableNumber(AbstractEntityPersister.java:1391) at org.hibernate.persister.entity.BasicEntityPropertyMapping.toColumns(BasicEntityPropertyMapping.java:54) at org.hibernate.persister.entity.AbstractEntityPersister.toColumns(AbstractEntityPersister.java:1367) at org.hibernate.hql.ast.tree.FromElement.getIdentityColumn(FromElement.java:320) at org.hibernate.hql.ast.tree.IdentNode.resolveAsAlias(IdentNode.java:154) at org.hibernate.hql.ast.tree.IdentNode.resolve(IdentNode.java:100) at org.hibernate.hql.ast.tree.FromReferenceNode.resolve(FromReferenceNode.java:117) at org.hibernate.hql.ast.tree.FromReferenceNode.resolve(FromReferenceNode.java:113) at org.hibernate.hql.ast.HqlSqlWalker.resolve(HqlSqlWalker.java:854) at org.hibernate.hql.antlr.HqlSqlBaseWalker.propertyRef(HqlSqlBaseWalker.java:1172) at org.hibernate.hql.antlr.HqlSqlBaseWalker.propertyRefLhs(HqlSqlBaseWalker.java:5167) at org.hibernate.hql.antlr.HqlSqlBaseWalker.propertyRef(HqlSqlBaseWalker.java:1133) at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExpr(HqlSqlBaseWalker.java:1993) at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectExprList(HqlSqlBaseWalker.java:1932) at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectClause(HqlSqlBaseWalker.java:1476) at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:580) at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:288) at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:231) at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254) at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185) at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136) at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101) at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80) at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:94) at org.hibernate.impl.SessionFactoryImpl.checkNamedQueries(SessionFactoryImpl.java:484) at org.hibernate.impl.SessionFactoryImpl.<init>(SessionFactoryImpl.java:394) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1341) using: Hibernate 3.3.2.GA / postgresql

    Read the article

  • Database structure - is mySQL the right choice?

    - by Industrial
    Hi everyone, We are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products) and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters). Based on various references and sources available, we have made lists of the pros and cons of all major, well-known database patterns to solve this. After comparing these, we have come up with two final alternatives: EAV (Entity-attribute-value model): Pros: The database is used for all sorting. Cons: All related queries will include a number of joins between multiple tables in order to complete the collection of data. SLOB (Serialized LOB, also known as Facade?): Pros: Very flexible. Keeps the number of necessary joins low compared to an EAV design pattern. Easy to update/add/remove data for each product. Cons: All sorting will be done by the application instead of the database. Will use a lot of resources (memory?) when big datasets are processed by a large number of users. Our main questions: Which pattern/structure would you use, or maybe even a different solution? Are there better databases than MySQL available nowadays to accomplish what we want? Thanks a lot! Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters
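    To make the SLOB trade-off concrete, here is a small hypothetical Python sketch (not tied to any particular schema; the column and option names are made up) of what "all sorting is done by the application" looks like once product options live in one serialized column:

        import json

        # Hypothetical rows as they might come back from MySQL: (product_id, options_blob).
        # With a serialized LOB the database only sees an opaque text column, so every
        # filter or sort on an option has to happen here, in application code.
        rows = [
            (1, json.dumps({"color": "red", "weight_kg": 12.5})),
            (2, json.dumps({"color": "blue", "weight_kg": 3.2})),
            (3, json.dumps({"color": "red", "weight_kg": 7.0})),
        ]

        products = [(pid, json.loads(blob)) for pid, blob in rows]
        red_by_weight = sorted(
            (p for p in products if p[1].get("color") == "red"),
            key=lambda p: p[1].get("weight_kg", 0.0),
        )
        print(red_by_weight)  # no database index can help with this step

    Under EAV the same filter and sort become SQL over the attribute table and its joins, which is exactly the pro/con split listed above.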

    Read the article

  • car race game collision condition.

    - by ashok patidar
    in this how can rotate car when it goes to collied with the track side. package { import flash.display.MovieClip; import flash.events.Event; import flash.events.KeyboardEvent; import flash.text.TextField; import flash.ui.Keyboard; import Math; /** * ... * @author Ashok */ public class F1race extends MovieClip { public var increment:Number = 0; //amount the car moves each frame public var posNeg:Number = 1; public var acceleration:Number = .05; //acceleration of the car, or the amount increment gets increased by. public var speed:Number = 0; //the speed of the car that will be displayed on screen public var maxSpeed:Number = 100; public var keyLeftPressed:Boolean; public var keyRightPressed:Boolean; public var keyUpPressed:Boolean; public var keyDownPressed:Boolean; public var spedometer:TextField = new TextField(); public var carRotation:Number ; public var txt_hit:TextField = new TextField(); public function F1race() { carRotation = carMC.rotation; trace(carMC.rotation); //addChild(spedometer); //spedometer.x = 0; //spedometer.y = 0; addChild(txt_hit); txt_hit.x = 0; txt_hit.y = 100; //rotation of the car addEventListener(Event.ENTER_FRAME, onEnterFrameFunction); stage.addEventListener(KeyboardEvent.KEY_DOWN, keyPressed,false); stage.addEventListener(KeyboardEvent.KEY_UP, keyReleased,false); carMC.addEventListener(Event.ENTER_FRAME, carOver_road) } public function carOver_road(event:Event):void { //trace(texture.hitTestPoint(carMC.x,carMC.y,true),"--"); /* if(!texture.hitTestPoint(carMC.x,carMC.y,true)) { txt_hit.text = "WRONG WAY"; if(increment!=0) { increment=1; } } else { txt_hit.text = ""; //increment++; }*/ if (roadless.hitTestPoint(carMC.x - carMC.width / 2, carMC.y,true)) { trace("left Hit" + carMC.rotation); //acceleration = .005; //if(carMC.rotation>90 || carMC.rotation>90 //carMC.rotation += 2; if ((carMC.rotation >= 90) && (carMC.rotation <= 180)) { carMC.rotation += 3; carMC.x += 3; } if ((carMC.rotation <= -90) && (carMC.rotation >= -180)) { carMC.rotation += 3; texture.y -= 3; } if ((carMC.rotation > -90) && (carMC.rotation <= -1)) { carMC.rotation += 3; texture.y -= 3; } if(increment<0) { increment += 1.5 * acceleration; } if(increment>0) { increment -= 1.5 * acceleration; } } if (roadless.hitTestPoint(carMC.x + carMC.width / 2, carMC.y,true)) { trace("left right"); //carMC.rotation -= 2; if(increment<0) { increment += 1.5 * acceleration; } if(increment>0) { increment -= 1.5 * acceleration; } } if (roadless.hitTestPoint(carMC.x, carMC.y- carMC.height / 2,true)) { trace("left right"); //carMC.rotation -= 2; if(increment<0) { increment += 1.5 * acceleration; } if(increment>0) { increment -= 1.5 * acceleration; } } if (roadless.hitTestPoint(carMC.x, carMC.y+ carMC.height / 2,true)) { trace("left right"); //carMC.rotation -= 2; if(increment<0) { increment += 1.5 * acceleration; } if(increment>0) { increment -= 1.5 * acceleration; } } if ((!roadless.hitTestPoint(carMC.x - carMC.width / 2, carMC.y, true)) && (!roadless.hitTestPoint(carMC.x, carMC.y- carMC.height / 2,true)) && (!roadless.hitTestPoint(carMC.x, carMC.y+ carMC.height / 2,true)) && (!roadless.hitTestPoint(carMC.x, carMC.y+ carMC.height / 2,true))) { //acceleration = .05; } } public function onEnterFrameFunction(events:Event):void { speed = Math.round((increment) * 5); spedometer.text = String(speed); if ((carMC.rotation < 180)&&(carMC.rotation >= 0)){ carRotation = carMC.rotation; posNeg = 1; } if ((carMC.rotation < 0)&&(carMC.rotation > -180)){ carRotation = -1 * carMC.rotation; posNeg = -1; } if 
(keyRightPressed) { carMC.rotation += .5 * increment; carMC.LWheel.rotation = 8; carMC.RWheel.rotation = 8; steering.gotoAndStop(2); } if (keyLeftPressed) { carMC.rotation -= .5 * increment; carMC.LWheel.rotation = -8; carMC.RWheel.rotation = -8; steering.gotoAndStop(3); } if (keyDownPressed) { steering.gotoAndStop(1); carMC.LWheel.rotation = 0; carMC.RWheel.rotation = 0; increment -= 0.5 * acceleration; texture.y -= ((90 - carRotation) / 90) * increment; roadless.y = texture.y; if (((carMC.rotation > 90)&&(carMC.rotation < 180))||((carMC.rotation < -90)&&(carMC.rotation > -180))) { texture.x += posNeg * (((((1 - (carRotation / 360)) * 360) - 180) / 90) * increment); roadless.x = texture.x; } if (((carMC.rotation <= 90)&&(carMC.rotation > 0))||((carMC.rotation >= -90)&&(carMC.rotation < -1))) { texture.x += posNeg * ((carRotation) / 90) * increment; roadless.x = texture.x; } increment -= 1 * acceleration; if ((Math.abs(speed)) < (Math.abs(maxSpeed))) { increment += acceleration; } if ((Math.abs(speed)) == (Math.abs(maxSpeed))) { trace("hello"); } } if (keyUpPressed) { steering.gotoAndStop(1); carMC.LWheel.rotation = 0; carMC.RWheel.rotation = 0; //trace(carMC.rotation); texture.y -= ((90 - carRotation) / 90) * increment; roadless.y = texture.y; if (((carMC.rotation > 90)&&(carMC.rotation < 180))||((carMC.rotation < -90)&&(carMC.rotation > -180))) { texture.x += posNeg * (((((1 - (carRotation / 360)) * 360) - 180) / 90) * increment); roadless.x = texture.x; } if (((carMC.rotation <= 90)&&(carMC.rotation > 0))||((carMC.rotation >= -90)&&(carMC.rotation < -1))) { texture.x += posNeg * ((carRotation) / 90) * increment; roadless.x = texture.x; } increment += 1 * acceleration; if ((Math.abs(speed)) < (Math.abs(maxSpeed))) { increment += acceleration; } } if ((!keyUpPressed) && (!keyDownPressed)){ /*if (increment > 0 && (!keyUpPressed)&& (!keyDownPressed)) { //texture.y -= ((90-carRotation)/90)*increment; increment -= 1.5 * acceleration; } if((increment==0)&&(!keyUpPressed)&& (!keyDownPressed)) { increment = 0; } if((increment<0)&&(!keyUpPressed)&& (!keyDownPressed)) { increment += 1.5 * acceleration; }*/ if (increment > 0) { increment -= 1.5 * acceleration; texture.y -= ((90 - carRotation) / 90) * increment; roadless.y = texture.y; if (((carMC.rotation > 90)&&(carMC.rotation < 180))||((carMC.rotation < -90)&&(carMC.rotation > -180))) { texture.x += posNeg * (((((1 - (carRotation / 360)) * 360) - 180) / 90) * increment); roadless.x = texture.x; } if (((carMC.rotation <= 90)&&(carMC.rotation > 0))||((carMC.rotation >= -90)&&(carMC.rotation < -1))) { texture.x += posNeg * ((carRotation) / 90) * increment; roadless.x = texture.x; } } if (increment == 0) { increment = 0; } if (increment < 0) { increment += 1.5 * acceleration; texture.y -= ((90 - carRotation) / 90) * increment; roadless.y = texture.y; if (((carMC.rotation > 90)&&(carMC.rotation < 180))||((carMC.rotation < -90)&&(carMC.rotation > -180))) { texture.x += posNeg * (((((1 - (carRotation / 360)) * 360) - 180) / 90) * increment); roadless.x = texture.x; } if (((carMC.rotation <= 90)&&(carMC.rotation > 0))||((carMC.rotation >= -90)&&(carMC.rotation < -1))) { texture.x += posNeg * ((carRotation) / 90) * increment; roadless.x = texture.x; } } } } public function keyPressed(event:KeyboardEvent):void { trace("keyPressed"); if (event.keyCode == Keyboard.LEFT) { keyLeftPressed = true; } if (event.keyCode == Keyboard.RIGHT) { keyRightPressed = true; } if (event.keyCode == Keyboard.UP) { keyUpPressed = true; } if (event.keyCode == Keyboard.DOWN) { 
keyDownPressed = true; } } public function keyReleased(event:KeyboardEvent):void { trace("keyReleased..."); //increment -= 1.5 * acceleration; //increment--; if (event.keyCode == Keyboard.LEFT) { keyLeftPressed = false; } if (event.keyCode == Keyboard.RIGHT) { keyRightPressed = false; } if (event.keyCode == Keyboard.UP) { keyUpPressed = false; } if (event.keyCode == Keyboard.DOWN) { keyDownPressed = false; } } } }

    Read the article

  • Get JVM to grow memory demand as needed up to size of VM limit?

    - by Ira Baxter
    We ship a Java application whose memory demand can vary quite a lot depending on the size of the data it is processing. If you don't set the max VM (virtual memory) size, quite often the JVM quits with a GC failure on big data. What we'd like to see is the JVM requesting more memory as GC fails to provide enough, until the total available VM is exhausted: e.g., start with 128 MB and increase geometrically (or in some other step) whenever GC fails. The JVM ("java") command line allows explicit setting of max VM sizes (the various -Xm* options), and you'd think that would be designed to be adequate. We try to do this in a .cmd file that we ship with the application. But if you pick any specific number, you get one of two bad behaviors: 1) if your number is small enough to work on most target systems (e.g., 1 GB), it isn't big enough for big data, or 2) if you make it very large, the JVM refuses to run on those systems whose actual VM is smaller than specified. How does one set up Java to use the available VM when needed, without knowing that number in advance, and without grabbing it all on startup?

    Read the article

  • Can't get any speedup from parallelizing Quicksort using Pthreads

    - by Murat Ayfer
    I'm using Pthreads to create a new thread for each partition after the list is split into the right and left halves (less than and greater than the pivot). I do this recursively until I reach the maximum number of allowed threads. When I use printfs to follow what goes on in the program, I clearly see that each thread is doing its delegated work in parallel. However, using a single process is always the fastest. As soon as I try to use more threads, the time it takes to finish almost doubles, and it keeps increasing with the number of threads. I am allowed to use up to 16 processors on the server I am running it on. The algorithm goes like this: Split the array into right and left halves by comparing the elements to the pivot. Start a new thread for the right and the left, and wait until the threads join back. If there are more available threads, they can create more recursively. Each thread waits for its children to join. Everything makes sense to me, and sorting works perfectly well, but more threads make it slow down immensely. I tried setting a minimum number of elements per partition for a thread to be started (e.g. 50000). I tried an approach where, when a thread is done, it allows another thread to be started, which leads to hundreds of threads starting and finishing throughout; I think the overhead was way too much. So I got rid of that, and if a thread was done executing, no new thread was created. I got a little more speedup but it was still a lot slower than a single process. The code I used is below. http://pastebin.com/UaGsjcq2 Does anybody have any clue as to what I could be doing wrong?
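    For what it's worth, the usual cure is to combine the two ideas already tried: cap the number of threads with a recursion-depth limit, fall back to a plain sequential sort below a size threshold, and reuse the current thread for one of the two halves so each spawn actually buys a new core. The sketch below shows that structure in Python rather than C/pthreads; note that CPython's GIL means Python threads will not give a real CPU speedup here, so only the shape of the code (thread per large partition, threshold, join) is meant to carry over:

        import threading

        THRESHOLD = 50_000   # partitions smaller than this are sorted in the current thread
        MAX_DEPTH = 4        # 2**4 leaf tasks roughly matches a 16-core machine

        def partition(a, lo, hi):
            # Lomuto partition around a[hi - 1]; returns the pivot's final index.
            pivot = a[hi - 1]
            i = lo
            for j in range(lo, hi - 1):
                if a[j] < pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi - 1] = a[hi - 1], a[i]
            return i

        def quicksort(a, lo=0, hi=None, depth=0):
            if hi is None:
                hi = len(a)
            if hi - lo < 2:
                return
            p = partition(a, lo, hi)
            if depth >= MAX_DEPTH or hi - lo < THRESHOLD:
                quicksort(a, lo, p, depth + 1)        # small or deep: stay sequential
                quicksort(a, p + 1, hi, depth + 1)
            else:
                left = threading.Thread(target=quicksort, args=(a, lo, p, depth + 1))
                left.start()                           # only a large partition gets its own thread
                quicksort(a, p + 1, hi, depth + 1)     # the current thread sorts the other half
                left.join()

    With pthreads the same shape means at most one pthread_create per split (the calling thread sorts the other half) plus a pthread_join; creating a thread for every partition regardless of size mostly measures thread-creation and cache-contention overhead.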

    Read the article

  • Python: speed up removal of every n-th element from list.

    - by ChristopheD
    I'm trying to solve this programming riddle, and although the solution (see code below) works correctly, it is too slow for successful submission. Any pointers on how to make this run faster (removal of every n-th element from a list)? Or suggestions for a better algorithm to calculate the same; it seems I can't think of anything other than brute force for now... Basically the task at hand is: GIVEN: L = [2,3,4,5,6,7,8,9,10,11,........] 1. Take the first remaining item in list L (in the general case 'n'). Move it to the 'lucky number list'. Then drop every 'n-th' item from the list. 2. Repeat 1 TASK: Calculate the n-th number from the 'lucky number list' ( 1 <= n <= 3000) My current code (it calculates the first 3000 lucky numbers in about a second on my machine - but that is unfortunately too slow): """ SPOJ Problem Set (classical) 1798. Assistance Required URL: http://www.spoj.pl/problems/ASSIST/ """ sieve = range(3, 33900, 2) luckynumbers = [2] while True: wanted_n = input() if wanted_n == 0: break while len(luckynumbers) < wanted_n: item = sieve[0] luckynumbers.append(item) items_to_delete = set(sieve[::item]) sieve = filter(lambda x: x not in items_to_delete, sieve) print luckynumbers[wanted_n-1]
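    One likely culprit is that each round rebuilds the sieve with filter() and a set, which walks the whole remaining list in Python-level code for every lucky number. Deleting by index with an extended slice removes the same elements (indices 0, n, 2n, ...) in a single C-level pass and is usually much faster. A minimal sketch of that idea, written for Python 3 and assuming the 33900 upper bound from the original program is high enough to yield 3000 lucky numbers:

        def lucky_numbers(how_many, bound=33900):
            sieve = list(range(3, bound, 2))
            lucky = [2]
            while len(lucky) < how_many:
                n = sieve[0]
                lucky.append(n)
                del sieve[::n]            # drop indices 0, n, 2n, ... in one pass
            return lucky

        print(lucky_numbers(3000)[-1])    # the 3000th lucky number

    The rest of the original structure (reading n until 0 and indexing lucky[n-1]) can stay exactly as it is.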

    Read the article
