Search Results

Search found 12428 results on 498 pages for 'wait types'.

  • Possibility of language data type not mapped to shipped .NET Framework?

    - by John K
    Does anybody know of a managed programming language implemented on .NET that contains a specialized data type that is not mapped through to the Common Type System/FCL/BCL or one that does not have a shipped .NET equivalent (e.g. shipped standard types like System.String, System.Int32)? This question would likely come from the perspective of someone porting a compiler (although I'm not doing that). Is it as simple as the language creating a new data type outside the BCL/FCL for its specialized type? If so does this hinder interoperability between programming languages that are otherwise accustomed to mapping all their built-in data types to what's in the BCL/FCL, like Visual Basic and C#? I can imagine this situation might come about if an obscure language compiler of some kind is ported to .NET for which there is no direct mapping of one of its implicit data types to the shipped Framework. How is this situation supported or allowed in general? What would be the expectation of the compiler and the Common Language Runtime?
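
    For what it's worth, a compiler targeting .NET is free to introduce such a type itself: as long as it is emitted as an ordinary CTS class or struct, other languages can consume it through its public surface even though nothing in the BCL maps to it directly, and interoperability only suffers if that surface is not CLS-compliant. A minimal hypothetical sketch (the Rational type and all its members are invented for illustration):

        // A language-specific numeric type with no shipped BCL equivalent.
        // It is still an ordinary CTS value type, so C# or Visual Basic can
        // consume it even though their own syntax has no literal for it.
        public struct Rational
        {
            public readonly long Numerator;
            public readonly long Denominator;

            public Rational(long numerator, long denominator)
            {
                if (denominator == 0)
                    throw new System.ArgumentException("denominator must be non-zero");
                Numerator = numerator;
                Denominator = denominator;
            }

            // Conversions to shipped types are the usual interop bridge.
            public double ToDouble()
            {
                return (double)Numerator / Denominator;
            }
        }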

    Read the article

  • Dynamically adding @property in python

    - by rz
    I know that I can dynamically add an instance method to an object by doing something like:

        import types

        def my_method(self):
            # logic of method
            # ...

        # instance is some instance of some class
        instance.my_method = types.MethodType(my_method, instance)

    Later on I can call instance.my_method() and self will be bound correctly and everything works. Now, my question: how do I do the exact same thing to obtain the behavior that decorating the new method with @property would give? I would guess something like:

        instance.my_method = types.MethodType(my_method, instance)
        instance.my_method = property(instance.my_method)

    But doing that, instance.my_method returns a property object.

    Read the article

  • Is there a limit to the number of DataContracts that can be used by a WCF Service?

    - by Chris
    Using WCF 3.5 SP1, VS2008. Building a WCF service that exposes about 10 service methods. We have defined about 40 [DataContract] types that are used by the service. We have now found that an additional [DataContract] type added to the project (in the same namespace as the existing types) is not properly exposed: the new type is not in the XSD schemas generated with the WSDL. We have gone so far as to copy and rename an existing (and working) type, but it too is not present in the generated WSDL/XSD. We've tried this on two different developer machines, same problem. Is there a limit to the number of types that can be exposed as [DataContract] by a service? Per namespace?
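
    For what it's worth, WCF metadata export is not bounded by a type count: WSDL/XSD generation only walks data contract types that are statically reachable from the service contract (operation parameters, return types, and declared known types). A type that no operation references will therefore be missing from the schemas. A hedged sketch of the two usual ways to pull such a type in (IMyService, ExistingType and NewType are invented names):

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        [KnownType(typeof(NewType))] // option 1: declare it on a type that IS reachable
        public class ExistingType
        {
            [DataMember]
            public string Name { get; set; }
        }

        [DataContract]
        public class NewType
        {
            [DataMember]
            public int Value { get; set; }
        }

        [ServiceContract]
        public interface IMyService
        {
            // option 2: declare the extra type on the contract itself
            [OperationContract]
            [ServiceKnownType(typeof(NewType))]
            ExistingType GetData(int id);
        }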

    Read the article

  • Best practices about creating a generic object dictionary in C#? Is this bad?

    - by JimDaniel
    For clarity I am using C# 3.5/Asp.Net MVC 2. Here is what I have done: I wanted the ability to add/remove functionality to an object at run-time. So I simply added a generic object dictionary to my class like this:

        public Dictionary<int, object> Components { get; set; }

    Then I can add/remove any kind of .Net object into this dictionary at run-time. To insert an object I do something like this:

        var tag = new Tag();
        myObject.Components.Add((int)Types.Components.Tag, tag);

    Then to retrieve I just do this:

        if(myObject.Components.ContainsKey((int)Types.Components.Tag))
        {
            var tag = myObject.Components[(int)Types.Components.Tag] as Tag;
            if(tag != null)
            {
                //do stuff
            }
        }

    Somehow I feel sneaky doing this. It works okay, but I am wondering what you guys think about it as a best practice. Thanks for your input, Daniel
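
    One alternative worth considering (a sketch, not from the original post) is to key the dictionary by System.Type and wrap access in generic methods; the enum bookkeeping and the cast-and-null-check boilerplate then disappear at the call site:

        using System;
        using System.Collections.Generic;

        public class ComponentContainer
        {
            private readonly Dictionary<Type, object> components =
                new Dictionary<Type, object>();

            // Store one component per type; the type itself is the key.
            public void Set<T>(T component) where T : class
            {
                components[typeof(T)] = component;
            }

            // Returns null when no component of that type was added.
            public T Get<T>() where T : class
            {
                object value;
                return components.TryGetValue(typeof(T), out value) ? (T)value : null;
            }
        }

        // Usage:
        //   myObject.Components.Set(new Tag());
        //   Tag tag = myObject.Components.Get<Tag>();
        //   if (tag != null) { /* do stuff */ }

    The trade-off is that this stores at most one component per type; if an object may carry several Tag instances, keep a list as the dictionary value instead.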

    Read the article

  • Creating dynamic generics at runtime using Reflection

    - by MPhlegmatic
    I'm trying to convert a Dictionary<dynamic, dynamic> to a statically-typed one by examining the types of the keys and values and creating a new Dictionary of the appropriate types using Reflection. If I know the key and value types, I can do the following:

        Type dictType = typeof(Dictionary<,>);
        newDict = Activator.CreateInstance(dictType.MakeGenericType(new Type[] { keyType, valueType }));

    However, I may need to create, for example, a Dictionary<MyKeyType, dynamic> if the values are not all of the same type, and I can't figure out how to specify the dynamic type, since typeof(dynamic) isn't viable. How would I go about doing this, and/or is there a simpler way to accomplish what I'm trying to do?
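
    One fact that may help here: dynamic is purely a compile-time notion. In the runtime type system it is represented as System.Object (the compiler just adds a DynamicAttribute to signatures), so a Dictionary<MyKeyType, dynamic> is, at runtime, exactly a Dictionary<MyKeyType, object>, and you can construct it with typeof(object). A short sketch (string stands in for MyKeyType):

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                Type keyType = typeof(string); // stands in for MyKeyType
                // 'dynamic' has no Type object of its own; the CLR sees it
                // as System.Object, so use typeof(object) for that slot.
                Type dictType = typeof(Dictionary<,>)
                    .MakeGenericType(new Type[] { keyType, typeof(object) });
                object newDict = Activator.CreateInstance(dictType);
                Console.WriteLine(newDict.GetType());
                // prints: System.Collections.Generic.Dictionary`2[System.String,System.Object]
            }
        }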

    Read the article

  • C#: Oracle Data Type Equivalence with OracleDbType

    - by Partial
    Situation: I am creating an app in C# that uses Oracle.DataAccess.Client (11g) to do certain operations on an Oracle database with stored procedures. I am aware that there is an enum (OracleDbType) that contains the Oracle data types, but I am not sure which one to use for certain types. Questions: What is the equivalent Oracle PL/SQL data type for each enumerated type in the OracleDbType enumeration? There are three integer types (Int16, Int32, Int64) in OracleDbType... how do I know which one to use, or are they all supposed to work?
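
    A hedged sketch of typical bindings (the procedure and parameter names are invented). PL/SQL NUMBER has no fixed width, so Int16, Int32 and Int64 all bind to it; the choice only constrains the .NET-side range, with Int64 the safe default when values may exceed 2^31:

        using System;
        using System.Data;
        using Oracle.DataAccess.Client;

        static class OracleTypeDemo
        {
            static void CallMyProc(string connectionString)
            {
                using (OracleConnection conn = new OracleConnection(connectionString))
                using (OracleCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "my_pkg.my_proc"; // hypothetical procedure
                    cmd.CommandType = CommandType.StoredProcedure;

                    // NUMBER / PLS_INTEGER argument
                    cmd.Parameters.Add("p_id", OracleDbType.Int32).Value = 42;
                    // VARCHAR2 argument
                    cmd.Parameters.Add("p_name", OracleDbType.Varchar2).Value = "abc";
                    // DATE argument
                    cmd.Parameters.Add("p_when", OracleDbType.Date).Value = DateTime.Now;

                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }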

    Read the article

  • What is a common name for inheritance, composition, aggregation, delegation?

    - by Eye of Hell
    Hello. After a program is separated into small objects, these objects must be connected with each other. There are different types of connection: inheritance, composition, aggregation, delegation. These types have many kinds and patterns, like loose coupling, tight coupling, inversion of control, delegation via interfaces, etc. What is a correct common name for the mentioned types of connections? I can suggest that they are all called 'coupling', but I can't find any good classification in Google, so maybe I'm trying to use a wrong term? Does anyone know a solid, trusted classification that I can use for terminology?

    Read the article

  • Django - User account with multiple identities

    - by Scott Willman
    Synopsis: Each User account has a UserProfile to hold extended info like phone numbers, addresses, etc. Then, a User account can have multiple Identities. There are multiple types of identities that hold different types of information. The structure would be like so:

        User
        |<-FK- UserProfile
        |
        |<-FK- IdentityType1
        |<-FK- IdentityType1
        |<-FK- IdentityType2
        |<-FK- IdentityType3 (current)
        |<-FK- IdentityType3
        |<-FK- IdentityType3

    The User account can be connected to n Identities of different types but can only use one Identity at a time. Seemingly, the Django way would be to collect all of the connected identities (user.IdentityType1_set.select_related()) into a QuerySet and then check each one for some kind of 'current' field. Question: Can anyone think of a better way to select the 'current' marked Identity than doing three DB queries (one for each IdentityType)?

    Read the article

  • Is there any way to retrieve a float from a varargs function's parameters?

    - by Jared P
    If the function was defined with a prototype which explicitly stated the types of the parameters, e.g.

        void somefunc(int arg1, float arg2);

    but is implemented as

        void somefunc(int arg1, ...) { ... }

    is it possible to use va_arg to retrieve a float? This is normally prevented because varargs functions have implicit type promotions, like float to double, so trying to retrieve an unpromoted type is unsupported, even though the function is being called with the unpromoted type due to the more specific function prototype. The reason for this is to retrieve arguments of different types at runtime, as part of an obj-c interpreter, where one function will be reused for all different types of methods. This would ideally be architecture-independent (so that, if nothing else, the same code works on the simulator and on the device), although if there is no way to do this then device-specific fixes will be accepted.

    Read the article

  • How to reliably specialize template with intptr_t in 32 and 64 bit environments?

    - by vava
    I have a template I want to specialize with two int types, one of them plain old int and the other intptr_t. On a 64-bit platform they have different sizes and I can do that with ease, but on 32-bit both types are the same and the compiler throws a redefinition error. What can I do to fix it, other than disabling one of the definitions with the preprocessor? Some code as an example:

        template<typename T> type * convert();

        template<> type * convert<int>() {
            return getProperIntType(sizeof(int));
        }

        template<> type * convert<intptr_t>() {
            return getProperIntType(sizeof(intptr_t));
        }

        // this template can be specialized with non-integral types as well,
        // so I can't just use sizeof() as template parameter.
        template<> type * convert<void>() {
            return getProperVoidType();
        }

    Read the article

  • Cross-referencing UML models in VS 2010

    - by cheaster
    I am just starting to explore/use the UML modeling support in Visual Studio 2010 Ultimate. I have created two model projects within a single solution. Let's call them Model A and Model B. I have some data types (classes) defined in Model B. I want to use them as return types for operations in Model A. However, I cannot figure out how to make the types defined in Model B show up in Model A when attempting to set the return type on an operation. Any help/suggestions would be greatly appreciated! Thanks!

    Read the article

  • powershell folder stats

    - by huppy_doodoo
    Hi all, I keep all modules of our system in one dir (e.g. All\ModuleA; All\ModuleB). I want to see what types of files are most numerous and take up the most space, by module. So, I'd like output along the lines of:

        ModName,java-count,java-size,xml-count,xml-size,png-count,png-size...
        ModuleA,30,0.2,100,2.3,0,0,...
        ModuleB,21,0.1,20,0.7,1,1.2

    Not all modules have files of all types, so this will only work if I list all types for all modules (with lots of zeros). I have something that almost works, but it's hideous, verbose and inefficient. I'm sure someone can help me see the light :-) (which, by the way, could be a piece of freeware software that does this out of the box; I only chose to do this in PowerShell out of interest). Thanks

    Read the article

  • Persisting Serializable Objects in Hibernate

    - by VeeArr
    I am attempting to persist objects that contain some large Serializable types. I want Hibernate to automatically generate my DDL (using Hibernate annotations). For the most part, this works, but the default database column type used by Hibernate when persisting these types is tinyblob. Unfortunately, this causes crashes when attempting to persist my class, because these types will not fit within the length of tinyblob. However, if I manually set the type (using @Column(columnDefinition="longblob")), it works fine. Is there any way to make the default binary type longblob instead of tinyblob, so that I don't need to manually specify the @Column annotation on each field?

    Read the article

  • MINSCN and Cache Fusion Read Consistency

    - by Liu Maclean
    A reader on Ask Maclean Home asked about Past Images (PI) in RAC. The following experiment, run on an 11.2.0.3 two-node RAC, walks through how current, past-image and consistent-read (CR) copies of a block show up in the buffer cache:

        SQL> select * from v$version;

        BANNER
        --------------------------------------------------------------------------------
        Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
        PL/SQL Release 11.2.0.3.0 - Production
        CORE    11.2.0.3.0      Production
        TNS for Linux: Version 11.2.0.3.0 - Production
        NLSRTL Version 11.2.0.3.0 - Production

        SQL> select * from global_name;

        GLOBAL_NAME
        --------------------------------------------------------------------------------
        www.oracledatabase12g.com

        SQL> drop table test purge;
        Table dropped.

        SQL> alter system flush buffer_cache;
        System altered.

        SQL> create table test(id number);
        insert into test values(1);
        insert into test values(2);
        commit;

        /* Use rowid to confirm that both rows of TEST live in the same block */
        select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from test;

        DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID)
        ------------------------------------ ------------------------------------
                                       89233                                    1
                                       89233                                    1

        SQL> alter system flush buffer_cache;
        System altered.

    Instance 1, Session A issues an UPDATE:

        SQL> update test set id=id+1 where id=1;
        1 row updated.

    Instance 1, Session B checks the buffer headers for that block in X$BH:

        SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0;

             STATE CR_SCN_BAS
        ---------- ----------
                 1          0
                 3    1227595

    The STATE column of X$BH records the buffer state; the possible values are:

        STATE NUMBER
        KCBBHFREE      0  buffer free
        KCBBHEXLCUR    1  buffer current (and if DFS locked X)
        KCBBHSHRCUR    2  buffer current (and if DFS locked S)
        KCBBHCR        3  buffer consistent read
        KCBBHREADING   4  Being read
        KCBBHMRECOVERY 5  media recovery (current & special)
        KCBBHIRECOVERY 6  Instance recovery (somewhat special)

    The states that matter here are state=1 (XCUR, exclusive current), state=2 (SCUR, shared current) and state=3 (CR, consistent read). If Instance 2 now updates the same block, a 'gc current block 2-way' transfer ships the current block to Instance 2, and the copy on Instance 1 is converted from a current block into a Past Image:

    Instance 2, Session C:

        SQL> update test set id=id+1 where id=2;
        1 row updated.

    Instance 2, Session D:

        SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0;

             STATE CR_SCN_BAS
        ---------- ----------
                 1          0
                 3    1227641
                 3    1227638

    The STATE=1 XCUR buffer is now held by Instance 2; on Instance 1 the former current copy has become a Past Image (STATE=8):

    Instance 1, Session B:

        SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0;

             STATE CR_SCN_BAS
        ---------- ----------
                 3    1227641
                 3    1227638
                 8          0
                 3    1227595

    Next, from Instance 1, Session A (the session that ran the original UPDATE) we query TEST again, with tracing enabled:

        SQL> alter session set events '10046 trace name context forever,level 8:10708 trace name context forever,level 103: trace[rac.*] disk high';
        Session altered.

        SQL> select * from test;

                ID
        ----------
                 2
                 2

        select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0;

             STATE CR_SCN_BAS
        ---------- ----------
                 3    1227716
                 3    1227713
                 8          0

    Before the SELECT, X$BH held CR copies at SCN versions 1227641, 1227638 and 1227595; afterwards there are two CR copies at brand-new SCN versions, 1227716 and 1227713. Oracle built fresh CR versions instead of reusing the ones already in the cache. The 10708 and rac.* trace of the SELECT shows why. It first records a Dynamic Sampling recursive query (TEST has no optimizer statistics), which itself needs a CR copy of the block and waits on 'gc cr block 2-way':

        PARSING IN CURSOR #140444679938584 len=337 dep=1 uid=0 oct=3 lid=0 tim=1335698913632292 hv=3345277572 ad='bc0e68c8' sqlid='baj7tjm3q9sn4'
        SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false') NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ NO_PARALLEL("TEST") FULL("TEST") NO_PARALLEL_INDEX("TEST") */ 1 AS C1, 1 AS C2 FROM "SYS"."TEST" "TEST") SAMPLESUB
        END OF STMT
        PARSE #140444679938584:c=1000,e=27630,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=1,plh=1950795681,tim=1335698913632252
        EXEC #140444679938584:c=0,e=44,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=1950795681,tim=1335698913632390
        *** 2012-04-29 07:28:33.632
        kclscrs: req=0 block=1/89233
        *** 2012-04-29 07:28:33.632
        kclscrs: bid=1:3:1:0:7:80:1:0:4:0:0:0:1:2:4:1:26:0:0:0:70:1a:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:4:3:2:1:2:0:3f:0:1c:86:2d:4:0:0:0:0:a2:3c:7c:b:70:1a:0:0:0:0:1:0:7a:f8:76:1d:1:2:dc:5:a9:fe:17:75:0:0:0:0:0:0:0:0:0:0:0:0:63:e5:0:0:0:0:0:0:10:0:0:0
        2012-04-29 07:28:33.632578 : kjbcrc[0x15c91.1 76896.0][9]
        2012-04-29 07:28:33.632616 : GSIPC:GMBQ: buff 0xba1e8f90, queue 0xbb79f278, pool 0x60013fa0, freeq 1, nxt 0xbb79f278, prv 0xbb79f278
        2012-04-29 07:28:33.632634 : kjbmscrc(0x15c91.1)seq 0x2 reqid=0x1c(shadow 0xb4bb4458,reqid x1c)mas@2(infosz 200)(direct 1)
        2012-04-29 07:28:33.632654 : kjbsentscn[0x0.12bbc1][to 2]
        2012-04-29 07:28:33.632669 : GSIPC:SENDM: send msg 0xba1e9000 dest x20001 seq 24026 type 32 tkts xff0000 mlen x17001a0
        2012-04-29 07:28:33.633385 : GSIPC:KSXPCB: msg 0xba1e9000 status 30, type 32, dest 2, rcvr 1
        *** 2012-04-29 07:28:33.633
        kclwcrs: wait=0 tm=689
        *** 2012-04-29 07:28:33.633
        kclwcrs: got 1 blocks from ksxprcv
        WAIT #140444679938584: nam='gc cr block 2-way' ela= 689 p1=1 p2=89233 p3=1 obj#=76896 tim=1335698913633418
        2012-04-29 07:28:33.633490 : kjbcrcomplete[0x15c91.1 76896.0][0]
        2012-04-29 07:28:33.633510 : kjbrcvdscn[0x0.12bbc1][from 2][idx
        2012-04-29 07:28:33.633527 : kjbrcvdscn[no bscn <= rscn 0x0.12bbc1][from 2]
        *** 2012-04-29 07:28:33.633
        kclwcrs: req=0 typ=cr(2) wtyp=2hop tm=689

    Note kjbsentscn[0x0.12bbc1][to 2]: 0x12bbc1 = 1227713, one of the CR SCN versions we saw in X$BH. A CR request for DBA 0x15c91.1 (obj#=76896) was sent to Instance 2 at SCN 1227713, and the kjbrcvdscn[no bscn <= rscn 0x0.12bbc1][from 2] line shows the foreground received SCN version 0x12bbc1 as the best version the holder could build. Following the CR server architecture, the actual SELECT behaves the same way:

        PARSING IN CURSOR #140444682869592 len=18 dep=0 uid=0 oct=3 lid=0 tim=1335698913635874 hv=1689401402 ad='b1a188f0' sqlid='c99yw1xkb4f1u'
        select * from test
        END OF STMT
        PARSE #140444682869592:c=4999,e=34017,p=0,cr=7,cu=0,mis=1,r=0,dep=0,og=1,plh=1357081020,tim=1335698913635870
        EXEC #140444682869592:c=0,e=23,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1357081020,tim=1335698913635939
        WAIT #140444682869592: nam='SQL*Net message to client' ela= 7 driver id=1650815232 #bytes=1 p3=0 obj#=0 tim=1335698913636071
        *** 2012-04-29 07:28:33.636
        kclscrs: req=0 block=1/89233
        *** 2012-04-29 07:28:33.636
        kclscrs: bid=1:3:1:0:7:83:1:0:4:0:0:0:1:2:4:1:26:0:0:0:70:1a:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:4:3:2:1:2:0:2:0:1c:86:2d:4:0:0:0:0:a2:3c:7c:b:70:1a:0:0:0:0:1:0:7d:f8:76:1d:1:2:dc:5:a9:fe:17:75:0:0:0:0:0:0:0:0:0:0:0:0:63:e5:0:0:0:0:0:0:10:0:0:0
        2012-04-29 07:28:33.636209 : kjbcrc[0x15c91.1 76896.0][9]
        2012-04-29 07:28:33.636228 : GSIPC:GMBQ: buff 0xba0e5d50, queue 0xbb79f278, pool 0x60013fa0, freeq 1, nxt 0xbb79f278, prv 0xbb79f278
        2012-04-29 07:28:33.636244 : kjbmscrc(0x15c91.1)seq 0x3 reqid=0x1d(shadow 0xb4bb4458,reqid x1d)mas@2(infosz 200)(direct 1)
        2012-04-29 07:28:33.636252 : kjbsentscn[0x0.12bbc4][to 2]
        2012-04-29 07:28:33.636358 : GSIPC:SENDM: send msg 0xba0e5dc0 dest x20001 seq 24029 type 32 tkts xff0000 mlen x17001a0
        2012-04-29 07:28:33.636861 : GSIPC:KSXPCB: msg 0xba0e5dc0 status 30, type 32, dest 2, rcvr 1
        *** 2012-04-29 07:28:33.637
        kclwcrs: wait=0 tm=865
        *** 2012-04-29 07:28:33.637
        kclwcrs: got 1 blocks from ksxprcv
        WAIT #140444682869592: nam='gc cr block 2-way' ela= 865 p1=1 p2=89233 p3=1 obj#=76896 tim=1335698913637294
        2012-04-29 07:28:33.637356 : kjbcrcomplete[0x15c91.1 76896.0][0]
        2012-04-29 07:28:33.637374 : kjbrcvdscn[0x0.12bbc4][from 2][idx
        2012-04-29 07:28:33.637389 : kjbrcvdscn[no bscn <= rscn 0x0.12bbc4][from 2]
        *** 2012-04-29 07:28:33.637
        kclwcrs: req=0 typ=cr(2) wtyp=2hop tm=865

    Again 'SELECT * FROM TEST' waits on 'gc cr block 2-way', and the foreground process receives from the remote LMS a CR copy at SCN version 0x12bbc4 = 1227716, the other new SCN seen in X$BH. So Instance 1 received two brand-new CR versions rather than serving either query from the CR copies already sitting in its own buffer cache. To see what limits this behaviour, change two hidden parameters:

        SQL> alter system set "_enable_minscn_cr"=false scope=spfile;
        System altered.

        SQL> alter system set "_db_block_max_cr_dba"=20 scope=spfile;
        System altered.

        SQL> startup force;
        ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance
        ORACLE instance started.
        Total System Global Area 1570009088 bytes
        Fixed Size                  2228704 bytes
        Variable Size             989859360 bytes
        Database Buffers          570425344 bytes
        Redo Buffers                7495680 bytes
        Database mounted.
        Database opened.

    With "_enable_minscn_cr"=false and "_db_block_max_cr_dba"=20 in place, repeat the test: update the row from Instance 2, Session C without committing, then query from Instance 1 so that Instance 1 must request a CR copy:

        SQL> update test set id=id+1 where id=2;   -- Instance 2
        1 row updated.

        SQL> select * From test;                   -- Instance 1

                ID
        ----------
                 1
                 2

    Instance 1's X$BH now shows:

        SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0;

             STATE CR_SCN_BAS
        ---------- ----------
                 3    1273080
                 3    1273071
                 3    1273041
                 3    1273039
                 8          0

    Each further UPDATE on Instance 2 followed by a SELECT on Instance 1 adds another CR version to Instance 1's cache:

        SQL> update test set id=id+1 where id=3;
        1 row updated.

        SQL> select * From test;

                ID
        ----------
                 1
                 2

        SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0;

             STATE CR_SCN_BAS
        ---------- ----------
                 3    1273091
                 3    1273080
                 3    1273071
                 3    1273041
                 3    1273039
                 8          0

        ...................

        SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0;

             STATE CR_SCN_BAS
        ---------- ----------
                 3    1273793
                 3    1273782
                 3    1273780
                 3    1273769
                 3    1273734
                 3    1273715
                 3    1273691
                 3    1273679
                 3    1273670
                 3    1273643
                 3    1273635
                 3    1273623
                 3    1273106
                 3    1273091
                 3    1273080
                 3    1273071
                 3    1273041
                 3    1273039
                 3    1273033

        19 rows selected.

        SQL> select state,cr_scn_bas from x$bh where file#=1 and dbablk=89233 and state!=0;

             STATE CR_SCN_BAS
        ---------- ----------
                 3    1274993

    With "_enable_minscn_cr" (enable/disable minscn optimization for CR) set to false and "_db_block_max_cr_dba" (maximum allowed number of CR buffers per dba) raised to 20, Instance 1 happily accumulates up to 19 CR copies of the one block. "_enable_minscn_cr" controls an optimization introduced in 11g: Oracle computes a minimum SCN that a CR copy must satisfy, and a CR block whose received SCN version falls below that minimum is not treated as reusable, which keeps stale CR versions from piling up in the buffer cache. Which SCN version a query actually needs is driven by its snapshot SCN, taken from the current SCN at the start of the query, so CR copies built for earlier queries generally cannot serve later ones. The number of CR copies kept per data block address is capped by "_db_block_max_cr_dba": set to 20 above, it allowed up to 19 CR versions of the block, while at its default of 6 the buffer cache keeps at most 6 CR copies of a given dba. Because the MINSCN rule disqualifies older CR versions, a new query typically cannot reuse an existing CR copy; instead the foreground process sends a CR request and the LMS on the holder instance builds and ships back the best CR version it can.

    Read the article

  • Android edtftpj/Pro SFTP heap worker problem

    - by Mr. Kakakuwa Bird
    Hi, I am using the edtftpj-pro 3.1 trial in my Android app to make an SFTP connection to a server. After a few connections to the server, with 5-6 file transfers each, my app crashes with the following exception. What could be causing the problem? I tried setParallelMode(false) on SSHFTPClient, but it did not help. The exception I'm getting is:
    05-31 18:28:12.661: ERROR/dalvikvm(589): HeapWorker is wedged: 10173ms spent inside Lcom/enterprisedt/net/j2ssh/sftp/SftpFileInputStream;.finalize()V 05-31 18:28:12.661: INFO/dalvikvm(589): DALVIK THREADS: 05-31 18:28:12.661: INFO/dalvikvm(589): "main" prio=5 tid=3 WAIT 05-31 18:28:12.661: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x4001b260 self=0xbd18 05-31 18:28:12.661: INFO/dalvikvm(589): | sysTid=589 nice=0 sched=0/0 cgrp=default handle=-1343993192 05-31 18:28:12.661: INFO/dalvikvm(589): at java.lang.Object.wait(Native Method) 05-31 18:28:12.661: INFO/dalvikvm(589): - waiting on <0x122d70 (a android.os.MessageQueue) 05-31 18:28:12.661: INFO/dalvikvm(589): at java.lang.Object.wait(Object.java:288) 05-31 18:28:12.661: INFO/dalvikvm(589): at android.os.MessageQueue.next(MessageQueue.java:148) 05-31 18:28:12.661: INFO/dalvikvm(589): at android.os.Looper.loop(Looper.java:110) 05-31 18:28:12.661: INFO/dalvikvm(589): at android.app.ActivityThread.main(ActivityThread.java:4363) 05-31 18:28:12.661: INFO/dalvikvm(589): at java.lang.reflect.Method.invokeNative(Native Method) 05-31 18:28:12.661: INFO/dalvikvm(589): at java.lang.reflect.Method.invoke(Method.java:521) 05-31 18:28:12.661: INFO/dalvikvm(589): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 05-31 18:28:12.661: INFO/dalvikvm(589): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 05-31 18:28:12.661: INFO/dalvikvm(589): at dalvik.system.NativeStart.main(Native Method) 05-31 18:28:12.671: INFO/dalvikvm(589): "Transport protocol 1" daemon prio=5 tid=29 NATIVE 05-31 18:28:12.671: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x44774768 self=0x3a7938 05-31 18:28:12.671: INFO/dalvikvm(589): | sysTid=605 nice=0 sched=0/0 cgrp=default handle=3834600 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.platform.OSNetworkSystem.receiveStreamImpl(Native Method) 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.platform.OSNetworkSystem.receiveStream(OSNetworkSystem.java:478) 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.net.PlainSocketImpl.read(PlainSocketImpl.java:565) 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.net.SocketInputStream.read(SocketInputStream.java:87) 05-31 18:28:12.671: INFO/dalvikvm(589): at org.apache.harmony.luni.net.SocketInputStream.read(SocketInputStream.java:67) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.io.BufferedInputStream.fillbuf(BufferedInputStream.java:157) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.io.BufferedInputStream.read(BufferedInputStream.java:346) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.io.BufferedInputStream.read(BufferedInputStream.java:341) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.transport.A.A((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.transport.A.B((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.transport.TransportProtocolCommon.processMessages((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at 
com.enterprisedt.net.j2ssh.transport.TransportProtocolCommon.startBinaryPacketProtocol((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.transport.TransportProtocolCommon.run((null):-1) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.671: INFO/dalvikvm(589): "StreamFrameSender" prio=5 tid=27 TIMED_WAIT 05-31 18:28:12.671: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x44750a60 self=0x3964d8 05-31 18:28:12.671: INFO/dalvikvm(589): | sysTid=603 nice=0 sched=0/0 cgrp=default handle=3761648 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Object.wait(Native Method) 05-31 18:28:12.671: INFO/dalvikvm(589): - waiting on <0x399478 (a com.corventis.gateway.ppp.StreamFrameSender) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Object.wait(Object.java:326) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.corventis.gateway.ppp.StreamFrameSender.run(StreamFrameSender.java:154) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.corventis.gateway.util.MonitoredRunnable.run(MonitoredRunnable.java:41) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.671: INFO/dalvikvm(589): "SftpActiveWorker" prio=5 tid=25 TIMED_WAIT 05-31 18:28:12.671: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x447522b0 self=0x398e00 05-31 18:28:12.671: INFO/dalvikvm(589): | sysTid=604 nice=0 sched=0/0 cgrp=default handle=3762704 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Object.wait(Native Method) 05-31 18:28:12.671: INFO/dalvikvm(589): - waiting on <0x3962d8 (a com.corventis.gateway.hostcommunicator.SftpActiveWorker) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Object.wait(Object.java:326) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.corventis.gateway.hostcommunicator.SftpActiveWorker.run(SftpActiveWorker.java:151) 05-31 18:28:12.671: INFO/dalvikvm(589): at com.corventis.gateway.util.MonitoredRunnable.run(MonitoredRunnable.java:41) 05-31 18:28:12.671: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.671: INFO/dalvikvm(589): "Thread-12" prio=5 tid=23 NATIVE 05-31 18:28:12.671: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x4474aca8 self=0x115690 05-31 18:28:12.671: INFO/dalvikvm(589): | sysTid=602 nice=0 sched=0/0 cgrp=default handle=878120 05-31 18:28:12.671: INFO/dalvikvm(589): at android.bluetooth.BluetoothSocket.acceptNative(Native Method) 05-31 18:28:12.681: INFO/dalvikvm(589): at android.bluetooth.BluetoothSocket.accept(BluetoothSocket.java:287) 05-31 18:28:12.681: INFO/dalvikvm(589): at android.bluetooth.BluetoothServerSocket.accept(BluetoothServerSocket.java:105) 05-31 18:28:12.681: INFO/dalvikvm(589): at android.bluetooth.BluetoothServerSocket.accept(BluetoothServerSocket.java:91) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.bluetooth.BluetoothManager.openPort(BluetoothManager.java:215) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.bluetooth.BluetoothManager.open(BluetoothManager.java:84) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.patchcommunicator.PatchCommunicator.open(PatchCommunicator.java:123) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.patchcommunicator.PatchCommunicatorRunnable.run(PatchCommunicatorRunnable.java:134) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.681: INFO/dalvikvm(589): "HfGatewayApplication" prio=5 tid=21 RUNNABLE 05-31 18:28:12.681: 
INFO/dalvikvm(589): | group="main" sCount=0 dsCount=0 s=N obj=0x4472d9b0 self=0x120928 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=601 nice=0 sched=0/0 cgrp=default handle=1264672 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.Deflate.deflateInit2(Deflate.java:~1361) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.Deflate.deflateInit(Deflate.java:1316) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.ZStream.deflateInit(ZStream.java:127) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.ZStream.deflateInit(ZStream.java:120) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.jcraft.jzlib.ZOutputStream.(ZOutputStream.java:62) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.zipfile.ZipStorer.addStream(ZipStorer.java:211) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.zipfile.ZipStorer.createZip(ZipStorer.java:127) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hostcommunicator.HostCommunicator.scanAndCompress(HostCommunicator.java:453) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hostcommunicator.HostCommunicator.doWork(HostCommunicator.java:1434) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hf.HfGatewayApplication.doWork(HfGatewayApplication.java:621) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hf.HfGatewayApplication.run(HfGatewayApplication.java:546) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.util.MonitoredRunnable.run(MonitoredRunnable.java:41) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.681: INFO/dalvikvm(589): "Thread-10" prio=5 tid=19 TIMED_WAIT 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x447287f8 self=0x1451b8 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=598 nice=0 sched=0/0 cgrp=default handle=1331920 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.VMThread.sleep(Native Method) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.sleep(Thread.java:1306) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.sleep(Thread.java:1286) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.util.Watchdog.run(Watchdog.java:167) 05-31 18:28:12.681: INFO/dalvikvm(589): at java.lang.Thread.run(Thread.java:1096) 05-31 18:28:12.681: INFO/dalvikvm(589): "Thread-9" prio=5 tid=17 RUNNABLE 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=Y obj=0x44722c90 self=0x114e20 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=597 nice=0 sched=0/0 cgrp=default handle=1200048 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.time.Time.currentTimeMillis(Time.java:~77) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.patchcommunicator.PatchCommunicatorState$1.run(PatchCommunicatorState.java:27) 05-31 18:28:12.681: INFO/dalvikvm(589): "Thread-8" prio=5 tid=15 RUNNABLE 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=Y obj=0x44722430 self=0x124dd0 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=596 nice=0 sched=0/0 cgrp=default handle=1199848 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.time.Time.currentTimeMillis(Time.java:~80) 05-31 18:28:12.681: INFO/dalvikvm(589): at com.corventis.gateway.hostcommunicator.HostCommunicatorState$1.run(HostCommunicatorState.java:35) 05-31 18:28:12.681: INFO/dalvikvm(589): "Binder Thread #2" prio=5 tid=13 NATIVE 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 
dsCount=0 s=N obj=0x4471ccc0 self=0x149b60 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=595 nice=0 sched=0/0 cgrp=default handle=1317992 05-31 18:28:12.681: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.681: INFO/dalvikvm(589): "Binder Thread #1" prio=5 tid=11 NATIVE 05-31 18:28:12.681: INFO/dalvikvm(589): | group="main" sCount=1 dsCount=0 s=N obj=0x447159a8 self=0x123298 05-31 18:28:12.681: INFO/dalvikvm(589): | sysTid=594 nice=0 sched=0/0 cgrp=default handle=1164896 05-31 18:28:12.681: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.691: INFO/dalvikvm(589): "JDWP" daemon prio=5 tid=9 VMWAIT 05-31 18:28:12.691: INFO/dalvikvm(589): | group="system" sCount=1 dsCount=0 s=N obj=0x4470f2a0 self=0x141a90 05-31 18:28:12.691: INFO/dalvikvm(589): | sysTid=593 nice=0 sched=0/0 cgrp=default handle=1316864 05-31 18:28:12.691: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.691: INFO/dalvikvm(589): "Signal Catcher" daemon prio=5 tid=7 VMWAIT 05-31 18:28:12.691: INFO/dalvikvm(589): | group="system" sCount=1 dsCount=0 s=N obj=0x4470f1e8 self=0x124970 05-31 18:28:12.691: INFO/dalvikvm(589): | sysTid=592 nice=0 sched=0/0 cgrp=default handle=1316800 05-31 18:28:12.691: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.691: INFO/dalvikvm(589): "HeapWorker" daemon prio=5 tid=5 MONITOR 05-31 18:28:12.691: INFO/dalvikvm(589): | group="system" sCount=1 dsCount=0 s=N obj=0x431b4550 self=0x141670 05-31 18:28:12.691: INFO/dalvikvm(589): | sysTid=591 nice=0 sched=0/0 cgrp=default handle=1316400 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpSubsystemClient.closeHandle((null):~-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpSubsystemClient.closeFile((null):-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpFile.close((null):-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpFileInputStream.close((null):-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at com.enterprisedt.net.j2ssh.sftp.SftpFileInputStream.finalize((null):-1) 05-31 18:28:12.691: INFO/dalvikvm(589): at dalvik.system.NativeStart.run(Native Method) 05-31 18:28:12.691: ERROR/dalvikvm(589): VM aborting 05-31 18:28:12.801: INFO/DEBUG(49): * ** * ** * ** * ** * ** * 05-31 18:28:12.801: INFO/DEBUG(49): Build fingerprint: 'google/passion/passion/mahimahi:2.1-update1/ERE27/24178:user/release-keys' 05-31 18:28:12.801: INFO/DEBUG(49): pid: 589, tid: 601 com.corventis.gateway.hf <<< 05-31 18:28:12.801: INFO/DEBUG(49): signal 11 (SIGSEGV), fault addr deadd00d 05-31 18:28:12.801: INFO/DEBUG(49): r0 00000026 r1 afe13329 r2 afe13329 r3 00000000 05-31 18:28:12.801: INFO/DEBUG(49): r4 ad081f50 r5 400091e8 r6 009b3a6a r7 00000000 05-31 18:28:12.801: INFO/DEBUG(49): r8 000002e8 r9 ad082ba0 10 ad082ba0 fp 00000000 05-31 18:28:12.801: INFO/DEBUG(49): ip deadd00d sp 46937c58 lr afe14373 pc ad035b4c cpsr 20000030 05-31 18:28:12.851: INFO/DEBUG(49): #00 pc 00035b4c /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #01 pc 00044d7c /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #02 pc 000162e4 /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #03 pc 00016b60 /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #04 pc 00016ce0 /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #05 pc 00057b64 /system/lib/libdvm.so 05-31 18:28:12.861: INFO/DEBUG(49): #06 pc 00057cc0 
/system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #07 pc 00057dd4 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #08 pc 00012ffc /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #09 pc 00019338 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #10 pc 00018804 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #11 pc 0004eed0 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #12 pc 0004eef8 /system/lib/libdvm.so 05-31 18:28:12.871: INFO/DEBUG(49): #13 pc 000426d4 /system/lib/libdvm.so 05-31 18:28:12.881: INFO/DEBUG(49): #14 pc 0000fd74 /system/lib/libc.so 05-31 18:28:12.881: INFO/DEBUG(49): #15 pc 0000f840 /system/lib/libc.so 05-31 18:28:12.881: INFO/DEBUG(49): code around pc: 05-31 18:28:12.881: INFO/DEBUG(49): ad035b3c 58234808 b1036b9b f8df4798 2026c01c 05-31 18:28:12.881: INFO/DEBUG(49): ad035b4c 0000f88c ef52f7d8 0004c428 fffe631c 05-31 18:28:12.881: INFO/DEBUG(49): ad035b5c fffe94f4 000002f8 deadd00d f8dfb40e 05-31 18:28:12.881: INFO/DEBUG(49): code around lr: 05-31 18:28:12.881: INFO/DEBUG(49): afe14360 686768a5 f9b5e008 b120000c 46289201 05-31 18:28:12.881: INFO/DEBUG(49): afe14370 9a014790 35544306 37fff117 6824d5f3 05-31 18:28:12.881: INFO/DEBUG(49): afe14380 d1ed2c00 bdfe4630 00026ab0 000000b4 05-31 18:28:12.881: INFO/DEBUG(49): stack: 05-31 18:28:12.881: INFO/DEBUG(49): 46937c18 00000015 05-31 18:28:12.881: INFO/DEBUG(49): 46937c1c afe13359 /system/lib/libc.so 05-31 18:28:12.881: INFO/DEBUG(49): 46937c20 afe3b02c /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c24 afe3afd8 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c28 00000000 05-31 18:28:12.891: INFO/DEBUG(49): 46937c2c afe14373 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c30 afe13329 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c34 afe13329 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c38 afe13380 /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c3c ad081f50 /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c40 400091e8 /dev/ashmem/mspace/dalvik-heap/zygote/0 (deleted) 05-31 18:28:12.891: INFO/DEBUG(49): 46937c44 009b3a6a 05-31 18:28:12.891: INFO/DEBUG(49): 46937c48 00000000 05-31 18:28:12.891: INFO/DEBUG(49): 46937c4c afe1338d /system/lib/libc.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c50 df002777 05-31 18:28:12.891: INFO/DEBUG(49): 46937c54 e3a070ad 05-31 18:28:12.891: INFO/DEBUG(49): #00 46937c58 ad06f573 /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c5c ad044d81 /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): #01 46937c60 000027bd 05-31 18:28:12.891: INFO/DEBUG(49): 46937c64 00000000 05-31 18:28:12.891: INFO/DEBUG(49): 46937c68 463b6ab4 /data/dalvik-cache/data@[email protected]@classes.dex 05-31 18:28:12.891: INFO/DEBUG(49): 46937c6c 463d1ecf /data/dalvik-cache/data@[email protected]@classes.dex 05-31 18:28:12.891: INFO/DEBUG(49): 46937c70 00140450 [heap] 05-31 18:28:12.891: INFO/DEBUG(49): 46937c74 ad041d2b /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c78 ad082f2c /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c7c ad06826c /system/lib/libdvm.so 05-31 18:28:12.891: INFO/DEBUG(49): 46937c80 00140450 [heap] 05-31 18:28:12.891: INFO/DEBUG(49): 46937c84 00000000 05-31 18:28:12.891: INFO/DEBUG(49): 46937c88 000002f8 05-31 18:28:12.891: INFO/DEBUG(49): 46937c8c 400091e8 /dev/ashmem/mspace/dalvik-heap/zygote/0 (deleted) 05-31 18:28:12.891: INFO/DEBUG(49): 46937c90 ad081f50 /system/lib/libdvm.so 05-31 18:28:12.891: 
INFO/DEBUG(49): 46937c94 000002f8 05-31 18:28:12.891: INFO/DEBUG(49): 46937c98 00002710 05-31 18:28:12.891: INFO/DEBUG(49): 46937c9c ad0162e8 /system/lib/libdvm.so Thanks & Regards,

    Read the article

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
    We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model, and in the process introduced tiled_index and tiled_grid. This is part 2.

    tile_static memory

    You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, but threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array:

        // each tile of threads has its own copy of locA,
        // shared among the threads of the tile
        tile_static float locA[16][16];

    Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors.

    tile_barrier

    In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait.

        15: // Given a tiled_index object named t_idx
        16: t_idx.barrier.wait();
        17: // more code

    In the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code such that there is a chance that not all threads could reach line 16, then the code above would be illegal.

    Tiled Matrix Multiplication Example – part 2

    So now that we have added the concepts of tile_static and tile_barrier to our understanding, let me rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code.
        01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
        02: {
        03:   static const int TS = 16;
        04:   array_view<const float,2> a(M, W, vA);
        05:   array_view<const float,2> b(W, N, vB);
        06:   array_view<writeonly<float>,2> c(M,N,vC);
        07:   parallel_for_each(c.grid.tile< TS, TS >(),
        08:     [=] (tiled_index< TS, TS> t_idx) restrict(direct3d)
        09:   {
        10:     int row = t_idx.local[0]; int col = t_idx.local[1];
        11:     float sum = 0.0f;
        12:     for (int i = 0; i < W; i += TS) {
        13:       tile_static float locA[TS][TS], locB[TS][TS];
        14:       locA[row][col] = a(t_idx.global[0], col + i);
        15:       locB[row][col] = b(row + i, t_idx.global[1]);
        16:       t_idx.barrier.wait();
        17:       for (int k = 0; k < TS; k++)
        18:         sum += locA[row][k] * locB[k][col];
        19:       t_idx.barrier.wait();
        20:     }
        21:     c[t_idx.global] = sum;
        22:   });
        23: }

    Notice that all the code up to line 9 is the same as per the changes we made in part 1 of the tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference being that those lines use new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b), to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time. May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool.

    Why would I introduce this tiling complexity into my code?

    Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, ensure the number of threads we schedule are correctly divisible with the tile size, had to use a tiled_index instead of a normal index, and had to understand tile_barrier and to figure out where we need to use it, and double the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory. Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g.
    algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you already have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest to make it 250 times faster. Also algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Also note that in future releases, we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this - my apologies for taking such a long time to write the next Part. Without further ado, here goes. This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration.

    So, Why Good?

    Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn't matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That's a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware / software bugs, physical data corruptions, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, human errors such as accidental deletion of online redo log files - it doesn't matter - you can have that "Om - peace" look on your face and then you can fail over to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not so imaginary conversation:

    IT Manager: Well, what's the status?
    You: John is doing the trace analysis on the storage array.
    IT Manager: So? How long is that gonna take?
    You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>.
    IT Manager: So, no root cause yet?
    You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take.
    IT Manager: Darn it - the site is down!
    You: Not really …
    IT Manager: What do you mean?
    You: John is stuck, but Sreeni has already done a failover to the Data Guard standby.
    IT Manager: Whoa, whoa - wait! Failover means we lost some data, why did you do this without letting the Business group know?
    You: We didn't lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production – we just fail over. No data loss, and we are up and running in minutes. The Business guys don't need to know.
    IT Manager: Wow! Are we great or what!!
    You: I guess …

    Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, "TANSTAAFL", or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things, if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA.

    How Far Can We Go?

    Someone may say at this point - well, I can't use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go?
    Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don't like to hear - "It depends!" It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and the Oracle Database 11.2 High Availability Best Practices manual, talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact of less than 5% for a busy OLTP application. Even if you can't deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located 1000s of miles away, here's another nifty way to boost your HA. Have a local standby, configured SYNC. How local is "local"? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can fail over. Very fast. In seconds. Wait - did I say "seconds"? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho' that this time you won't have to wait for another year for this.

    Read the article

  • Creating packages in code – Execute SQL Task

    The Execute SQL Task is for obvious reasons very well used, so I thought that if you are building packages in code the chances are you will be using it. The basic features of the task are quite straightforward to use: add the task and set some properties, just like any other. When you start interacting with variables, though, it can be a little harder to grasp, so these samples should see you through. Some of these more advanced features are explained in much more detail in our ever popular post The Execute SQL Task; here I'll just be showing you how to implement them in code. The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class, which is mentioned after the code block. This first sample just shows adding the task, setting the basic properties for a connection and of course an SQL statement.

        Package package = new Package();
        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");
        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;
        // Set required properties
        taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID);
        taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects");

    For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too.

    Returning a single value with a Result Set

    The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask), it makes coding much easier as we have compile-time validation of any property and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value, one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.
        Package package = new Package();
        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");
        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;
        // Add variable to hold result value
        package.Variables.Add("Variable", false, "User", 0);
        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'";
        // Set single row result set
        task.ResultSetType = ResultSetType.ResultSetType_SingleRow;
        // Add result set binding, map the id column to variable
        task.ResultSetBindings.Add();
        IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0);
        resultBinding.ResultName = "id";
        resultBinding.DtsVariableName = "User::Variable";

    For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just a variation on this theme; set the property and map the result binding as required.

    Parameter Mapping for SQL Statements

    This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder.

        Package package = new Package();
        // Add the SQL OLE-DB connection
        ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master");
        // Add the SQL Task
        package.Executables.Add("STOCK:SQLTask");
        // Get the task host wrapper
        TaskHost taskHost = package.Executables[0] as TaskHost;
        // Get the task object
        ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
        // Set core properties
        task.Connection = sqlConnection.Name;
        task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?";
        // Add variable to hold parameter value
        package.Variables.Add("Variable", false, "User", "sysrowsets");
        // Add input parameter binding
        task.ParameterBindings.Add();
        IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0);
        parameterBinding.DtsVariableName = "User::Variable";
        parameterBinding.ParameterDirection = ParameterDirections.Input;
        parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR;
        parameterBinding.ParameterName = "0";
        parameterBinding.ParameterSize = 255;

    For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You'll notice the data type has to be specified for the parameter via the IDTSParameterBinding.DataType property, and these type codes are connection specific too. The enumeration I wrote several years ago, shown below, was probably done by reverse engineering a package and also the API header file, but I recently found a very handy post that covers more connections as well for exactly this: Setting the DataType of IDTSParameterBinding objects (Execute SQL Task).

        /// <summary>
        /// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
/// </summary> private enum OleDBDataTypes { BYTE = 0x11, CURRENCY = 6, DATE = 7, DB_VARNUMERIC = 0x8b, DBDATE = 0x85, DBTIME = 0x86, DBTIMESTAMP = 0x87, DECIMAL = 14, DOUBLE = 5, FILETIME = 0x40, FLOAT = 4, GUID = 0x48, LARGE_INTEGER = 20, LONG = 3, NULL = 1, NUMERIC = 0x83, NVARCHAR = 130, SHORT = 2, SIGNEDCHAR = 0x10, ULARGE_INTEGER = 0x15, ULONG = 0x13, USHORT = 0x12, VARCHAR = 0x81, VARIANT_BOOL = 11 } Download Sample code ExecSqlPackage.cs (10KB)

    Read the article

  • MySQL Syslog Audit Plugin

    - by jonathonc
This post shows the construction process of the Syslog Audit plugin that was presented at MySQL Connect 2012. It is based on an environment that has the appropriate development tools enabled, including gcc, g++ and cmake. It also assumes you have downloaded the MySQL source code (5.5.16 or higher) and have compiled and installed the system into the /usr/local/mysql directory ready for use. The information provided below is designed to show the different components that make up a plugin, and specifically an audit type plugin, and how it comes together to be used within the MySQL service. The MySQL Reference Manual contains information regarding the plugin API and how it can be used, so please refer there for more detailed information. The code in this post is designed to give the simplest information necessary, so handling every return code, managing race conditions etc. is not part of this example code. Let's start by looking at the most basic implementation of our plugin code as seen below:

/*
   Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
   Author:  Jonathon Coombes
   Licence: GPL
   Description: An auditing plugin that logs to syslog and
                can adjust the loglevel via the system variables.
*/
#include <stdio.h>
#include <string.h>
#include <mysql/plugin_audit.h>
#include <syslog.h>

There is a commented header detailing copyright/licencing and meta-data information and then the include headers. The two important include statements for our plugin are the syslog.h header, which gives us the structures for syslog, and the plugin_audit.h include which has details regarding the audit-specific plugin API. Note that we do not need to include the general plugin header plugin.h, as this is done within the plugin_audit.h file already. To implement our plugin within the current source tree we need to add it into our source code and compile:

> cd /usr/local/src/mysql-5.5.28/plugin
> mkdir audit_syslog
> cd audit_syslog

A simple CMakeLists.txt file is created to manage the plugin compilation:

MYSQL_ADD_PLUGIN(audit_syslog audit_syslog.cc MODULE_ONLY)

Run the cmake command at the top level of the source and then you can compile the plugin using the 'make' command. This results in a compiled audit_syslog.so library, but currently it is not much use to MySQL as no API level has been defined to communicate with the MySQL service. Now we need to define the general plugin structure that enables MySQL to recognise the library as a plugin and be able to install/uninstall it and have it show up in the system. The structure is defined in the plugin.h file in the MySQL source code.
/*
  Plugin library descriptor
*/
mysql_declare_plugin(audit_syslog)
{
  MYSQL_AUDIT_PLUGIN,           /* plugin type          */
  &audit_syslog_descriptor,     /* descriptor handle    */
  "audit_syslog",               /* plugin name          */
  "Author Name",                /* author               */
  "Simple Syslog Audit",        /* description          */
  PLUGIN_LICENSE_GPL,           /* licence              */
  audit_syslog_init,            /* init function        */
  audit_syslog_deinit,          /* deinit function      */
  0x0001,                       /* plugin version       */
  NULL,                         /* status variables     */
  NULL,                         /* system variables     */
  NULL,                         /* no reserves          */
  0,                            /* no flags             */
}
mysql_declare_plugin_end;

The general plugin descriptor above is standard for all plugin types in MySQL. The plugin type is defined along with the init/deinit functions and interface methods into the system for sharing information, and various other metadata. The descriptors have an internally recognised version number so that plugins can be matched against the API on the running server. The other details are usually related to the type-specific methods and structures to implement the plugin. Each plugin also has a type-specific descriptor, which details how the plugin is implemented for the specific purpose of that plugin type.

/*
  Plugin type-specific descriptor
*/
static struct st_mysql_audit audit_syslog_descriptor=
{
  MYSQL_AUDIT_INTERFACE_VERSION,                        /* interface version    */
  NULL,                                                 /* release_thd function */
  audit_syslog_notify,                                  /* notify function      */
  { (unsigned long) MYSQL_AUDIT_GENERAL_CLASSMASK |
                    MYSQL_AUDIT_CONNECTION_CLASSMASK }  /* class mask           */
};

In this particular case, the release_thd function has not been defined as it is not required. The important method for auditing is the notify function, which is activated when an event occurs on the system. The notify function is designed to activate on an event, and the implementation determines how the event is handled. For the audit_syslog plugin, the use of the syslog feature sends all events to the syslog for recording. The class mask allows us to determine what types of events are seen by the notify function. There are currently two major types of event:

1. General Events: This includes general logging, errors, status and result type events. This is the main one for tracking the queries and operations on the database.
2. Connection Events: This group is based around user logins. It monitors connections and disconnections, but also if somebody changes user while connected.

With most audit plugins, the principle behind the plugin is to track changes to the system over time, and counters can be an important part of this process. The next step is to define and initialise the counters that are used to track the events in the service. There are 3 counters defined in total for our plugin: the number of general events, the number of connection events and the total number of events.
static volatile int total_number_of_calls;
/* Count MYSQL_AUDIT_GENERAL_CLASS event instances */
static volatile int number_of_calls_general;
/* Count MYSQL_AUDIT_CONNECTION_CLASS event instances */
static volatile int number_of_calls_connection;

The init and deinit functions for the plugin are there to be called when the plugin is activated and when it is terminated. These offer the best place to initialise the counters for our plugin:

/*
  Initialize the plugin at server start or plugin installation.
*/
static int audit_syslog_init(void *arg __attribute__((unused)))
{
    openlog("mysql_audit:",LOG_PID|LOG_PERROR|LOG_CONS,LOG_USER);
    total_number_of_calls= 0;
    number_of_calls_general= 0;
    number_of_calls_connection= 0;
    return(0);
}

The init function does a call to openlog to initialise the syslog functionality. The parameters are the service to log under ("mysql_audit" in this case), the syslog flags and the facility for the logging. Then each of the counters is initialised to zero and success is returned. If the init function is not defined, it will return success by default.

/*
  Terminate the plugin at server shutdown or plugin deinstallation.
*/
static int audit_syslog_deinit(void *arg __attribute__((unused)))
{
    closelog();
    return(0);
}

The deinit function will simply close our syslog connection and return success. Note that the syslog functionality is part of the glibc libraries and does not require any external dependencies. The function names are what we define in the general plugin structure, so these have to match, otherwise there will be errors. The next step is to implement the event notifier function that was defined in the type-specific descriptor (audit_syslog_descriptor), which is audit_syslog_notify.

/* Event notifier function */
static void audit_syslog_notify(MYSQL_THD thd __attribute__((unused)), unsigned int event_class, const void *event)
{
  total_number_of_calls++;
  if (event_class == MYSQL_AUDIT_GENERAL_CLASS)
  {
    const struct mysql_event_general *event_general= (const struct mysql_event_general *) event;
    number_of_calls_general++;
    syslog(audit_loglevel,"%lu: User: %s  Command: %s  Query: %s\n",
           event_general->general_thread_id,
           event_general->general_user,
           event_general->general_command,
           event_general->general_query );
  }
  else if (event_class == MYSQL_AUDIT_CONNECTION_CLASS)
  {
    const struct mysql_event_connection *event_connection= (const struct mysql_event_connection *) event;
    number_of_calls_connection++;
    syslog(audit_loglevel,"%lu: User: %s@%s[%s]  Event: %d  Status: %d\n",
           event_connection->thread_id,
           event_connection->user,
           event_connection->host,
           event_connection->ip,
           event_connection->event_subclass,
           event_connection->status );
  }
}

In the case of an event, the notifier function is called. The first step is to increment the total number of events that have occurred in our database. The event argument is then cast into the appropriate event structure depending on the class type, either a general event or a connection event. The event type counters are incremented and details are sent via the syslog() function out to the system log. There are going to be different line formats and information returned since the general events have different data compared to the connection events, even though some of the details overlap, for example, user, thread id, host etc. On compiling the code now, there should be no errors and the resulting audit_syslog.so can be loaded into the server and is ready to use.
Log into the server and type:

mysql> INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so';

This will install the plugin and will start updating the syslog immediately. Note that the audit plugin attaches to the immediate thread and cannot be uninstalled while that thread is active. This means that you cannot run the UNINSTALL command until you log into a different connection (thread) on the server. Once the plugin is loaded, the system log will show output such as the following:

Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'
Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'
Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: show tables
Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: show tables
Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: select * from t1
Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: select * from t1

It appears that two of each event is being shown, but in actuality these are two separate event sub-types: the result event and the status event. This could be refined further by changing the audit_syslog_notify function to handle the different event sub-types in a different manner (a sketch of this refinement appears at the end of this article). So far, it seems that the logging is working, with events showing up in the syslog output. The issue now is that the counters created earlier to track the number of events by type are not accessible while the plugin is running. Instead there needs to be a way to expose the plugin-specific information to the service and vice versa. This could be done via the information_schema plugin API, but for something as simple as counters, the obvious choice is the system status variables. This is done using the standard structure and the declaration:

/*
  Plugin status variables for SHOW STATUS
*/
static struct st_mysql_show_var audit_syslog_status[]=
{
  { "Audit_syslog_total_calls",
    (char *) &total_number_of_calls,
    SHOW_INT },
  { "Audit_syslog_general_events",
    (char *) &number_of_calls_general,
    SHOW_INT },
  { "Audit_syslog_connection_events",
    (char *) &number_of_calls_connection,
    SHOW_INT },
  { 0, 0, SHOW_INT }
};

The structure is simply the name that will be displayed in the mysql service, the address of the associated variable, and the data type being used for the counter. It is finished with a blank structure to show that there are no more variables. Remember that status variables may have the same name as variables from other plugins, so it is considered appropriate to add the plugin name at the start of the status variable name to avoid confusion. Looking at the status variables in the mysql client shows something like the following:

mysql> show global status like "audit%";
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| Audit_syslog_connection_events | 1     |
| Audit_syslog_general_events    | 2     |
| Audit_syslog_total_calls       | 3     |
+--------------------------------+-------+
3 rows in set (0.00 sec)

The final connectivity piece for the plugin is to allow the interactive change of the logging level between the plugin and the system.
This requires the ability to send changes via the mysql service through to the plugin. This is done using the system variables interface, defining a single variable to keep track of the active logging level for the facility.

/* Plugin system variables for SHOW VARIABLES */
static MYSQL_SYSVAR_STR(loglevel, audit_loglevel,
                        PLUGIN_VAR_RQCMDARG,
                        "User can specify the log level for auditing",
                        audit_loglevel_check, audit_loglevel_update, "LOG_NOTICE");

static struct st_mysql_sys_var* audit_syslog_sysvars[] =
{
    MYSQL_SYSVAR(loglevel),
    NULL
};

So now the system variable 'loglevel' is defined for the plugin and associated with the global variable 'audit_loglevel'. The check or validation function is defined to make sure that no garbage values can be set on the variable, and the update function is used to save the new value to the variable. Note that the audit_syslog_sysvars structure is referenced in the general plugin descriptor, which is what links the plugin's variables into the system. Next comes the implementation of the validation function and the update function for the system variable. It is worth noting that if you have a simple numeric type such as an integer for the variable, the validate function is often not required, as MySQL will handle the automatic check and validation of simple types.

/* longest valid value */
#define MAX_LOGLEVEL_SIZE 100

/* hold the valid values */
static const char *possible_modes[]= {
    "LOG_ERROR",
    "LOG_WARNING",
    "LOG_NOTICE",
    NULL
};

static int audit_loglevel_check(
    THD*                        thd,    /*!< in: thread handle */
    struct st_mysql_sys_var*    var,    /*!< in: pointer to system
                                             variable */
    void*                       save,   /*!< out: immediate result
                                             for update function */
    struct st_mysql_value*      value)  /*!< in: incoming string */
{
    char buff[MAX_LOGLEVEL_SIZE];
    const char *str;
    const char **found;
    int length;

    length= sizeof(buff);
    if (!(str= value->val_str(value, buff, &length)))
        return 1;

    /*
        We need to return a pointer to a locally allocated value in "save".
        Here we pick to search for the supplied value in a global array of
        constant strings and return a pointer to one of them.
        The other possibility is to use the thd_alloc() function to allocate
        a thread local buffer instead of the global constants.
    */
    for (found= possible_modes; *found; found++)
    {
        if (!strcmp(*found, str))
        {
            *(const char**)save= *found;
            return 0;
        }
    }
    return 1;
}

The validation function simply takes the value being passed in via the SET GLOBAL VARIABLE command and checks whether it is one of the pre-defined values allowed in our possible_modes array. If it is found to be valid, then the value is assigned to the save variable ready for passing through to the update function.
static void audit_loglevel_update(
    THD*                        thd,        /*!< in: thread handle */
    struct st_mysql_sys_var*    var,        /*!< in: system variable
                                                 being altered */
    void*                       var_ptr,    /*!< out: pointer to
                                                 dynamic variable */
    const void*                 save)       /*!< in: pointer to
                                                 temporary storage */
{
    /* assign the new value so that the server can read it */
    *(char **) var_ptr= *(char **) save;

    /* assign the new value to the internal variable */
    audit_loglevel= *(char **) save;
}

Since all the validation has been done already, the update function is quite simple for this plugin. The first part is to update the system variable pointer so that the server can read the value. The second part is to update our own global plugin variable for tracking the value. Notice that the save variable is passed in as a void type to allow handling of various data types, so it must be cast to the appropriate data type when assigning it to the variables. Looking at how the latest changes affect the usage of the plugin and the interaction within the server shows:

mysql> show global variables like "audit%";
+-----------------------+------------+
| Variable_name         | Value      |
+-----------------------+------------+
| audit_syslog_loglevel | LOG_NOTICE |
+-----------------------+------------+
1 row in set (0.00 sec)

mysql> set global audit_syslog_loglevel="LOG_ERROR";
Query OK, 0 rows affected (0.00 sec)

mysql> show global status like "audit%";
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| Audit_syslog_connection_events | 1     |
| Audit_syslog_general_events    | 11    |
| Audit_syslog_total_calls       | 12    |
+--------------------------------+-------+
3 rows in set (0.00 sec)

mysql> show global variables like "audit%";
+-----------------------+-----------+
| Variable_name         | Value     |
+-----------------------+-----------+
| audit_syslog_loglevel | LOG_ERROR |
+-----------------------+-----------+
1 row in set (0.00 sec)

So now we have a plugin that will audit the events on the system and log the details to the system log. It allows for interaction to see the number of different events within the server details, and it provides a mechanism to change the logging level interactively via the standard system method of the SET command. A more complex auditing plugin may have more detailed code, but each of the above areas is what will be involved, simply expanded on to add more functionality. With the above skeleton code, it is now possible to create your own audit plugins to implement your own auditing requirements. If, however, you are not of the coding persuasion, then you could always consider the option of the MySQL Enterprise Audit plugin that is available to purchase.
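As a closing illustration of the sub-type refinement mentioned earlier, here is a minimal sketch of how the notify function might branch on the general event sub-class so that result and status events are treated differently (the connection branch is unchanged and omitted here). It assumes the mysql_event_general structure exposes an event_subclass member and that sub-class constants such as MYSQL_AUDIT_GENERAL_ERROR and MYSQL_AUDIT_GENERAL_RESULT are available in plugin_audit.h; the exact set varies between MySQL versions, so check the header shipped with your server before relying on it.

/* Hypothetical refinement of audit_syslog_notify: branch on the general
   event sub-class. Sub-class names are assumed from plugin_audit.h and
   may differ between MySQL versions. */
static void audit_syslog_notify(MYSQL_THD thd __attribute__((unused)),
                                unsigned int event_class,
                                const void *event)
{
    total_number_of_calls++;
    if (event_class == MYSQL_AUDIT_GENERAL_CLASS)
    {
        const struct mysql_event_general *event_general=
            (const struct mysql_event_general *) event;
        number_of_calls_general++;
        switch (event_general->event_subclass)
        {
        case MYSQL_AUDIT_GENERAL_ERROR:
            /* Errors are always worth logging at a high priority */
            syslog(LOG_ERR, "%lu: Error %d User: %s Query: %s\n",
                   event_general->general_thread_id,
                   event_general->general_error_code,
                   event_general->general_user,
                   event_general->general_query);
            break;
        case MYSQL_AUDIT_GENERAL_RESULT:
            /* Skip result events to avoid the doubled-up syslog
               lines seen in the earlier output */
            break;
        default:
            syslog(LOG_NOTICE, "%lu: User: %s Command: %s Query: %s\n",
                   event_general->general_thread_id,
                   event_general->general_user,
                   event_general->general_command,
                   event_general->general_query);
            break;
        }
    }
}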

    Read the article

  • NHibernate 2 Beginner's Guide Review

    - by Ricardo Peres
OK, here's the review I promised a while ago. This is a beginner's introduction to NHibernate, so if you already have some experience with NHibernate, you will notice it lacks a lot of concepts and information. It starts with a good description of NHibernate and why we would use it. It goes on describing basic mapping scenarios having primary keys generated with the HiLo or Identity algorithms, without actually explaining why we would choose one over the other. As for mapping, the book talks about XML mappings and provides a simple example of Fluent NHibernate, comparing it to its XML counterpart. When it comes to relations, it covers one-to-many/many-to-one and many-to-many, not one-to-one relations, and it only talks briefly about lazy loading, which is, IMO, an important concept. Only Bags are described, not any of the other collection types. The log4net configuration description gets its own chapter, which I find excessive. The chapter on configuration merely lists the most common properties for configuring NHibernate, both in XML and in code. Querying only talks about loading by ID (using Get, not Load) and using the Criteria API, on which a paging example is presented as well as some common filtering options (property equals/like/between, no examples on conjunction/disjunction, however). There's a chapter fully dedicated to ASP.NET, which explains how we can use NHibernate in web applications. It basically talks about ASP.NET concepts, though. Following it, another chapter explains how we can build our own ASP.NET providers (Membership, Role) using NHibernate. The available entity generators for NHibernate are referred to and evaluated in a chapter of their own; the list is fine (CodeSmith, nhib-gen, AjGenesis, Visual NHibernate, MyGeneration, NGen, NHModeler, Microsoft T4 (?) and hbm2net), and examples are provided whenever possible; however, I have some problems with some of the evaluations: for example, Visual NHibernate scores 5 out of 5 on Visual Studio integration, which simply does not exist! I suspect the author means to say that it can be launched from inside Visual Studio, but then, what can't? Finally, there's a chapter I really don't understand. It seems like a bag where a lot of things are thrown in, like NHibernate Burrow (which actually isn't explained at all), Blog.Net components, CSS template conversion and web.config settings related to the maximum request length for file uploads, ending with XML configuration with the help of GhostDoc. Like I said, the book is only good for absolute beginners; it does a fair job in explaining the very basics, but it lacks a lot of not-so-basic concepts. Among other things, it lacks:

- Inheritance mapping strategies (table per class hierarchy, table per class, table per concrete class)
- Load versus Get usage
- Other useful ISession methods
- First level cache (Identity Map pattern)
- Collection types other than Bag (Set, List, Map, IdBag, etc.)
- Fetch options
- User Types
- Filters
- Named queries
- LINQ examples
- HQL examples

And that's it! I hope you find this review useful. The link to the book site is https://www.packtpub.com/nhibernate-2-x-beginners-guide/book

    Read the article

  • VSDB to SSDT Part 2: SQL Server 2008 Server Project … with SSDT

    - by Etienne Giust
With Visual Studio 2012 and the use of SSDT technology, there is only one type of database project: SQL Server Database Project. With Visual Studio 2010, we used to have the SQL Server 2008 Server Project, which we used to define server-level objects, mostly logins and linked servers. A convenient wizard allowed for the creation of this type of project. It no longer exists. Here is how to create an equivalent of the SQL Server 2008 Server Project with Visual Studio 2012:

1. Create a new SQL Server Database Project: it will be created empty.
2. Create a new SQL Schema Compare (SQL menu item > Schema Compare > New Schema Comparison).
3. As a source, select any database on the SQL server you want to mimic.
4. Set the target to be your newly created Database Project.
5. In the Schema Compare options (cog-like icon), Object Types pane, set the options to include only the object types you want, typically the server-level ones (the original post illustrates the selection with a screenshot, not reproduced here).

Then, run the comparison, review and select your changes, and apply them to the project.

    Read the article

  • Online video tutorials for HTML 5

    - by Albers
Here are some of the best introductory HTML5 videos I have found online/for free.

MIX 2011:
- HTML5 for Skeptics - Scott Stansfield: channel9.msdn.com/Events/MIX/MIX11/EXT21
- Filling the HTML5 Gaps with Polyfills and Shims - Rey Bango: channel9.msdn.com/Events/MIX/MIX11/HTM04
- 50 Performance Tricks to Make Your HTML5 Web Sites Faster - Jason Weber: channel9.msdn.com/Events/MIX/MIX11/HTM01

TechEd 2011:
- HTML5 and CSS3 Techniques You Can Use Today - Todd Anglin: channel9.msdn.com/Events/TechEd/NorthAmerica/2011/DEV334

Google IO:
- HTML5 Showcase for Web Developers: The Wow and the How: www.youtube.com/watch?v=WlwY6_W4VG8

css-tricks:
- localStorage for Forms - Chris Coyier: css-tricks.com/video-screencasts/96-localstorage-for-forms/
- Best Practices with Dynamic Content - Chris Coyier (this one talks about hash tags; take a look at the History API too): css-tricks.com/video-screencasts/85-best-practices-dynamic-content/
- Overview of HTML5 Forms Types, Attributes, and Elements - Chris Coyier: css-tricks.com/video-screencasts/99-overview-of-html5-forms-types-attributes-and-elements/

Bruce Lawson - HTML5: Who, What, When, Why: www.ubelly.com/2011/10/bruce-lawson-html5-who-what-when-why/
Bruce Lawson is an evangelist for Opera, and in this video he provides an overview including the history & philosophy of HTML5.

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
We introduced a new ELF object type in Solaris 11 Update 1 called the ancillary object. This posting describes them, using material originally written during their development, from the PSARC case, and from the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64 bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64 bits. However, consider environments where:

- Hundreds of users may be executing the code on large shared systems. (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise.)
- The code is complex and finely tuned, and the original authors may no longer be available.
- The code is critical production code that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue.

Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

Design

The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a single logical object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.) Among the benefits of this approach are:

- There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.
- The section contents are identical in either case: there is no need to alter data to accommodate multiple files.
- It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.
- The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

Using Ancillary Objects (From the Solaris Linker and Libraries Guide)

By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections:

- To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.
- To support fine-grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes.
- To stay within the 4 Gbyte size limit of a 32-bit object. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.
- To limit the exposure of internal implementation details.

Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option.

$ ld ... -z ancillary[=outfile] ...
By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by supplying an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions:

- All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.
- All remaining non-allocable sections are written to the ancillary object.
- The following non-allocable sections are written to both the primary object and the ancillary object.

  .shstrtab        The section name string table.
  .symtab          The full non-dynamic symbol table.
  .symtab_shndx    The symbol table extended index section associated with .symtab.
  .strtab          The non-dynamic string table associated with .symtab.
  .SUNW_ancillary  Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or an ancillary object. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

$ cat hello.c
#include <stdio.h>

int
main(int argc, char **argv)
{
        (void) printf("hello, world\n");
        return (0);
}
$ cc -g -zancillary hello.c
$ file a.out a.out.anc
a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
$ ./a.out
hello, world

The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
Index  Section Name     Type            Primary Flags     Ancillary Flags               Primary Size  Ancillary Size
13     .text            PROGBITS        ALLOC EXECINSTR   ALLOC EXECINSTR SUNW_ABSENT   0x131         0
20     .data            PROGBITS        WRITE ALLOC       WRITE ALLOC SUNW_ABSENT       0x4c          0
21     .symtab          SYMTAB          0                 0                             0x450         0x450
22     .strtab          STRTAB          STRINGS           STRINGS                       0x1ad         0x1ad
24     .debug_info      PROGBITS        SUNW_ABSENT       0                             0             0x1a7
28     .shstrtab        STRTAB          STRINGS           STRINGS                       0x118         0x118
29     .SUNW_ancillary  SUNW_ancillary  0                 0                             0x30          0x30

The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

$ elfdump -T SUNW_ancillary a.out a.out.anc

a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                   value
       [0]  ANC_SUNW_CHECKSUM     0x8724
       [1]  ANC_SUNW_MEMBER       0x1        a.out
       [2]  ANC_SUNW_CHECKSUM     0x8724
       [3]  ANC_SUNW_MEMBER       0x1a3      a.out.anc
       [4]  ANC_SUNW_CHECKSUM     0xfbe2
       [5]  ANC_SUNW_NULL         0

a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                   value
       [0]  ANC_SUNW_CHECKSUM     0xfbe2
       [1]  ANC_SUNW_MEMBER       0x1        a.out
       [2]  ANC_SUNW_CHECKSUM     0x8724
       [3]  ANC_SUNW_MEMBER       0x1a3      a.out.anc
       [4]  ANC_SUNW_CHECKSUM     0xfbe2
       [5]  ANC_SUNW_NULL         0

The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which file is being examined, out of the larger set of files, by comparing the first checksum value to the checksum of each member that follows.

Debugger Access and Use of Ancillary Objects

Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and by the single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and its contents can be used to obtain a complete list of the files and to determine which of those files is the one currently being examined.
2. Create a section header array in memory, using the section header array from the object being examined as an initial template.
3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.

The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. A sketch of this merge loop appears at the end of this article.

Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.

ELF Implementation Details (From the Solaris Linker and Libraries Guide)

To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

Part IV ELF Application Binary Interface
Chapter 13: Object File Format

Object File Format

Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

e_type
Identifies the object file type, as listed in the following table.

Name               Value   Meaning
ET_NONE            0       No file type
ET_REL             1       Relocatable file
ET_EXEC            2       Executable file
ET_DYN             3       Shared object file
ET_CORE            4       Core file
ET_LOSUNW          0xfefe  Start operating system specific range
ET_SUNW_ANCILLARY  0xfefe  Ancillary object file
ET_HISUNW          0xfefd  End operating system specific range
ET_LOPROC          0xff00  Start processor-specific range
ET_HIPROC          0xffff  End processor-specific range

Sections

Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

...
sh_type
Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.

sh_flags
Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.
...

Table 13-5 ELF Section Types, sh_type

Name                  Value
. . .
SHT_LOSUNW            0x6fffffee
SHT_SUNW_ancillary    0x6fffffee
. . .

...
SHT_LOSUNW - SHT_HISUNW
Values in this inclusive range are reserved for Oracle Solaris OS semantics.

SHT_SUNW_ANCILLARY
Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.
...

Table 13-8 ELF Section Attribute Flags

Name                  Value
. . .
SHF_MASKOS            0x0ff00000
SHF_SUNW_NODISCARD    0x00100000
SHF_SUNW_ABSENT       0x00200000
SHF_SUNW_PRIMARY      0x00400000
SHF_MASKPROC          0xf0000000
. . .

...
SHF_SUNW_ABSENT
Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

SHF_SUNW_PRIMARY
The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.
...

Two members in the section header, sh_link and sh_info, hold special information, depending on section type.

Table 13-9 ELF sh_link and sh_info Interpretation

sh_type               sh_link                                            sh_info
. . .
SHT_SUNW_ANCILLARY    The section header index of the associated         0
                      string table.
. . .

Special Sections

Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

Table 13-10 ELF Special Sections

Name              Type                 Attribute
. . .
.SUNW_ancillary   SHT_SUNW_ancillary   None
. . .

...
.SUNW_ancillary
Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.
...

Ancillary Section

Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
typedef struct {
        Elf32_Word      a_tag;
        union {
                Elf32_Word      a_val;
                Elf32_Addr      a_ptr;
        } a_un;
} Elf32_Ancillary;

typedef struct {
        Elf64_Xword     a_tag;
        union {
                Elf64_Xword     a_val;
                Elf64_Addr      a_ptr;
        } a_un;
} Elf64_Ancillary;

For each object with this type, a_tag controls the interpretation of a_un.

a_val
These objects represent integer values with various interpretations.

a_ptr
These objects represent file offsets or addresses.

The following ancillary tags exist.

Table 13-NEW1 ELF Ancillary Array Tags

Name               Value  a_un
ANC_SUNW_NULL      0      Ignored
ANC_SUNW_CHECKSUM  1      a_val
ANC_SUNW_MEMBER    2      a_ptr

ANC_SUNW_NULL
Marks the end of the ancillary section.

ANC_SUNW_CHECKSUM
Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

ANC_SUNW_MEMBER
Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

Tag                 Meaning
ANC_SUNW_CHECKSUM   Checksum of this object
ANC_SUNW_MEMBER     Name of object #1
ANC_SUNW_CHECKSUM   Checksum for object #1
. . .
ANC_SUNW_MEMBER     Name of object N
ANC_SUNW_CHECKSUM   Checksum for object N
ANC_SUNW_NULL

An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.

Related Other Work

The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
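To make the debugger merge steps described above concrete, here is a minimal sketch of the merge loop, written against the public libelf API (elf_begin, elf_nextscn, gelf_getshdr). It is an illustration only, not code from the Solaris implementation: the file list is assumed to have already been read out of the .SUNW_ancillary section as described earlier, and SHF_SUNW_ABSENT is defined locally from the value given in Table 13-8 in case the build host's headers do not provide it.

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <vector>
#include <libelf.h>
#include <gelf.h>

/* Value from Table 13-8 above; defined here in case <sys/elf.h>
   on the build host does not know about ancillary objects. */
#ifndef SHF_SUNW_ABSENT
#define SHF_SUNW_ABSENT 0x00200000
#endif

/* Merge the section headers of a primary object and its ancillary
   objects into one in-memory array. Every file carries the same
   header array, so we keep each header only from the file where the
   section data is actually present (SHF_SUNW_ABSENT not set). */
static std::vector<GElf_Shdr>
merge_section_headers(const std::vector<const char *> &files)
{
    std::vector<GElf_Shdr> merged;

    for (const char *path : files) {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            err(1, "%s", path);
        Elf *elf = elf_begin(fd, ELF_C_READ, NULL);
        if (elf == NULL)
            errx(1, "%s: %s", path, elf_errmsg(-1));

        for (Elf_Scn *scn = NULL; (scn = elf_nextscn(elf, scn)) != NULL; ) {
            GElf_Shdr shdr;
            if (gelf_getshdr(scn, &shdr) == NULL)
                errx(1, "%s: %s", path, elf_errmsg(-1));
            /* Sections have the same index in every file, so the
               real section index can be used as the array slot. */
            size_t ndx = elf_ndxscn(scn);
            if (merged.size() <= ndx)
                merged.resize(ndx + 1);
            if ((shdr.sh_flags & SHF_SUNW_ABSENT) == 0)
                merged[ndx] = shdr;
        }
        elf_end(elf);
        close(fd);
    }
    return merged;
}

int main(int argc, char **argv)
{
    if (elf_version(EV_CURRENT) == EV_NONE)
        errx(1, "libelf out of date");
    /* For the example, take the primary and ancillary objects on the
       command line, e.g.: ./merge a.out a.out.anc. A real debugger
       would obtain this list from the .SUNW_ancillary section. */
    std::vector<const char *> files(argv + 1, argv + argc);
    std::vector<GElf_Shdr> merged = merge_section_headers(files);
    printf("%zu section headers merged\n", merged.size());
    return 0;
}

Compile with something like g++ merge.cc -lelf. The point of the sketch is only the shape of the loop: one shared header array, filled in file by file, taking each section's header from whichever file holds its data.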

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >