Search Results

Search found 8301 results on 333 pages for 'types'.

Page 177/333

  • Keyboard for programming

    - by exhuma
    This may seem a bit of a tangential topic. It's not directly related to actual code, but it is important for our line of work nevertheless. Over the years, I've switched keyboards a few times. All of them had slightly different key layouts. And I'm not talking about the language/locale layout, but the physical layout! Why not the locale layout? Well, quite frankly, that's easy to change via software. I personally have a German keyboard but have it set to the UK layout. Why? It's quite hard to find different layouts in the shops where I live, and even ordering is not always easy in the shops. So that leaves me with Internet shops. But I prefer to "test" my keyboards before buying.

    The most notable changes are:

    - Mangled "home key block": I saw this first on a Logitech keyboard, but it may have originated elsewhere.
    - Shape of the "Enter" key: I've seen three different cases so far: two lines high and wider at the top, two lines high and wider at the bottom, and one line high.
    - Shape of the Backspace key: I've seen two types so far: one "character" wide and two "characters" wide.
    - OS keys: Macs have the Option and Command keys; Windows keyboards have the Windows and Context Menu keys. Cherry even produced a Linux keyboard once (unfortunately I cannot find many details except news results). I assume a dedicated Linux keyboard would sport a Compose key and always have SysRq labelled as well (note that some standard layouts do this already).

    Obviously, all these differences mean that some keys have to be moved around the board a lot. Which means, if you are used to one layout and have to work on another, you hit the wrong keys quite often. As it happens, this is much more annoying for programmers than it is for people who write texts, mainly because the keys that get moved around are the special-character keys often used in programming. These hardware layouts also depend indirectly on where you buy the keyboard. Honestly, I haven't seen a keyboard with a one-line "Enter" key in Germany or Luxembourg. I may just have missed it, but that's how it looks to me, at least.

    A survey: I've seen some attempts at surveys in the style of "which keyboard is best for programming", but all of them, in my opinion, fail to use comparable sets. So I was wondering if it is possible to concoct a survey taking the above criteria into account, but ignoring key dimensions; that would be a bit overkill, I guess ;)

    From what I can see, there are the following types of physical layout:

    - Backspace: 2 characters wide; Enter: 2 lines, wider at the top
    - Backspace: 2 characters wide; Enter: 1 line
    - Backspace: 1 character wide; Enter: 2 lines, wider at the bottom

    Then there are the other possible permutations (home-key block, OS keys), which in total makes for quite a large list of categories. Now, I wonder: would anyone be interested in such a survey? I personally would, because I am looking for the perfect fit for me. If yes, then I could really use the help of anyone here to propose some models to include in the survey. Once I have some models for each category (I'd say at least 3 per category), I could go ahead and write up a survey, put it online, and let it collect data for a while. What do you think?


  • drupal_get_form is not passing along node array

    - by ElectronicBlacksmith
    I have not been able to get drupal_get_form to pass on the node data. Code snippets are below. The drupal_get_form documentation (api.drupal.org) states that it will pass on the extra parameters. I conclude that the node data is not being passed because (apparently) $node['language'] is not defined in hook_form, which causes $form['qqq'] not to be created, and thus the preview button shows up.

    My goal here is that the preview button shows up for the path "node/add/author" but doesn't show up for "milan/author/add". Any alternative method for achieving this goal would be helpful, but the question I want answered is the one in the preceding paragraph. Everything I've read indicates that it should work.

    This menu item:

    ```php
    $items['milan/author/add'] = array(
      'title' => 'Add Author',
      'page callback' => 'get_author_form',
      'access arguments' => array('access content'),
      'file' => 'author.pages.inc',
    );
    ```

    calls this code:

    ```php
    function get_author_form() {
      //return node_form(NULL,NULL);
      //return drupal_get_form('author_form');
      return author_ajax_form('author');
    }

    function author_ajax_form($type) {
      global $user;
      module_load_include('inc', 'node', 'node.pages');
      $types = node_get_types();
      $type = isset($type) ? str_replace('-', '_', $type) : NULL;
      // If a node type has been specified, validate its existence.
      if (isset($types[$type]) && node_access('create', $type)) {
        // Initialize settings:
        $node = array(
          'uid' => $user->uid,
          'name' => (isset($user->name) ? $user->name : ''),
          'type' => $type,
          'language' => 'bbb',
          'bbb' => 'TRUE',
        );
        $output = drupal_get_form($type .'_node_form', $node);
      }
      return $output;
    }
    ```

    And here is the hook_form and hook_form_alter code:

    ```php
    function author_form_author_node_form_alter(&$form, &$form_state) {
      $form['author'] = NULL;
      $form['taxonomy'] = NULL;
      $form['options'] = NULL;
      $form['menu'] = NULL;
      $form['comment_settings'] = NULL;
      $form['files'] = NULL;
      $form['revision_information'] = NULL;
      $form['attachments'] = NULL;
      if ($form["qqq"]) {
        $form['buttons']['preview'] = NULL;
      }
    }

    function author_form(&$node) {
      return make_author_form($node);
    }

    function make_author_form(&$node) {
      global $user;
      $type = node_get_types('type', $node);
      $node = author_make_title($node);
      drupal_set_breadcrumb(array(l(t('Home'), NULL), l(t($node->title), 'node/' . $node->nid)));
      $form['authorset'] = array(
        '#type' => 'fieldset',
        '#title' => t('Author'),
        '#weight' => -50,
        '#collapsible' => FALSE,
        '#collapsed' => FALSE,
      );
      $form['author_id'] = array(
        '#access' => user_access('create pd_recluse entries'),
        '#type' => 'hidden',
        '#default_value' => $node->author_id,
        '#weight' => -20,
      );
      $form['authorset']['last_name'] = array(
        '#type' => 'textfield',
        '#title' => t('Last Name'),
        '#maxlength' => 60,
        '#default_value' => $node->last_name,
      );
      $form['authorset']['first_name'] = array(
        '#type' => 'textfield',
        '#title' => t('First Name'),
        '#maxlength' => 60,
        '#default_value' => $node->first_name,
      );
      $form['authorset']['middle_name'] = array(
        '#type' => 'textfield',
        '#title' => t('Middle Name'),
        '#maxlength' => 60,
        '#default_value' => $node->middle_name,
      );
      $form['authorset']['suffix_name'] = array(
        '#type' => 'textfield',
        '#title' => t('Suffix Name'),
        '#maxlength' => 14,
        '#default_value' => $node->suffix_name,
      );
      $form['authorset']['body_filter']['body'] = array(
        '#access' => user_access('create pd_recluse entries'),
        '#type' => 'textarea',
        '#title' => 'Describe Author',
        '#default_value' => $node->body,
        '#required' => FALSE,
        '#weight' => -19,
      );
      $form['status'] = array('#type' => 'hidden', '#default_value' => '1');
      $form['promote'] = array('#type' => 'hidden', '#default_value' => '1');
      $form['name'] = array('#type' => 'hidden', '#default_value' => $user->name);
      $form['format'] = array('#type' => 'hidden', '#default_value' => '1');
      // NOTE in node_example there is some additional code here, not needed for this simple node type
      $thepath = 'milan/author';
      if ($_REQUEST["theletter"]) {
        $thepath .= "/" . $_REQUEST["theletter"];
      }
      if ($node['language']) {
        $thepath = 'milan/authorajaxclose';
        $form['qqq'] = array('#type' => 'hidden', '#default_value' => '1');
      }
      $form['#redirect'] = $thepath;
      return $form;
    }
    ```

    That menu path coincides with this theme (PHPTemplate).
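    A hedged aside from the editor, not from the original thread: in Drupal 6, node_form() casts the $node it receives to an object before invoking hook_form, so if that is the code path taken here, array syntax inside the builder would come up empty even though the data was passed. If that is the case, a sketch of the check would be:

    ```php
    // Hypothetical fix sketch: read the flag with object syntax inside
    // make_author_form(), since node_form() may have cast the array.
    if (isset($node->language)) {
      $thepath = 'milan/authorajaxclose';
      $form['qqq'] = array('#type' => 'hidden', '#default_value' => '1');
    }
    ```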


  • Covariance and Contravariance in C#

    - by edalorzo
    I will start by saying that I am a Java developer learning to program in C#. As such, I compare what I know with what I am learning. I have been playing with C# generics for a few hours now, and I have been able to reproduce in C# the same things I know from Java, with the exception of a couple of examples using covariance and contravariance. The book I am reading is not very good on the subject. I will certainly seek more info on the web, but while I do that, perhaps you can help me find a C# implementation for the following Java code. An example is worth a thousand words, and I was hoping that by looking at a good code sample I would be able to assimilate this more rapidly.

    Covariance. In Java I can do something like this:

    ```java
    public static double sum(List<? extends Number> numbers) {
        double summation = 0.0;
        for (Number number : numbers) {
            summation += number.doubleValue();
        }
        return summation;
    }
    ```

    I can use this code as follows:

    ```java
    List<Integer> myInts = asList(1, 2, 3, 4, 5);
    List<Double> myDoubles = asList(3.14, 5.5, 78.9);
    List<Long> myLongs = asList(1L, 2L, 3L);

    double result = 0.0;
    result = sum(myInts);
    result = sum(myDoubles);
    result = sum(myLongs);
    ```

    Now I did discover that C# supports covariance/contravariance only on interfaces, and only where they have been explicitly declared to do so (out). I think I was not able to reproduce this case because I could not find a common ancestor of all numbers, but I believe that I could have used IEnumerable to implement such a thing if a common ancestor existed, since IEnumerable is a covariant type. Right? Any thoughts on how to implement the list above? Just point me in the right direction. Is there any common ancestor of all numeric types?

    Contravariance. The contravariance example I tried was the following. In Java I can do this to copy one list into another:

    ```java
    public static void copy(List<? extends Number> source, List<? super Number> destiny) {
        for (Number number : source) {
            destiny.add(number);
        }
    }
    ```

    Then I can use it with contravariant types as follows:

    ```java
    List<Object> anything = new ArrayList<Object>();
    List<Integer> myInts = asList(1, 2, 3, 4, 5);
    copy(myInts, anything);
    ```

    My basic problem trying to implement this in C# is that I could not find an interface that is both covariant and contravariant at the same time, as List is in my example above. Maybe it can be done with two different interfaces in C#. Any thoughts on how to implement this?

    Thank you very much to everyone for any answers you can contribute. I am pretty sure I will learn a lot from any example you can provide.
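    A hedged sketch from the editor (Numeric and Sum are illustrative names, not an established API): C#'s variance cannot cover this case directly, because int, double and long are value types, and variance in C# applies only to reference conversions; a List<int> never converts to IEnumerable<object>. There is also no common numeric base class, but all the built-in numerics implement IConvertible, so a generic method gets close to the Java wildcard version:

    ```csharp
    using System;
    using System.Collections.Generic;

    static class Numeric
    {
        // T is inferred per call site, so the same method accepts
        // List<int>, List<double>, List<long>, and so on.
        public static double Sum<T>(IEnumerable<T> numbers) where T : IConvertible
        {
            double total = 0.0;
            foreach (T n in numbers)
                total += n.ToDouble(null);   // IConvertible.ToDouble
            return total;
        }
        // usage: double r = Numeric.Sum(new List<int> { 1, 2, 3 });
    }
    ```

    For the copy example, one conventional C# shape is again a generic method, e.g. `void Copy<TSrc, TDst>(IEnumerable<TSrc> source, ICollection<TDst> destiny) where TSrc : TDst`, which encodes Java's extends/super pair as a constraint between two type parameters.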


  • NHibernate Proxy Creation

    - by Chris Meek
    I have a class structure like the following:

    ```csharp
    class Container {
        public virtual int Id { get; set; }
        public IList<Base> Bases { get; set; }
    }

    class Base {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    class EnemyBase : Base {
        public virtual int EstimatedSize { get; set; }
    }

    class FriendlyBase : Base {
        public virtual int ActualSize { get; set; }
    }
    ```

    Now, when I ask the session for a particular Container, it normally gives me the concrete EnemyBase and FriendlyBase objects in the Bases collection. I can then (if I so choose) cast them to their concrete types and do something specific with them. However, sometimes I get a proxy of the Base class, which is not castable to the concrete types. The same method is used both times; the only difference is that in the case where I get proxies, I have added some related entities to the session (think of the FriendlyBase having a collection of people, or something like that). Is there any way I can prevent it from creating the proxy, and why would it choose to do this only in some scenarios?

    UPDATE: The mappings are generated with the automap feature of Fluent NHibernate, but look something like this when exported:

    ```xml
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true">
      <class xmlns="urn:nhibernate-mapping-2.2" mutable="true" name="Base" table="`Base`">
        <id name="Id" type="System.Int32, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
          <column name="Id" />
          <generator class="MyIdGenerator" />
        </id>
        <property name="Name" type="String">
          <column name="Name" />
        </property>
        <joined-subclass name="EnemyBase">
          <key>
            <column name="Id" />
          </key>
          <property name="EstimatedSize" type="Int">
            <column name="EstimatedSize" />
          </property>
        </joined-subclass>
        <joined-subclass name="FriendlyBase">
          <key>
            <column name="Id" />
          </key>
          <property name="ActualSize" type="Int">
            <column name="ActualSize" />
          </property>
        </joined-subclass>
      </class>
    </hibernate-mapping>

    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" default-access="property" auto-import="true" default-cascade="none" default-lazy="true">
      <class xmlns="urn:nhibernate-mapping-2.2" mutable="true" name="Container" table="`Container`">
        <id name="Id" type="System.Int32, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
          <column name="Id" />
          <generator class="MyIdGenerator" />
        </id>
        <bag cascade="all-delete-orphan" inverse="true" lazy="false" name="Bases" mutable="true">
          <key>
            <column name="ContainerId" />
          </key>
          <one-to-many class="Base" />
        </bag>
      </class>
    </hibernate-mapping>
    ```
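    A hedged side note from the editor (illustrative code, not from the original post): the proxy usually shows up because something else in the session acquired a lazy reference to a Base first, and the session must then keep handing back that same (proxy) instance. The standard workaround is double dispatch rather than casting, because a virtual call on an NHibernate proxy is forwarded to the real, fully typed instance behind it:

    ```csharp
    // Editor's sketch: inside Accept, `this` is the concrete loaded object,
    // even when the caller only held a Base proxy.
    public interface IBaseVisitor
    {
        void Visit(EnemyBase enemy);
        void Visit(FriendlyBase friendly);
    }

    public abstract class Base
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public abstract void Accept(IBaseVisitor visitor);
    }

    public class EnemyBase : Base
    {
        public virtual int EstimatedSize { get; set; }
        public override void Accept(IBaseVisitor visitor) => visitor.Visit(this);
    }

    public class FriendlyBase : Base
    {
        public virtual int ActualSize { get; set; }
        public override void Accept(IBaseVisitor visitor) => visitor.Visit(this);
    }
    ```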


  • Problem in transfering file from server to client using C sockets

    - by coolrockers2007
    I want to ask: why can I not transfer a file from server to client? When I start to send the file from the server, the client-side program has a problem. I have spent some time checking the code, but I still cannot find the problem. Can anyone point it out for me?

    CLIENTFILE.C:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <netinet/in.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <string.h>
    #include <stdarg.h>

    #define PORT 5678
    #define MLEN 1000

    int main(int argc, char *argv [])
    {
        int sockfd;
        int number, message;
        char outbuff[MLEN], inbuff[MLEN];
        //char PWD_buffer[_MAX_PATH];
        struct sockaddr_in servaddr;
        FILE *fp;
        int numbytes;
        char buf[2048];

        if (argc != 2)
            fprintf(stderr, "error");
        if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            fprintf(stderr, "socket error");

        memset(&servaddr, 0, sizeof(servaddr));
        servaddr.sin_family = AF_INET;
        servaddr.sin_port = htons(PORT);

        if (connect(sockfd, (struct sockaddr *) &servaddr, sizeof(servaddr)) < 0)
            fprintf(stderr, "connect error");

        if ((fp = fopen("/home/na/nall9047/write.txt", "w")) == NULL) {
            perror("fopen");
            exit(1);
        }
        printf("Still NO PROBLEM!\n");

        // Receive file from server
        while (1) {
            numbytes = read(sockfd, buf, sizeof(buf));
            printf("read %d bytes, ", numbytes);
            if (numbytes == 0) {
                printf("\n");
                break;
            }
            numbytes = fwrite(buf, sizeof(char), numbytes, fp);
            printf("fwrite %d bytes\n", numbytes);
        }

        fclose(fp);
        close(sockfd);
        return 0;
    }
    ```

    SERVERFILE.C:

    ```c
    #include <stdio.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <time.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stdarg.h>

    #define PORT 5678
    #define MLEN 1000

    int main(int argc, char *argv [])
    {
        int listenfd, connfd;
        int number, message, numbytes;
        int h, i, j, alen;
        int nread;
        struct sockaddr_in servaddr;
        struct sockaddr_in cliaddr;
        FILE *in_file, *out_file, *fp;
        char buf[4096];

        listenfd = socket(AF_INET, SOCK_STREAM, 0);
        if (listenfd < 0)
            fprintf(stderr, "listen error");

        memset(&servaddr, 0, sizeof(servaddr));
        servaddr.sin_family = AF_INET;
        servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
        servaddr.sin_port = htons(PORT);

        if (bind(listenfd, (struct sockaddr *) &servaddr, sizeof(servaddr)) < 0)
            fprintf(stderr, "bind error");

        alen = sizeof(struct sockaddr);
        connfd = accept(listenfd, (struct sockaddr *) &cliaddr, &alen);
        if (connfd < 0)
            fprintf(stderr, "error connecting");
        printf("accept one client from %s!\n", inet_ntoa(cliaddr.sin_addr));

        fp = fopen("/home/na/nall9047/read.txt", "r"); // open file stored in server
        if (fp == NULL) {
            printf("\nfile NOT exist");
        }

        // Sending file
        while (!feof(fp)) {
            numbytes = fread(buf, sizeof(char), sizeof(buf), fp);
            printf("fread %d bytes, ", numbytes);
            numbytes = write(connfd, buf, numbytes);
            printf("Sending %d bytes\n", numbytes);
        }

        fclose(fp);
        close(listenfd);
        close(connfd);
        return 0;
    }
    ```
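    A hedged observation from the editor, based only on the code as posted (not a confirmed diagnosis): SERVERFILE.C never calls listen() between bind() and accept(), and accept() fails on a socket that was never marked passive. Something like this belongs right after the bind():

    ```c
    /* Mark the socket as passive before accepting connections. */
    if (listen(listenfd, 5) < 0) {
        perror("listen");
        exit(1);
    }
    ```

    It would also help to exit(1) after each fprintf error report; as written, both programs keep running with invalid descriptors, and the client never assigns servaddr.sin_addr, so the server's address from argv is unused.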


  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid; or, instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective: I am asking for specific examples of languages or features, ideas for optimizing these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong.

    Top of my list of hard-to-optimize features, and my theories (some of my theories are untested and based on thought experiments):

    1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition, or is it just hard anyway? Call-site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods.

    2) Type morphing / variants (aka value-based typing, as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know whether the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully, if at all, and probably only below a given threshold of callee size: i.e. it is easy to consider inlining simple property fetches (getters/setters), but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry the type info around, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved by x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective. (See the sketch at the end of this post.)

    3) First-class continuations. There are multiple ways to implement them, and I have done so using both of the most common approaches: one is stack copying, the other is implementing the runtime to use continuation-passing style, with cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource-management issues: we must save everything in case the continuation is resumed, and I'm not aware of any language that supports leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and may also affect cache efficiency (the locality of the stack changes more with the use of continuations and coroutines). The other issue is that they just don't map to hardware. Optimizing continuations is optimizing for the less common case, and as we know, the common case should be fast and the less common cases should be correct.

    4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without it quite easily.

    My feeling is that many of the high-level features, particularly in dynamic languages, just don't map to hardware. Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of them (features like caching, aliasing the top of stack to a register, instruction parallelism, return-address buffers, loop buffers, and branch prediction). Macro-applications of micro-features don't necessarily pan out the way some developers like to think, and implementing many languages in a VM ends up mapping native ops onto function calls (i.e. the more dynamic a language is, the more we must look up/cache at runtime; nothing can be assumed, so our instruction mix contains a higher percentage of non-local branching than traditional, statically compiled code), and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.
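    To make point 2 concrete, here is a minimal editor's sketch (illustrative, not the poster's code) of the tagged-value representation that forces the "2 registers instead of 1" cost:

    ```c
    #include <stdio.h>

    /* A dynamically typed value carries its tag everywhere, so one logical
     * variable is a (tag, payload) pair rather than a single register. */
    typedef enum { TAG_INT, TAG_DOUBLE } Tag;

    typedef struct {
        Tag tag;            /* runtime type information */
        union {
            long   i;
            double d;
        } as;               /* payload */
    } Value;

    /* Every primitive operation becomes a dispatch on the tags. */
    static Value value_add(Value a, Value b)
    {
        Value r;
        if (a.tag == TAG_INT && b.tag == TAG_INT) {
            r.tag = TAG_INT;
            r.as.i = a.as.i + b.as.i;
        } else {
            double x = (a.tag == TAG_INT) ? (double)a.as.i : a.as.d;
            double y = (b.tag == TAG_INT) ? (double)b.as.i : b.as.d;
            r.tag = TAG_DOUBLE;
            r.as.d = x + y;
        }
        return r;
    }

    int main(void)
    {
        Value a = { TAG_INT,    { .i = 2 } };
        Value b = { TAG_DOUBLE, { .d = 0.5 } };
        Value c = value_add(a, b);   /* result tag decided at runtime */
        printf("%f\n", c.as.d);
        return 0;
    }
    ```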


  • What does the `forall` keyword in Haskell/GHC do?

    - by JUST MY correct OPINION
    I've been banging my head on this one for (quite literally) years now. I'm beginning to kinda/sorta understand how the forall keyword is used in so-called "existential types" like this:

    ```haskell
    data ShowBox = forall s. Show s => SB s
    ```

    (This despite the confusingly-worded explanations of it in the fragments found all around the web.) This is only a subset, however, of how forall is used, and I simply cannot wrap my mind around its use in things like this:

    ```haskell
    runST :: forall a. (forall s. ST s a) -> a
    ```

    Or explaining why these are different:

    ```haskell
    foo :: (forall a. a -> a) -> (Char, Bool)
    bar :: forall a. ((a -> a) -> (Char, Bool))
    ```

    Or the whole RankNTypes stuff that breaks my brain when "explained" in a way that makes me want to do that Samuel L. Jackson thing from Pulp Fiction. (Don't follow that link if you're easily offended by strong language.)

    The problem, really, is that I'm a dullard. I can't fathom the chicken scratchings (some call them "formulae") of the elite mathematicians that created this language, seeing as my university years are over two decades behind me and I never actually had to put what I learnt into practice. I also tend to prefer clear, jargon-free English over the kind of language that is normal in academic environments. Most of the explanations I attempt to read on this (the ones I can find through search engines) have these problems:

    - They're incomplete. They explain one part of the use of this keyword (like "existential types"), which makes me feel happy until I read code that uses it in a completely different way (like runST, foo and bar above).
    - They're densely packed with the assumption that I've read the latest in whatever branch of discrete math, category theory or abstract algebra is popular this week. (If I never read the words "consult the paper whatever for details of implementation" again, it will be too soon.)
    - They're written in ways that frequently turn even simple concepts into tortuously twisted and fractured grammar and semantics.

    (I suspect that the last two items are the biggest problem. I wouldn't know, though, since I'm too much of a dullard to comprehend them.)

    It's been asked why Haskell never really caught on in industry. I suspect, in my own humble, unintelligent way, that my experience figuring out one stupid little keyword (a keyword that is increasingly ubiquitous in the libraries being written these days) is also part of the answer to that question. It's hard for a language to catch on when even its individual keywords cause years-long quests to comprehend. Years-long quests which end in failure.

    So... on to the actual question. Can anybody completely explain the forall keyword in clear, plain English (or, if it exists somewhere, point to such a clear explanation which I've missed) that doesn't assume I'm a mathematician steeped in the jargon?
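    As a concrete companion to the foo/bar pair above, here is a small editor's sketch (illustrative, not from the original post) of why the placement of forall matters: foo demands an argument that is itself polymorphic, while bar fixes one a per call:

    ```haskell
    {-# LANGUAGE RankNTypes #-}

    foo :: (forall a. a -> a) -> (Char, Bool)
    foo f = (f 'x', f True)   -- legal: f can be used at Char AND at Bool

    bar :: forall a. ((a -> a) -> (Char, Bool))
    bar g = ('x', True)       -- g is stuck at a single caller-chosen a,
                              -- so it could never be applied at both types

    main :: IO ()
    main = print (foo id)     -- only 'id' (or equivalent) fits foo's argument
    ```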


  • Problem with parsing strings

    - by Peter Small
    I am trying to put a line of dialog on each of a series of images. To match each dialog line with the correct image, I end the line with a forward slash (/) followed by a number identifying the matching image. I then parse each line to get the dialog and then the reference number for the image. It all works fine, except that when I put the dialog line into a textView, I get the whole line in the textView instead of just the dialog part. What is confusing is that the console seems to indicate that the parsing of the dialog line has been carried out correctly. Here are the details of my code:

    ```objc
    @interface DialogSequence_1ViewController : UIViewController {
        IBOutlet UIImageView *theImage;
        IBOutlet UITextView *fullDialog;
        IBOutlet UITextView *selectedDialog;
        IBOutlet UIButton *test_1;
        IBOutlet UIButton *test_2;
        IBOutlet UIButton *test_3;
        NSArray *arrayLines;
        IBOutlet UISlider *readingSpeed;
        NSArray *cartoonViews;
        NSMutableString *dialog;
        NSMutableArray *dialogLineSections;
        int lNum;
    }

    @property (retain,nonatomic) UITextView *fullDialog;
    @property (retain,nonatomic) UITextView *selectedDialog;
    @property (retain,nonatomic) UIButton *test_1;
    @property (retain,nonatomic) UIButton *test_2;
    @property (retain,nonatomic) UIButton *test_3;
    @property (retain,nonatomic) NSArray *arrayLines;
    @property (retain,nonatomic) NSMutableString *dialog;
    @property (retain,nonatomic) NSMutableArray *dialogLineSections;
    @property (retain,nonatomic) UIImageView *theImage;
    @property (retain,nonatomic) UISlider *readingSpeed;

    -(IBAction)start:(id)sender;
    -(IBAction)counter:(id)sender;
    -(IBAction)runNextLine:(id)sender;
    @end

    @implementation DialogSequence_1ViewController
    @synthesize fullDialog;
    @synthesize selectedDialog;
    @synthesize test_1;
    @synthesize test_2;
    @synthesize test_3;
    @synthesize arrayLines;
    @synthesize dialog;
    @synthesize theImage;
    @synthesize readingSpeed;
    @synthesize dialogLineSections;

    -(IBAction)runNextLine:(id)sender {
        // Get dialog line to display from the arrayLines array
        NSMutableString *dialogLineDetails;
        dialogLineDetails = [arrayLines objectAtIndex:lNum];
        NSLog(@"dialogLineDetails = %@", dialogLineDetails);

        // Parse the dialog line
        dialogLineSections = [dialogLineDetails componentsSeparatedByString:@"/"];
        selectedDialog.text = [dialogLineSections objectAtIndex:0];
        NSLog(@"Dialog part of line = %@", [dialogLineSections objectAtIndex:0]);
        NSMutableString *imageBit;
        imageBit = [dialogLineSections objectAtIndex:1];
        NSLog(@"Image code = %@", imageBit);

        // Select right image
        int im = [imageBit intValue];
        NSLog(@"imageChoiceInteger = %i", im);
        //------more code
    }
    ```

    I get a warning on this line:

    ```objc
    dialogLineSections = [dialogLineDetails componentsSeparatedByString:@"/"];
    ```

    warning: incompatible Objective-C types assigning 'struct NSArray *', expected 'struct NSMutableArray *'

    I don't quite understand this and have tried to change the types, but to no avail. I would be grateful for some advice here.
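    A hedged note from the editor on the warning itself: componentsSeparatedByString: returns an immutable NSArray, so the variable receiving it should be declared NSArray rather than NSMutableArray; that silences the warning without a cast:

    ```objc
    // Declaring the receiver with the type the API actually returns;
    // -componentsSeparatedByString: yields an NSArray.
    NSArray *sections = [dialogLineDetails componentsSeparatedByString:@"/"];
    selectedDialog.text = [sections objectAtIndex:0];
    ```

    If the textView still shows the whole line after that, it is worth confirming at runtime that selectedDialog is actually connected in Interface Builder; setting .text on a nil outlet is a silent no-op.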


  • Break in Class Module vs. Break on Unhandled Errors (VB6 Error Trapping, Options Setting in IDE)

    - by Erx_VB.NExT.Coder
    Basically, I'm trying to understand the difference between "Break in Class Module" and "Break on Unhandled Errors", which appear in the Visual Basic 6.0 IDE under the following path: Tools --> Options --> General --> Error Trapping. The three options are:

    - Break on All Errors
    - Break in Class Module
    - Break on Unhandled Errors

    Now, apparently, according to MSDN, the second option (Break in Class Module) really just means "Break on Unhandled Errors in Class Modules". Also, this option appears to be set by default (i.e. I think it's set to this out of the box).

    What I am trying to figure out is: if I have the second option selected, do I get the third option (Break on Unhandled Errors) for free? That is, does it come included by default for all scenarios outside of the class-module spectrum? To advise: I don't have any class modules in my currently active project, though I do have .bas modules. Also, is it possible that by "Class Modules" they may be referring to normal .bas modules as well? (This is my second sub-question.)

    Basically, I just want the setting that ensures there won't be any surprises once the exe is released. I want as many errors as possible to be displayed while I am developing, and none to be displayed in release mode. Normally, I have two types of On Error Resume Next on my forms where there isn't explicit error handling, as follows:

    ```vb
    On Error Resume Next ' REQUIRED
    On Error Resume Next ' NOT REQUIRED
    ```

    The required ones are things like checking whether an array has any length: if a call to its UBound errors out, it has no length; if it returns a value of 0 or more, it does have length (and therefore exists). These error statements need to remain active even while I am developing. However, the NOT REQUIRED ones shouldn't remain active while I am developing, so I keep them all commented out to ensure that I catch all the errors that exist. Once I am ready to release the exe, I do a CTRL+H to find all occurrences of:

    ```vb
    'On Error Resume Next ' NOT REQUIRED
    ```

    (you may have noticed they are commented out) and replace them with:

    ```vb
    On Error Resume Next ' NOT REQUIRED
    ```

    ... the uncommented version, so that in release mode, any leftover errors are not shown to users.

    For more on the description by MSDN of the three options (which I've read twice and still don't find adequate), you can visit the following link:

    http://webcache.googleusercontent.com/search?q=cache:yUQZZK2n2IYJ:support.microsoft.com/kb/129876&hl=en&lr=lang_en%7Clang_tr&gl=au&tbs=lr:lang_1en%7Clang_1tr&prmd=imvns&strip=1

    I'm also interested in hearing your thoughts, if you feel like volunteering them (this would be my tentative, totally optional third sub-question: your thoughts on fall-back error-handling techniques). Just to summarize, the first two questions were: do we get option 3 included in all non-class scenarios if we choose option 2? And is it possible that when they use the term "Class Module" they are referring to .bas modules as well? (Since a .bas module is really just a class module that is pre-instantiated in the background during start-up.) Thank you.
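    As context for the "REQUIRED" pattern, here is a minimal editor's sketch of the UBound probe described above (the helper name is illustrative); isolating it in one function keeps the On Error Resume Next scope as small as possible:

    ```vb
    ' Returns True only if arr is a dimensioned array with at least one
    ' element; UBound raises an error on an unallocated dynamic array,
    ' which Resume Next swallows, leaving the default False.
    Public Function ArrayHasItems(arr As Variant) As Boolean
        On Error Resume Next ' REQUIRED
        ArrayHasItems = (UBound(arr) >= LBound(arr))
        On Error GoTo 0
    End Function
    ```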


  • Is it safe to assert a functions return type?

    - by wb
    This question is related to this question. I have several models stored in a collection. I loop through the collection and validate each field. Based on the input, a field can be successful, have an error, or have a warning. Is it safe to unit test each decorator and assert that the type of object returned is what you would expect based on the given input? I could perhaps see this being an issue for a language with function return types, since my validation function can return one of 3 types. This is the code I'm fiddling with:

    ```asp
    <!-- #include file = "../lib/Collection.asp" -->
    <style type="text/css">
        td { padding: 4px; }
        td.error { background: #F00F00; }
        td.warning { background: #FC0; }
    </style>
    <%
    Class UserModel
        Private m_Name
        Private m_Age
        Private m_Height

        Public Property Let Name(value)
            m_Name = value
        End Property
        Public Property Get Name()
            Name = m_Name
        End Property

        Public Property Let Age(value)
            m_Age = value
        End Property
        Public Property Get Age()
            Age = m_Age
        End Property

        Public Property Let Height(value)
            m_Height = value
        End Property
        Public Property Get Height()
            Height = m_Height
        End Property
    End Class

    Class NameValidation
        Private m_Name

        Public Function Init(name)
            m_Name = name
        End Function

        Public Function Validate()
            Dim validationObject
            If Len(m_Name) < 5 Then
                Set validationObject = New ValidationError
            Else
                Set validationObject = New ValidationSuccess
            End If
            validationObject.CellValue = m_Name
            Set Validate = validationObject
        End Function
    End Class

    Class AgeValidation
        Private m_Age

        Public Function Init(age)
            m_Age = age
        End Function

        Public Function Validate()
            Dim validationObject
            If m_Age < 18 Then
                Set validationObject = New ValidationError
            ElseIf m_Age = 18 Then
                Set validationObject = New ValidationWarning
            Else
                Set validationObject = New ValidationSuccess
            End If
            validationObject.CellValue = m_Age
            Set Validate = validationObject
        End Function
    End Class

    Class HeightValidation
        Private m_Height

        Public Function Init(height)
            m_Height = height
        End Function

        Public Function Validate()
            Dim validationObject
            If m_Height > 400 Then
                Set validationObject = New ValidationError
            ElseIf m_Height = 324 Then
                Set validationObject = New ValidationWarning
            Else
                Set validationObject = New ValidationSuccess
            End If
            validationObject.CellValue = m_Height
            Set Validate = validationObject
        End Function
    End Class

    Class ValidationError
        Private m_CSSClass
        Private m_CellValue

        Public Property Get CSSClass()
            CSSClass = "error"
        End Property
        Public Property Let CellValue(value)
            m_CellValue = value
        End Property
        Public Property Get CellValue()
            CellValue = m_CellValue
        End Property
    End Class

    Class ValidationWarning
        Private m_CSSClass
        Private m_CellValue

        Public Property Get CSSClass()
            CSSClass = "warning"
        End Property
        Public Property Let CellValue(value)
            m_CellValue = value
        End Property
        Public Property Get CellValue()
            CellValue = m_CellValue
        End Property
    End Class

    Class ValidationSuccess
        Private m_CSSClass
        Private m_CellValue

        Public Property Get CSSClass()
            CSSClass = ""
        End Property
        Public Property Let CellValue(value)
            m_CellValue = value
        End Property
        Public Property Get CellValue()
            CellValue = m_CellValue
        End Property
    End Class

    Class ModelValidator
        Public Function ValidateModel(model)
            Dim modelValidation : Set modelValidation = New CollectionClass

            ' Validate name
            Dim name : Set name = New NameValidation
            name.Init model.Name
            modelValidation.Add name

            ' Validate age
            Dim age : Set age = New AgeValidation
            age.Init model.Age
            modelValidation.Add age

            ' Validate height
            Dim height : Set height = New HeightValidation
            height.Init model.Height
            modelValidation.Add height

            Dim validatedProperties : Set validatedProperties = New CollectionClass
            Dim modelVal
            For Each modelVal In modelValidation.Items()
                validatedProperties.Add modelVal.Validate()
            Next

            Set ValidateModel = validatedProperties
        End Function
    End Class

    Dim modelCollection : Set modelCollection = New CollectionClass

    Dim user1 : Set user1 = New UserModel
    user1.Name = "Mike"
    user1.Age = 12
    user1.Height = 32
    modelCollection.Add user1

    Dim user2 : Set user2 = New UserModel
    user2.Name = "Phil"
    user2.Age = 18
    user2.Height = 432
    modelCollection.Add user2

    Dim user3 : Set user3 = New UserModel
    user3.Name = "Michele"
    user3.Age = 32
    user3.Height = 324
    modelCollection.Add user3

    ' Validate all models in the collection
    Dim modelValue
    Dim validatedModels : Set validatedModels = New CollectionClass
    For Each modelValue In modelCollection.Items()
        Dim objModelValidator : Set objModelValidator = New ModelValidator
        validatedModels.Add objModelValidator.ValidateModel(modelValue)
    Next
    %>
    <table>
        <tr>
            <td>Name</td>
            <td>Age</td>
            <td>Height</td>
        </tr>
        <%
        Dim r, c
        For Each r In validatedModels.Items()
            %><tr><%
            For Each c In r.Items()
                %><td class="<%= c.CSSClass %>"><%= c.CellValue %></td><%
            Next
            %></tr><%
        Next
        %>
    </table>
    ```

    Thank you.
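    On the actual question of asserting the returned type: VBScript can report an instance's class name via TypeName, so a test can pin down which of the three validation classes came back. A minimal sketch from the editor (assuming the classes above are in scope; the assertion style is illustrative):

    ```asp
    <%
    ' Hypothetical spot-check: age 18 should come back as a warning.
    Dim validator, result
    Set validator = New AgeValidation
    validator.Init 18
    Set result = validator.Validate()

    If TypeName(result) = "ValidationWarning" Then
        Response.Write "PASS"
    Else
        Response.Write "FAIL: expected ValidationWarning, got " & TypeName(result)
    End If
    %>
    ```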


  • Step by Step / Deep explain: The Power of (Co)Yoneda (preferably in scala) through Coroutines

    - by Mzk
    Some background code:

    ```scala
    /** FunctorStr: ∀ F[-]. (∀ A B. (A -> B) -> F[A] -> F[B]) */
    trait FunctorStr[F[_]] { self =>
      def map[A, B](f: A => B): F[A] => F[B]
    }

    trait Yoneda[F[_], A] { yo =>
      def apply[B](f: A => B): F[B]
      def run: F[A] = yo(x => x)
      def map[B](f: A => B): Yoneda[F, B] = new Yoneda[F, B] {
        def apply[X](g: B => X) = yo(f andThen g)
      }
    }

    object Yoneda {
      implicit def yonedafunctor[F[_]]: FunctorStr[({ type l[x] = Yoneda[F, x] })#l] =
        new FunctorStr[({ type l[x] = Yoneda[F, x] })#l] {
          def map[A, B](f: A => B): Yoneda[F, A] => Yoneda[F, B] = _ map f
        }

      def apply[F[_]: FunctorStr, X](x: F[X]): Yoneda[F, X] = new Yoneda[F, X] {
        def apply[Y](f: X => Y) = Functor[F].map(f) apply x
      }
    }

    trait Coyoneda[F[_], A] { co =>
      type I
      def fi: F[I]
      def k: I => A
      final def map[B](f: A => B): Coyoneda.Aux[F, B, I] = Coyoneda(fi)(f compose k)
    }

    object Coyoneda {
      type Aux[F[_], A, B] = Coyoneda[F, A] { type I = B }

      def apply[F[_], B, A](x: F[B])(f: B => A): Aux[F, A, B] = new Coyoneda[F, A] {
        type I = B
        val fi = x
        val k = f
      }

      implicit def coyonedaFunctor[F[_]]: FunctorStr[({ type l[x] = Coyoneda[F, x] })#l] =
        new CoyonedaFunctor[F] {}

      trait CoyonedaFunctor[F[_]] extends FunctorStr[({ type l[x] = Coyoneda[F, x] })#l] {
        override def map[A, B](f: A => B): Coyoneda[F, A] => Coyoneda[F, B] =
          x => apply(x.fi)(f compose x.k)
      }

      def liftCoyoneda[T[_], A](x: T[A]): Coyoneda[T, A] = apply(x)(a => a)
    }
    ```

    Now I thought I understood Yoneda and Coyoneda a bit just from the types: they quantify / abstract over map, fixed in some type constructor F and some type A, to any type B, returning F[B] or (Co)Yoneda[F, B], thus providing map fusion for free (is this kind of like a cut rule for map?). But I see that Coyoneda is a functor for any type constructor F, regardless of F being a Functor, and that I don't fully grasp.

    Now I'm in a situation where I'm trying to define a Coroutine type (I'm looking at https://www.fpcomplete.com/school/to-infinity-and-beyond/pick-of-the-week/coroutines-for-streaming/part-2-coroutines for the types to get started with):

    ```scala
    case class Coroutine[S[_], M[_], R](resume: M[CoroutineState[S, M, R]])

    sealed trait CoroutineState[S[_], M[_], R]

    object CoroutineState {
      case class Run[S[_], M[_], R](x: S[Coroutine[S, M, R]]) extends CoroutineState[S, M, R]

      case class Done[R](x: R) extends CoroutineState[Nothing, Nothing, R]

      class CoroutineStateFunctor[S[_], M[_]](F: FunctorStr[S])
          extends FunctorStr[({ type l[x] = CoroutineState[S, M, x] })#l] {
        override def map[A, B](f: A => B): CoroutineState[S, M, A] => CoroutineState[S, M, B] = {
          ???
        }
      }
    }
    ```

    I think that if I understood Coyoneda better, I could leverage it to make the S and M type constructors functors really easily. Plus, I see Coyoneda potentially playing a role in defining recursion schemes, as the functor requirement is pervasive. So how could I use Coyoneda to make type constructors functors, like for example the coroutine state? Or something like a Pause functor?
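    A hedged sketch from the editor, using only the definitions above (helper name illustrative): the reason Coyoneda[F, ?] is a functor for any F is that map never touches the suspended F[I]; it only composes onto the stored continuation k. So the S inside Run does not need its own FunctorStr instance while a coroutine is being transformed; lift the suspension once and let the maps accumulate:

    ```scala
    // Illustrative helper, not from the original post: wrap the suspended
    // step so that mapping over it is pure function composition on k,
    // regardless of whether S has a FunctorStr instance.
    def liftRun[S[_], M[_], R](
        s: S[Coroutine[S, M, R]]
    ): Coyoneda[S, Coroutine[S, M, R]] =
      Coyoneda.liftCoyoneda(s)

    // Successive maps fuse: .map(f).map(g) stores (g compose f) in k and
    // touches the underlying S value zero times until it is interpreted.
    ```

    Interpreting the accumulated Coyoneda back into a plain S[B] at the end is the one place a real FunctorStr[S] is required (run the stored k through it once).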


  • Having trouble storing a CRTP based class in a vector

    - by user366834
    Hi, I'm not sure if this can be done; I'm just delving into templates, so perhaps my understanding is a bit wrong. I have a platoon of soldiers. The platoon inherits from a formation to pick up the formation's properties, but because I could have as many formations as I can think of, I chose to use the CRTP to create the formations, hoping that I could make a vector or array of Platoon to store the platoons in. But, of course, when I make a Platoon, it won't store in the vector: "types are unrelated". Is there any way around this? I read about "veneers", which are similar, and that they work with arrays, but I can't get it to work; perhaps I'm missing something. Here's some code (sorry about the formatting; the code is here in my post but it's not showing up for some reason):

    ```cpp
    template < class TBase >
    class IFormation {
    public:
        ~IFormation() {}

        bool IsFull() { return m_uiMaxMembers == m_uiCurrentMemCount; }

    protected:
        unsigned int m_uiCurrentMemCount;
        unsigned int m_uiMaxMembers;

        IFormation( unsigned int _uiMaxMembers ):
            m_uiMaxMembers( _uiMaxMembers ),
            m_uiCurrentMemCount( 0 )
        {} // only allow use as a base class.

        void SetupFormation( std::vector<MySoldier*>& _soldierList ){} // must be implemented in derived class
    };

    /////////////////////////////////////////////////////////////////////////////////
    // PHALANX FORMATION
    class Phalanx : public IFormation<Phalanx> {
    public:
        Phalanx( ): IFormation( 12 ), m_fDistance( 4.0f ) {}
        ~Phalanx() {}

    protected:
        float m_fDistance; // the distance between soldiers
        void SetupFormation( std::vector<MySoldier*>& _soldierList );
    };

    ///////////////////////////////////////////////////////////////////////////////////
    // COLUMN FORMATION
    class Column : public IFormation< Column > {
    public:
        Column( int _numOfMembers ): IFormation( _numOfMembers ) {}
        ~Column();

    protected:
        void SetupFormation( std::vector<MySoldier*>& _soldierList );
    };
    ```

    I then use these formations in the Platoon class to derive, so that Platoon gets the relevant SetupFormation() function:

    ```cpp
    template < class Formation >
    class Platoon : public Formation {
    public:
        // **** platoon code here
    };
    ```

    Everything works great and as expected up until this point. Now, as my general can have multiple platoons, I need to store the platoons:

    ```cpp
    typedef Platoon< IFormation<> > TPlatoon; // FAIL
    typedef std::vector<TPlatoon*> TPlatoons;

    TPlatoons m_pPlatoons;
    m_pPlatoons.push_back( new Platoon<Phalanx> ); // FAIL, types unrelated.
    ```

    The first typedef fails because I need to specify a template parameter, yet specifying one will only allow me to store platoons created with the same template parameter. So I then created FormationBase:

    ```cpp
    class FormationBase {
    public:
        virtual bool IsFull() = 0;
        virtual void SetupFormation( std::vector<MySoldier*>& _soldierList ) = 0;
    };
    ```

    ... and made IFormation publicly inherit from it, and then changed the typedef to:

    ```cpp
    typedef Platoon< IFormation< FormationBase > > TPlatoon;
    ```

    ... but still no love. Now, in my searches I have not found info that says this is possible - or not possible.
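    A hedged sketch from the editor of the usual resolution (condensed, illustrative member names): two CRTP instantiations such as IFormation<Phalanx> and IFormation<Column> are unrelated types, so heterogeneous storage has to go through a single non-template root with virtual functions, and the container is typed on that root rather than on any one Platoon<...>:

    ```cpp
    #include <memory>
    #include <vector>

    // Non-template root so unrelated CRTP instantiations share one type.
    class FormationBase {
    public:
        virtual ~FormationBase() = default;
        virtual bool IsFull() const = 0;
    };

    template <class TBase>
    class IFormation : public FormationBase {
    public:
        bool IsFull() const override { return current_ == max_; }
    protected:
        explicit IFormation(unsigned max) : max_(max) {}
        unsigned current_ = 0, max_;
    };

    class Phalanx : public IFormation<Phalanx> {
    public:
        Phalanx() : IFormation(12) {}
    };

    template <class Formation>
    class Platoon : public Formation {};

    int main() {
        // The vector is typed on the common base, not on any Platoon<...>:
        std::vector<std::unique_ptr<FormationBase>> platoons;
        platoons.push_back(std::make_unique<Platoon<Phalanx>>());
        return platoons.front()->IsFull();
    }
    ```

    The trade-off is that everything reachable through the vector is the root interface; formation-specific members stay behind virtual calls (or a narrowing visitor).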


  • C++ templated factory constructor/de-serialization

    - by KRao
    Hi, I was looking at the Boost serialization library, and the intrusive way to provide support for serialization is to define a member function with this signature (simplifying):

    ```cpp
    class ToBeSerialized {
    public:
        // Define this to support serialization
        // Notice: not a virtual function!
        template<class Archive>
        void serialize(Archive & ar) { ..... }
    };
    ```

    Moreover, one way to support serialization of derived classes through base pointers is to use a macro of the form:

    ```cpp
    // No mention of the base class(es) from which Derived_class inherits
    BOOST_CLASS_EXPORT_GUID(Derived_class, "derived_class")
    ```

    where Derived_class is some class inheriting from a base class, say Base_class. Thanks to this macro, it is possible to serialize classes of type Derived_class through pointers to Base_class correctly.

    The question is: in C++ I am used to writing abstract factories implemented through a map from std::string to (pointers to) functions which return objects of the desired type (and everything is fine thanks to covariant return types). However, I fail to see how I could use the above non-virtual serialize template member function to properly de-serialize (i.e. construct) an object without knowing its type (but assuming that the type information has been stored by the serializer, say in a string). What I would like to do (keeping the same nomenclature as above) is something like the following:

    ```cpp
    XmlArchive xmlArchive; // A type of archive
    xmlArchive.open("C:/ser.txt"); // Contains type information for the serialized class
    Base_class* basePtr = Factory<Base_class>::create("derived_class", xmlArchive);
    ```

    with the function on the right-hand side creating an object on the heap of type Derived_class (via the default constructor; this is the part I know how to solve) and calling the serialize function of xmlArchive (here I am stuck!), i.e. doing something like:

    ```cpp
    Base_class* Factory<Base_class>::create("derived_class", xmlArchive)
    {
        Base_class* basePtr = new Base_class; // OK, doable: the usual map from string to pointer to function
        static_cast<Derived_class*>( basePtr )->serialize( xmlArchive ); // De-serialization, how?????
        return basePtr;
    }
    ```

    I am sure this can be done (Boost serialize does it, but its code is impenetrable! :P), but I fail to figure out how. The key problem is that the serialize function is a template function, so I cannot have a pointer to a generic templated function. As the point of writing the templated serialize function is to make the code generic (i.e. not having to re-write the serialize function for different archivers), it does not make sense to have to register all the derived classes for all possible archive types, like:

    ```cpp
    MY_CLASS_REGISTER(Derived_class, XmlArchive);
    MY_CLASS_REGISTER(Derived_class, TxtArchive);
    ...
    ```

    In fact, in my code I rely on overloading to get the correct behaviour:

    ```cpp
    void serialize( XmlArchive& archive, Derived_class& derived );
    void serialize( TxtArchive& archive, Derived_class& derived );
    ...
    ```

    The key point to keep in mind is that the archive type is always known, i.e. I am never using runtime polymorphism for the archive class... (again, I am using overloading on the archive type). Any suggestion to help me out? Thank you very much in advance! Cheers
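    An editor's sketch of one common shape for this (names hypothetical, not Boost's actual mechanism): the templated serialize() cannot sit behind a plain function pointer, but a lambda instantiated at registration time, where both the Derived type and the Archive type are still statically known, can be type-erased into a std::function:

    ```cpp
    #include <functional>
    #include <map>
    #include <memory>
    #include <string>

    template <class Base, class Archive>
    struct Factory {
        using Creator = std::function<std::unique_ptr<Base>(Archive&)>;

        static std::map<std::string, Creator>& registry() {
            static std::map<std::string, Creator> r;
            return r;
        }

        template <class Derived>
        static void reg(const std::string& name) {
            registry()[name] = [](Archive& ar) {
                auto obj = std::make_unique<Derived>();  // default-construct
                obj->serialize(ar);   // template instantiated right here
                return std::unique_ptr<Base>(std::move(obj));
            };
        }

        static std::unique_ptr<Base> create(const std::string& name, Archive& ar) {
            return registry().at(name)(ar);
        }
    };

    // Usage sketch:
    //   Factory<Base_class, XmlArchive>::reg<Derived_class>("derived_class");
    //   auto p = Factory<Base_class, XmlArchive>::create("derived_class", xmlArchive);
    ```

    Registration is still per archive type here, but it is a single template instantiation rather than hand-written code, and a small variadic helper could register one class against every archive in a type list.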


  • Want to Receive dynamic length data from a message queue in IPC?

    - by user1089679
    Here I have to send and receive dynamic-length data using a SysV message queue. In the structure field I have a dynamically allocated char *, because the size may vary. So how can I receive this type of message at the receiver side? Please let me know how I can send dynamic-length data with a message queue. I am having a problem with this; I have posted my code below.

    send.c:

    ```c
    /* filename : send.c
     * To compile : gcc send.c -o send */
    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct my_msgbuf {
        long mtype;
        char *mtext;
    };

    int main(void)
    {
        struct my_msgbuf buf;
        int msqid;
        key_t key;
        static int count = 0;
        char temp[5];
        int run = 1;

        if ((key = ftok("send.c", 'B')) == -1) {
            perror("ftok");
            exit(1);
        }
        printf("send.c Key is = %d\n", key);

        if ((msqid = msgget(key, 0644 | IPC_CREAT)) == -1) {
            perror("msgget");
            exit(1);
        }

        printf("Enter lines of text, ^D to quit:\n");
        buf.mtype = 1; /* we don't really care in this case */

        int ret = -1;
        while (run) {
            count++;
            buf.mtext = malloc(50);
            strcpy(buf.mtext, "Hi hello test message here");
            snprintf(temp, sizeof (temp), "%d", count);
            strcat(buf.mtext, temp);

            int len = strlen(buf.mtext);
            /* ditch newline at end, if it exists */
            if (buf.mtext[len-1] == '\n')
                buf.mtext[len-1] = '\0';

            if (msgsnd(msqid, &buf, len+1, IPC_NOWAIT) == -1) /* +1 for '\0' */
                perror("msgsnd");

            if (count == 100)
                run = 0;
            usleep(1000000);
        }

        if (msgctl(msqid, IPC_RMID, NULL) == -1) {
            perror("msgctl");
            exit(1);
        }
        return 0;
    }
    ```

    receive.c:

    ```c
    /* filename : receive.c
     * To compile : gcc receive.c -o receive */
    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct my_msgbuf {
        long mtype;
        char *mtext;
    };

    int main(void)
    {
        struct my_msgbuf buf;
        int msqid;
        key_t key;

        if ((key = ftok("send.c", 'B')) == -1) { /* same key as send.c */
            perror("ftok");
            exit(1);
        }

        if ((msqid = msgget(key, 0644)) == -1) { /* connect to the queue */
            perror("msgget");
            exit(1);
        }

        printf("test: ready to receive messages, captain.\n");

        for (;;) { /* receive never quits! */
            buf.mtext = malloc(50);
            if (msgrcv(msqid, &buf, 50, 0, 0) == -1) {
                perror("msgrcv");
                exit(1);
            }
            printf("test: \"%s\"\n", buf.mtext);
        }
        return 0;
    }
    ```
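    A hedged diagnosis from the editor, since it follows directly from how msgsnd() works: the call copies the bytes of the structure starting after mtype, so with a char *mtext member only the pointer value (plus adjacent garbage) crosses the queue, and the receiver then reads through its own unrelated malloc'd pointer. The payload has to live inline in the struct; "dynamic" length becomes choosing how many bytes to pass as msgsz:

    ```c
    /* Inline payload: msgsnd/msgrcv copy these bytes, not a pointer. */
    struct my_msgbuf {
        long mtype;
        char mtext[512];            /* upper bound on message length */
    };

    /* sender: transmit only the bytes actually used */
    /* msgsnd(msqid, &buf, strlen(buf.mtext) + 1, IPC_NOWAIT); */

    /* receiver: allow up to the maximum */
    /* msgrcv(msqid, &buf, sizeof(buf.mtext), 0, 0); */
    ```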


  • about getadrrinfo() C++?

    - by Isavel
    I'm reading this book called Beej's Guide to Network Programming, and there's a part in the book that provides sample code illustrating the use of getaddrinfo(). The book states that the code below "will print the IP addresses for whatever host you specify on the command line". Now I'm curious and want to try it out and run the code, but I guess the code was developed in a UNIX environment, and I'm using Visual Studio 2012 on Windows 7, so most of the headers were not supported. I did a bit of research and found out that I need to include winsock.h and ws2_32.lib on Windows for it to work. Fortunately everything compiled with no errors, but when I ran it under the debugger with 'www.google.com' as the command argument, I was disappointed that it did not print any IP address; the output I got on the console was "getaddrinfo: E". What does the letter E mean? Do I need to configure something in the debugger? Interestingly, when I left the command argument blank, the output changed to "usage: showip hostname". Any help would be appreciated.

    ```c
    #ifdef _WIN32
    #endif
    #include <sys/types.h>
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <iostream>
    using namespace std;
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <winsock.h>

    #pragma comment(lib, "ws2_32.lib")

    int main(int argc, char *argv[])
    {
        struct addrinfo hints, *res, *p;
        int status;
        char ipstr[INET6_ADDRSTRLEN];

        if (argc != 2) {
            fprintf(stderr, "usage: showip hostname\n");
            system("PAUSE");
            return 1;
        }

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC; // AF_INET or AF_INET6 to force version
        hints.ai_socktype = SOCK_STREAM;

        if ((status = getaddrinfo(argv[1], NULL, &hints, &res)) != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status));
            system("PAUSE");
            return 2;
        }

        printf("IP addresses for %s:\n\n", argv[1]);

        for (p = res; p != NULL; p = p->ai_next) {
            void *addr;
            char *ipver;

            // get the pointer to the address itself,
            // different fields in IPv4 and IPv6:
            if (p->ai_family == AF_INET) { // IPv4
                struct sockaddr_in *ipv4 = (struct sockaddr_in *)p->ai_addr;
                addr = &(ipv4->sin_addr);
                ipver = "IPv4";
            } else { // IPv6
                struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)p->ai_addr;
                addr = &(ipv6->sin6_addr);
                ipver = "IPv6";
            }

            // convert the IP to a string and print it:
            inet_ntop(p->ai_family, addr, ipstr, sizeof ipstr);
            printf("  %s: %s\n", ipver, ipstr);
        }

        freeaddrinfo(res); // free the linked list
        system("PAUSE");
        return 0;
    }
    ```
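    A hedged pointer from the editor (an inference from the posted code, not a verified fix): the program never calls WSAStartup(), and on Windows every Winsock routine, including getaddrinfo(), fails until it has been called. The truncated "E" is consistent with gai_strerror() resolving to its wide-character variant on Windows, which printf's %s cuts off early; the WSANOTINITIALISED message happens to begin "Either the application has not called WSAStartup...". A minimal initialization at the top of main():

    ```c
    /* Required before any Winsock call on Windows. */
    WSADATA wsaData;
    if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0) {
        fprintf(stderr, "WSAStartup failed\n");
        return 1;
    }
    /* ... and call WSACleanup(); before returning. */
    ```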


  • Referencing variables in a structure / C++

    - by user1628622
    Below, I have provided a minimal example of code I created. I managed to get this code working, but I'm not sure the practice being employed is sound. In essence, what I am trying to do is have the Parameter class reference selected elements in the States class, so variables in States can be changed via Parameters. My questions: is the approach taken OK? If not, is there a better way to achieve what I am aiming for?

    Example code:

    ```cpp
    struct VAR_TYPE {
    public:
        bool is_fixed;      // If is_fixed = true, then variable is a parameter
        double value;       // Numerical value
        std::string name;   // Description of variable (to identify it by name)
    };

    struct NODE {
    public:
        VAR_TYPE X, Y, Z;   /* VAR_TYPE is a structure of primitive types */
    };

    class States {
    private:
        std::vector <NODE_ptr> node;                 // shared ptr to struct NODE
        std::vector <PROP_DICTIONARY_ptr> property;  // CAN NOT be part of Parameter
        std::vector <ELEMENT_ptr> element;           // CAN NOT be part of Parameter

    public:
        /* etc. */

        void set_X_reference ( Parameter &T , int i ) { T.push_var( &node[i]->X ); }
        void set_Y_reference ( Parameter &T , int i ) { T.push_var( &node[i]->Y ); }
        void set_Z_reference ( Parameter &T , int i ) { T.push_var( &node[i]->Z ); }

        bool get_node_bool_X( int i ) { return node[i]->X.is_fixed; }
        // repeat for Y and Z
    };

    class Parameter {
    private:
        std::vector <VAR_TYPE*> var;

    public:
        /* etc. */
    };

    int main() {
        States S;
        Parameter P;

        /* Here I initialize and set S, and do other stuff */

        // Now I assign components in States to Parameters
        for (int n = 0; n < S.size_of_nodes(); n++) {
            if ( S.get_node_bool_X(n) == true ) {
                S.set_X_reference ( P , n );
            }
            // repeat if statement for Y and Z
        }

        /* Now P points to selected data in S, and I can
         * modify the contents of S through P */
        return 0;
    }
    ```

    Update: The reason this issue cropped up is that I am working with Fortran legacy code. To sum this Fortran code up, it's a numerical simulation of a flight vehicle. The code has a fairly rigid procedural framework one must work within, which comes with a pre-defined list of allowable Fortran types. The Fortran glue code can create an instance of a C++ object (in actuality, a reference from the perspective of Fortran), but is not aware of what is contained in it (other means are used to extract C++ data into Fortran). The problem I encountered is that when a C++ module is dynamically linked to the Fortran glue code, C++ objects have to be initialized each time the C++ code is called. This happens by virtue of how the Fortran template is defined. To avoid this cycle of re-initializing objects, I plan to use States as a container class. The Fortran code allows a States object, which has an arbitrary definition, but I plan to use it to harness all relevant information about the model. The idea is to use the Parameter class (which is exposed and updated by the Fortran code) to update variables in States.
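    One hedged refinement the editor can suggest (illustrative; std::shared_ptr's aliasing constructor is standard C++11): since States already owns each NODE through a shared_ptr, Parameter could hold aliasing shared_ptrs instead of raw VAR_TYPE*, so a referenced member can never outlive its node:

    ```cpp
    #include <memory>
    #include <string>
    #include <vector>

    struct VAR_TYPE { bool is_fixed; double value; std::string name; };
    struct NODE { VAR_TYPE X, Y, Z; };

    int main() {
        std::vector<std::shared_ptr<NODE>> node{ std::make_shared<NODE>() };

        // Shares ownership of the whole NODE but points at one member:
        std::shared_ptr<VAR_TYPE> x_ref(node[0], &node[0]->X);
        x_ref->value = 42.0;   // visible through node[0]->X.value
        return 0;
    }
    ```

    With raw pointers, the current approach is fine as long as nothing erases nodes while a Parameter is alive; that invariant is exactly what the aliasing constructor makes explicit.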


  • How to resolve: 'cmd' is not recognized as an internal or external command?

    - by qwer1234
    I have searched other forums for this error; the suggested fixes always come down to either:

    1.) re-installing the OS
    2.) setting the path variable C:/Windows/System32

    The latter did not work, and as you can probably imagine, I do not want to have to re-install my OS... I am running the command "mvn jetty:run", and the following is the output, finishing with the message "'cmd' is not recognized as an internal or external command, operable program or batch file", as stated in the title of this question.

    ```text
    [INFO] Scanning for projects...
    [INFO] ------------------------------------------------------------------------
    [INFO] Building Test Tool
    [INFO] task-segment: [jetty:run]
    [INFO] ------------------------------------------------------------------------
    [INFO] Preparing jetty:run
    [WARNING] Removing: run from forked lifecycle, to prevent recursive invocation.
    [INFO] [resources:resources]
    [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
    [INFO] Copying 32 resources
    [INFO] Copying 192 resources
    [INFO] [compiler:compile]
    [INFO] Compiling 1854 source files to C:\Development\global_stock_record\test\java\Turtle\target\classes
    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Compilation failure
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\compilers\JavaScriptClassCompiler.java:[45,29] cannot find symbol symbol : class CompilerEnvirons location: package org.mozilla.javascript
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\compilers\JavaScriptClassCompiler.java:[47,29] cannot find symbol symbol : class ContextFactory location: package org.mozilla.javascript
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\compilers\JavaScriptClassCompiler.java:[49,39] cannot find symbol symbol : class ClassCompiler location: package org.mozilla.javascript.optimizer
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\compilers\JavaScriptClassCompiler.java:[181,55] cannot find symbol symbol : class CompilerEnvirons location: class net.sf.jasperreports.compilers.JavaScriptClassCompiler
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\export\JRXmlExporter.java:[99,26] package org.w3c.tools.codec does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[26,34] package org.apache.commons.digester does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[27,34] package org.apache.commons.digester does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[34,47] cannot find symbol symbol: class ObjectCreationFactory public abstract class JRBaseFactory implements ObjectCreationFactory
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[41,21] cannot find symbol symbol : class Digester location: class net.sf.jasperreports.engine.xml.JRBaseFactory
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[47,8] cannot find symbol symbol : class Digester location: class net.sf.jasperreports.engine.xml.JRBaseFactory
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\JRBaseFactory.java:[56,25] cannot find symbol symbol : class Digester location: class net.sf.jasperreports.engine.xml.JRBaseFactory
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\Code39Component.java:[28,29] package org.krysalis.barcode4j does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\BarcodeComponent.java:[41,29] package org.krysalis.barcode4j does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\Code39Component.java:[66,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.Code39Component
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\BarcodeComponent.java:[179,29] cannot find symbol symbol : class HumanReadablePlacement location: class net.sf.jasperreports.components.barcode4j.BarcodeComponent
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN128Component.java:[26,29] package org.krysalis.barcode4j does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\DataMatrixComponent.java:[26,45] package org.krysalis.barcode4j.impl.datamatrix does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\FourStateBarcodeComponent.java:[26,29] package org.krysalis.barcode4j does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\UPCAComponent.java:[28,29] package org.krysalis.barcode4j does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\UPCEComponent.java:[28,29] package org.krysalis.barcode4j does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN13Component.java:[28,29] package org.krysalis.barcode4j does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN8Component.java:[28,29] package org.krysalis.barcode4j does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\Interleaved2Of5Component.java:[28,29] package org.krysalis.barcode4j does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN128Component.java:[57,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.EAN128Component
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\DataMatrixComponent.java:[62,22] cannot find symbol symbol : class SymbolShapeHint location: class net.sf.jasperreports.components.barcode4j.DataMatrixComponent
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\FourStateBarcodeComponent.java:[76,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.FourStateBarcodeComponent
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\UPCAComponent.java:[56,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.UPCAComponent
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\UPCEComponent.java:[56,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.UPCEComponent
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN13Component.java:[56,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.EAN13Component
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\EAN8Component.java:[56,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.EAN8Component
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\Interleaved2Of5Component.java:[60,29] cannot find symbol symbol : class ChecksumMode location: class net.sf.jasperreports.components.barcode4j.Interleaved2Of5Component
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRHibernateAbstractDataSource.java:[36,25] package org.hibernate.type does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[49,20] package org.hibernate does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[50,20] package org.hibernate does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[51,20] package org.hibernate does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[52,20] package org.hibernate does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[53,20] package org.hibernate does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[54,25] package org.hibernate.type does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRHibernateAbstractDataSource.java:[173,38] cannot find symbol symbol : class Type location: class net.sf.jasperreports.engine.data.JRHibernateAbstractDataSource
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[66,35] cannot find symbol symbol : class Type location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[89,9] cannot find symbol symbol : class Session location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[90,9] cannot find symbol symbol : class Query location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[92,9] cannot find symbol symbol : class ScrollableResults location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[359,8] cannot find symbol symbol : class Type location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\query\JRHibernateQueryExecuter.java:[474,8] cannot find symbol symbol : class ScrollableResults location: class net.sf.jasperreports.engine.query.JRHibernateQueryExecuter
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barbecue\BarbecueFillComponent.java:[40,31] package net.sourceforge.barbecue does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[38,27] package org.apache.tools.ant does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[39,27] package org.apache.tools.ant does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[40,27] package org.apache.tools.ant does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[41,33] package org.apache.tools.ant.types does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[42,33] package org.apache.tools.ant.types does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[43,43] package org.apache.tools.ant.types.resources does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[44,32] package org.apache.tools.ant.util does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[45,32] package org.apache.tools.ant.util does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRBaseAntTask.java:[34,36] package org.apache.tools.ant.taskdefs does not exist
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRBaseAntTask.java:[41,35] cannot find symbol symbol: class MatchingTask public class JRBaseAntTask extends MatchingTask
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[74,9] cannot find symbol symbol : class Path location: class net.sf.jasperreports.ant.JRAntXmlExportTask
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[76,9] cannot find symbol symbol : class Path location: class net.sf.jasperreports.ant.JRAntXmlExportTask
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[86,23] cannot find symbol symbol : class Path location: class net.sf.jasperreports.ant.JRAntXmlExportTask
    C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[104,8] cannot find symbol symbol : class Path
    ```
location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[131,8] cannot find symbol symbol : class Path location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[145,30] cannot find symbol symbol : class BuildException location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[183,41] cannot find symbol symbol : class BuildException location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[211,33] cannot find symbol symbol : class BuildException location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\ant\JRAntXmlExportTask.java:[276,32] cannot find symbol symbol : class BuildException location: class net.sf.jasperreports.ant.JRAntXmlExportTask C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\TransformedPropertyRule.java:[27,34] package org.apache.commons.digester does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\xml\TransformedPropertyRule.java:[37,54] cannot find symbol symbol: class Rule public abstract class TransformedPropertyRule extends Rule C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\data\mondrian\MondrianDataAdapterService.java:[29,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\data\mondrian\MondrianDataAdapterService.java:[30,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\data\mondrian\MondrianDataAdapterService.java:[31,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\data\mondrian\MondrianDataAdapterService.java:[45,9] cannot find symbol symbol : class Connection location: class net.sf.jasperreports.data.mondrian.MondrianDataAdapterService C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[40,10] package jxl does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[41,10] package jxl does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[42,10] package jxl does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[43,20] package jxl.read.biff does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[66,9] cannot find symbol symbol : class Workbook location: class net.sf.jasperreports.engine.data.JRXlsDataSource C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\engine\data\JRXlsDataSource.java:[83,24] cannot find symbol symbol : class Workbook location: class net.sf.jasperreports.engine.data.JRXlsDataSource 
C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\olap\xmla\JRXmlaMember.java:[26,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\olap\result\JROlapMember.java:[26,20] package mondrian.olap does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\olap\xmla\JRXmlaMember.java:[89,8] cannot find symbol symbol : class Member location: class net.sf.jasperreports.olap.xmla.JRXmlaMember C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\olap\result\JROlapMember.java:[46,1] cannot find symbol symbol : class Member location: interface net.sf.jasperreports.olap.result.JROlapMember C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\web\actions\AbstractAction.java:[43,36] package org.codehaus.jackson.annotate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\web\actions\AbstractAction.java:[49,1] cannot find symbol symbol: class JsonTypeInfo @JsonTypeInfo(use=JsonTypeInfo.Id.NAME, include=JsonTypeInfo.As.PROPERTY, property="actionName") C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[32,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[33,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[34,29] package org.krysalis.barcode4j does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[35,34] package org.krysalis.barcode4j.impl does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[36,42] package org.krysalis.barcode4j.impl.codabar does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[37,42] package org.krysalis.barcode4j.impl.code128 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[38,42] package org.krysalis.barcode4j.impl.code128 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[39,41] package org.krysalis.barcode4j.impl.code39 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[40,45] package org.krysalis.barcode4j.impl.datamatrix does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[41,45] package org.krysalis.barcode4j.impl.datamatrix does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[42,44] package org.krysalis.barcode4j.impl.fourstate does not exist 
C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[43,44] package org.krysalis.barcode4j.impl.fourstate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[44,44] package org.krysalis.barcode4j.impl.fourstate does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[45,42] package org.krysalis.barcode4j.impl.int2of5 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[46,41] package org.krysalis.barcode4j.impl.pdf417 does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[47,42] package org.krysalis.barcode4j.impl.postnet does not exist C:\Development\global_stock_record\test\java\Turtle\src\main\java\net\sf\jasperreports\components\barcode4j\AbstractBarcodeEvaluator.java:[48,41] package org.krysalis.barcode4j.impl.upcean does not exist [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 17 seconds [INFO] Finished at: Fri Dec 07 11:46:28 EST 2012 [INFO] Final Memory: 27M/63M [INFO] ------------------------------------------------------------------------
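    Two things stand out in this trace. The 'cmd' message itself usually means that cmd.exe cannot be found by a forked process, i.e. %SystemRoot%\system32 is missing from the PATH that Maven actually sees, so it is worth checking the PATH inside the same shell that runs mvn. Separately, every compilation error above is a missing compile-time dependency of the JasperReports sources in this project (org.mozilla.javascript, org.apache.commons.digester, org.krysalis.barcode4j, org.hibernate, org.apache.tools.ant, jxl, mondrian.olap, org.codehaus.jackson, net.sourceforge.barbecue). One plausible fix is declaring those jars in the pom.xml. A minimal sketch for two of them follows; the group/artifact IDs and versions are illustrative and should be verified against Maven Central for your JasperReports version:
    <!-- Illustrative pom.xml additions; verify coordinates and versions on Maven Central -->
    <dependency>
      <groupId>commons-digester</groupId>
      <artifactId>commons-digester</artifactId>
      <version>2.1</version>
    </dependency>
    <dependency>
      <groupId>rhino</groupId>
      <artifactId>js</artifactId>
      <version>1.7R2</version>
    </dependency>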

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) into a temp table. A stored procedure is then run which expects the temp table to exist, and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create", aka an insert of a new item.
    <Elements>
      <Element ID="1234">
        <Attr ID="221">Value</Attr>
        <Attr ID="225">287</Attr>
        <Attr ID="234">
          <Element ID="99825">
            <Attr ID="7">Value1</Attr>
            <Attr ID="8">Value2</Attr>
            <Attr ID="9" Action="delete" />
          </Element>
          <Element ID="99826" Action="delete" />
          <Element ID="0" Type="24">
            <Attr ID="7">Value4</Attr>
            <Attr ID="8">Value5</Attr>
            <Attr ID="9">Value6</Attr>
          </Element>
          <Element ID="0" Type="24">
            <Attr ID="7">Value7</Attr>
            <Attr ID="8">Value8</Attr>
            <Attr ID="9">Value9</Attr>
          </Element>
        </Attr>
        <Rel ID="3827" Action="delete" />
        <Rel ID="2284" Role="parent">
          <Element ID="3827" />
          <Element ID="3829" />
          <Attr ID="665">1</Attr>
        </Rel>
        <Rel ID="0" Type="23" Role="child">
          <Element ID="3830" />
          <Attr ID="67" />
        </Rel>
      </Element>
      <Element ID="0" Type="87">
        <Attr ID="221">Value</Attr>
        <Attr ID="225">569</Attr>
        <Attr ID="234">
          <Element ID="0" Type="24">
            <Attr ID="7">Value10</Attr>
            <Attr ID="8">Value11</Attr>
            <Attr ID="9">Value12</Attr>
          </Element>
        </Attr>
      </Element>
      <Element ID="1235" Action="delete" />
    </Elements>
    Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by JavaScript). And there may be an Action="delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as:
    ElementID  2284
    ElementID1 1234
    ElementID2 3827
    Having multiple children in one relationship isn't currently supported, but it would be nice later.
    Do this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle Unicode characters as well as very long strings (10k+).
    UPDATE: I have had this working for some time, and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements (a worked form-post example follows below).
    Existing data elements
    Example: input name e12345_a678 (element 12345, attribute 678); the input value is the value of the attribute.
    New elements
    JavaScript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items.
    var newid = 0;
    function metadataAdd(reference, nameid, value) {
      var t = document.createElement('input');
      t.setAttribute('name', nameid);
      t.setAttribute('id', nameid);
      t.setAttribute('type', 'hidden');
      t.setAttribute('value', value);
      reference.appendChild(t);
    }
    function multiAdd(target, parentelementid, attrid, elementtypeid) {
      var proto = document.getElementById('a' + attrid + '_proto');
      var instance = document.createElement('p');
      target.parentNode.parentNode.insertBefore(instance, target.parentNode);
      var thisid = ++newid;
      instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_');
      instance.id = 'n' + thisid;
      instance.className += ' new';
      metadataAdd(instance, 'n' + thisid + '_p', parentelementid);
      metadataAdd(instance, 'n' + thisid + '_c', attrid);
      metadataAdd(instance, 'n' + thisid + '_t', elementtypeid);
      return false;
    }
    Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). All attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created:
    n1_t, value is the elementtype of the element to be created
    n1_p, value is the parent id of the element (if it is a relationship)
    n1_c, value is the child id of the element (if it is a relationship)
    Deleting elements
    A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly.
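    To make the scheme concrete, here is a hypothetical POST body (the IDs and values are made up for illustration; the annotations follow the naming convention and the multiAdd code above) for an edit that updates one attribute, deletes one element, and creates one new element:
    e12345_a678=Blue    (update attribute 678 on existing element 12345)
    e12399_t=0          (type set to 0: delete element 12399)
    n1_t=24             (create new element n1 with elementtype 24)
    n1_p=1234           (n1's parent element id)
    n1_c=234            (the attribute id n1 was created under)
    n1_a7=Value4        (attribute 7 on the new element n1)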
    When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code):
    Set Data = Server.CreateObject("ADODB.Recordset")
    Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
    Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
    Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
    Data.CursorLocation = adUseClient
    Data.CursorType = adOpenDynamic
    Data.Open
    This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added with negative IDs to distinguish them from regular elements (rather than requiring a separate column to indicate whether a row is new or addresses an existing element). I use a regular expression to tear apart the form keys:
    "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"
    Then, adding an attribute looks like this:
    Data.AddNew
    ElementID.Value = DataID
    AttrID.Value = Integerize(Matches(0).SubMatches(3))
    AttrValue.Value = Request.Form(Key)
    Data.Update
    ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:
    Set Cmd = Server.CreateObject("ADODB.Command")
    With Cmd
      Set .ActiveConnection = MyDBConn
      .CommandType = adCmdStoredProc
      .CommandText = "DataPost"
      .Prepared = False
      .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
      .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
    End With
    Result.Open Cmd ' previously created recordset object with options set
    Here's the function that does the XML conversion:
    Private Function XMLFromRecordset(Recordset)
      Dim Stream
      Set Stream = Server.CreateObject("ADODB.Stream")
      Stream.Open
      Recordset.Save Stream, adPersistXML
      Stream.Position = 0
      XMLFromRecordset = Stream.ReadText
    End Function
    Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
    Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:
    CREATE PROCEDURE [dbo].[DataPost]
      @ElementMetaData ntext,
      @ElementData ntext
    AS
    DECLARE @hdoc int
    --- snip ---
    EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'
    INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
    SELECT *
    FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
    WITH (
      ElementID int,
      ElementTypeID int,
      ElementID1 int,
      ElementID2 int
    )
    ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation
    EXEC sp_xml_removedocument @hdoc
    --- snip ---
    UPDATE E
    SET E.ElementTypeID = M.ElementTypeID
    FROM Element E
    INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
    WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1
    The following query does the correlation of the negative new element ids to the newly inserted ones:
    UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
    SET NewElementID = Scope_Identity() - @@RowCount + DataID
    WHERE ElementID < 0
    Other set-based queries do all the other work: validating that the attributes are allowed and are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me, as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
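    For completeness, a minimal sketch of how the @ElementData document could be shredded in the same style (the temp-table name #ElementData and its columns are assumptions inferred from the recordset definition above; the real procedure's snipped sections may differ):
    EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'
    INSERT #ElementData (ElementID, AttrID, Value) -- assumed staging table for attribute values
    SELECT *
    FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
    WITH (
      ElementID int,
      AttrID int,
      Value ntext
    )
    EXEC sp_xml_removedocument @hdoc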

    Read the article

  • SOA Suite 11g Native Format Builder Complex Format Example

    - by bob.webster
    This rather long posting details the steps required to process a grouping of fixed-length records using Format Builder. If it’s 10 pm and you’re feeling beat you might want to leave this until tomorrow. But if it’s 10 pm and you need to get a Format Builder Complex template done, read on… The goal is to process individual orders from a file using the 11g File Adapter and Format Builder.
    Sample Data
    ===========
    001Square Widget            0245.98
    102Triagular Widget         1120.00
    403Circular Widget           0099.45
    ORD8898302/01/2011
    301Hexagon Widget         1150.98
    ORD6735502/01/2011
    The file contains fixed-length records representing a series of logical Order records. Each order consists of a number of item records starting with a 3-digit number, followed by a single Summary Record which starts with the constant ORD. How can this file be processed so that the first poll returns the first order?
    001Square Widget            0245.98
    102Triagular Widget         1120.00
    403Circular Widget           0099.45
    ORD8898302/01/2011
    And the second poll returns the second order?
    301Hexagon Widget           1150.98
    ORD6735502/01/2011
    Note: if you need more than one order per poll, that’s also possible; see the “Multiple Messages” field in the “File Adapter Step 6 of 9” snapshot further down.
    To follow along with this example you will need
    - Studio Edition Version 11.1.1.4.0, with the
    - SOA Extension for JDeveloper 11.1.1.4.0 installed
    Both can be downloaded from here: http://www.oracle.com/technetwork/middleware/soasuite/downloads/index.html You will not need a running WebLogic Server domain to complete the steps and Format Builder tests in this article.
    Start with a SOA Composite containing a File Adapter
    The Format Builder is part of the File Adapter, so start by creating a new SOA Project and Composite. Here is a quick summary for those not familiar with these steps:
    - Start JDeveloper
    - From the Main Menu choose File->New
    - In the New Gallery window that opens, expand the “General” category and select the Applications node. Then choose SOA Application from the Items section on the right. Finally press the OK button.
    - In Step 1 of the “Create SOA Application wizard” that appears, enter an Application Name and a Directory of your choice, then press the Next button.
    - In Step 2 of the “Create SOA Application wizard”, press the Next button, leaving all entries as defaulted.
    - In Step 3 of the “Create SOA Application wizard”, enter a composite name of your choice and press the Finish button.
    These steps result in a new Application and SOA Project. The SOA Project contains a composite.xml file which is opened and shown below. For our example we have not defined a Mediator or a BPEL process to minimize the steps, but one or the other would eventually be needed to use the File Adapter we are about to create. Drag and drop the File Adapter icon from the Component Palette onto either the LEFT side of the diagram under “Exposed Services” or the right side under “External References”. (See the Green Circle in the image below.) Placing the adapter on the left side indicates the file being processed is inbound to the composite; if the adapter is placed on the right side then the data is outbound to a file. Note that the same Format Builder definition can be used in both directions.
    For example, we could use the format with a File Adapter on the left side of the composite to parse fixed data into XML, modify the data in our Composite or BPEL process, and then use the same Format Builder definition with a File Adapter on the right side of the composite to write the data back out in the same fixed data format. When the File Adapter is dropped on the Composite, the File Adapter Wizard appears. Skip past the first page, Step 1 of 9, by pressing the Next button. In Step 2 enter a service name of your choice as shown below, then press Next. When the Native Format Builder appears, skip the welcome page by pressing Next. Also press the Next button to accept the settings on Step 3 of 9. On Step 4, select Read File and press the Next button as shown below. On Step 5 enter a directory that will contain a file with the input data, then press the Next button as shown below. In Step 6, enter *.txt or another file format to select input files from the input directory mentioned in Step 5. ALSO check the “Files contain Multiple Messages” checkbox and set the “Publish Messages in Batches of” field to 1. The value can be set higher to increase the number of logical order group records returned on each poll of the file adapter. In other words, it determines the number of Orders that will be sent to each instance of a Mediator or Composite processing using the File Adapter. Skip Step 7 by pressing the Next button. In Step 8 press the Gear Icon on the right side to load the Native Format Builder.
    Native Format Builder appears
    Before diving into the format, here is an overview of the process.
    Approach - Bottom up
    Assuming an Order is a grouping of item records and a summary record…
    - Define a separate Complex Type for each Record Type found in the group. (One for itemRecord and one for summaryRecord)
    - Define a Complex Type to contain the Group of Record types defined above (LogicalOrderRecord)
    - Define a top level element to represent an order. (order) The order element will be of type LogicalOrderRecord
    Defining the Format
    In Step 1 select “Create new” and “Complex Type” and “Next”. In Step 2 browse to and select a file containing the test data shown at the start of this article. A link is provided at the end of this article to download a file containing the test data. Press the Next button. In Step 3, complex types must be defined for each type of input record. Select the Root-Element and click on the Add Complex Type icon. This creates a new empty complex type definition shown below. The fastest way to create the definition is to highlight the first line of the Sample File data and drag the line onto the <new_complex_type>. Format Builder introspects the data and provides a grid to define additional fields. Change the “Complex Type Name” to “itemRecord”. Then click on the ruler to indicate the position of fixed columns. Drag the red triangle icons to the exact columns if necessary. Double click on an existing red triangle to remove an unwanted entry. In the case below, fields are defined in columns 0-3, 4-28, 29-eol. When the field definitions are correct, press the “Generate Fields” button. Field entries named C1, C2 and C3 will be created as shown below. Click on the field names and rename them from C1->itemNum, C2->itemDesc and C3->itemCost. When all the fields are correctly defined, press OK to save the complex type. Next, the process is repeated to define a Complex Type for the SummaryRecord.
    Select the Root-Element in the schema tree and press the new complex type icon. Then highlight and drag the Summary Record from the sample data onto the <new_complex_type>. Change the complex type name to “summaryRecord”. Mark the fixed fields for Order Number and Order Date. Press the Generate Fields button and rename C1 and C2 to itemNum and orderDate respectively. The last complex type to be defined is a type to hold the group of items and the summary record. Select the Root-Element in the schema tree and click the new complex type icon. Select the “<new_complex_type>” entry and click the pencil icon. On the Complex Type Details page, change the name and type of each input field. Change line 1 to be named item and set the Type to “itemRecord”. Change line 2 to be named summary and set the Type to “summaryRecord”. We also need to indicate that itemRecords repeat in the input file. Click the pencil icon at the right side of the item line. On the Edit Details page change the “Max Occurs” entry from 1 to UNBOUNDED. We also need to indicate how to identify an itemRecord. Since each item record has “.” in column 32, we can use this fact to differentiate an item record from a summary record. Change the “Look Ahead” field to value 32 and enter a period in the “Look For” field. Press the OK button to save the entry. Finally, it’s time to create a top level element to represent an order. Select the “Root-Element” in the schema tree and press the New element icon. Click on the <new_element> and press the pencil icon. Set the Element Name to “order” and change the Data Type to “logicalOrderRecord”. Press the OK button to save the element definition. The final definition should match the screenshot below. Press the Next button to view the definition source. Press the Test button to test the definition, then press the Green Triangle icon to run the test. And we are presented with an unwelcome error. The error states that the processor ran out of data while working through the definition. The processor was unable to differentiate between itemRecords and summaryRecords and therefore treated the entire file as a list of itemRecords. At end of file, the “summary” portion of the logicalOrderRecord remained unprocessed but mandatory. The root cause of this error is the loss of our “lookAhead” definition used to identify itemRecords. This appears to be a bug in the Native Format Builder 11.1.1.4.0. Luckily, a simple workaround exists. Press the Cancel button and return to the “Step 4 of 4” window. Manually add nxsd:lookAhead="32" nxsd:lookFor="." attributes after the maxOccurs attribute of the item element, as shown in the highlighted text below. When the lookAhead and lookFor attributes have been added, press the Test button, and on the Test page press the Green Triangle. The test is now successful: the first order in the file is returned by the File Adapter. Below is a complete listing of the Result XML from the right column of the screen above.
    Try running it
    The downloaded input test file and completed schema file can be used for testing without following all the Native Format Builder steps in this example. Use the following link to download a file containing the sample data: Download Sample Input Data. This is the best approach, rather than cutting and pasting the input data at the top of the article. Since the data is fixed length, it's very important to watch out for trailing spaces in the data and to ensure an eol character at the end of every line.
    The download file is correctly formatted. The final schema definition can be downloaded at the following link: Download Completed Schema Definition.
    - Save the inputData.txt file to a known location, like the xsd folder in your project.
    - Save the inputData_6.xsd file to the xsd folder in your project.
    - At Step 1 in the Native Format Builder wizard (as shown above), check the “Edit existing” radio button, then browse to and select the inputData_6.xsd file.
    - At Step 2 of the Format Builder configuration wizard (as shown above), supply the path and filename for the inputData.txt file.
    - You can then proceed to the test page and run a test.
    - Remember, the wizard bug will drop the lookAhead and lookFor attributes; you will need to manually add nxsd:lookAhead="32" nxsd:lookFor="." after the maxOccurs attribute of the item element in the LogicalOrderRecord Complex Type (as shown above, and sketched below).
    Good luck with your Format project!
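    To make the workaround concrete, here is a minimal sketch of what the relevant schema fragment looks like once the two attributes are restored by hand. Only the lookAhead/lookFor placement comes from the article; the surrounding element declarations, the tns prefix, and the namespace wiring are abbreviated assumptions, and the exact schema generated by the wizard may differ:
    <xsd:complexType name="logicalOrderRecord">
      <xsd:sequence>
        <!-- lookAhead/lookFor restore the "period in column 32" test for item records -->
        <xsd:element name="item" type="tns:itemRecord" maxOccurs="unbounded"
                     nxsd:lookAhead="32" nxsd:lookFor="." />
        <xsd:element name="summary" type="tns:summaryRecord" />
      </xsd:sequence>
    </xsd:complexType>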

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance—and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:
    Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
    Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
    Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.
    I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.
    Inheritance Mapping with Entity Framework Code First
    All of the inheritance mapping strategies that we discuss in this series will be implemented by EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code-First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, Code First not only fully supports all the strategies but also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise in comparison to CTP4.
    A Note For Those Who Follow Other Entity Framework Approaches
    If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanations around each of the strategies apply to all approaches, be it Code First or others.
    A Note For Those Who are New to Entity Framework and Code-First
    If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First like Complex Types and Shared Primary Key Associations.
    A Top Down Development Scenario
    These posts take a top-down approach; it is assumed that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C#, and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables.
    I’ll show some tricks along the way that help you deal with imperfect table layouts. Let’s start with the mapping of entity inheritance.
    The Domain Model
    In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of BillingDetail class. As for now, we support CreditCard and BankAccount:
    Implement the Object Model with Code First
    As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, which is BillingDetail. Code First will find the other classes in the hierarchy based on Reachability Convention.
    public abstract class BillingDetail
    {
        public int BillingDetailId { get; set; }
        public string Owner { get; set; }
        public string Number { get; set; }
    }
    public class BankAccount : BillingDetail
    {
        public string BankName { get; set; }
        public string Swift { get; set; }
    }
    public class CreditCard : BillingDetail
    {
        public int CardType { get; set; }
        public string ExpiryMonth { get; set; }
        public string ExpiryYear { get; set; }
    }
    public class InheritanceMappingContext : DbContext
    {
        public DbSet<BillingDetail> BillingDetails { get; set; }
    }
    This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.
    Polymorphic Queries
    LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:
    IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
    List<BillingDetail> billingDetails = linqQuery.ToList();
    Or the same query in EntitySQL:
    string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
    ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                            .CreateQuery<BillingDetail>(eSqlQuery);
    List<BillingDetail> billingDetails = objectQuery.ToList();
    linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class; the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.
    Non-polymorphic Queries
    All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. On the other hand, non-polymorphic queries are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
    For example, the following query returns only instances of BankAccount:
    IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;
    EntitySQL has an OFTYPE operator that does the same thing:
    string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";
    In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:
    string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                         FROM BillingDetails AS b
                         WHERE b IS OF(Model.BankAccount)";
    (Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to change "Model" to your own namespace name.)
    Table per Class Hierarchy (TPH)
    An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don’t have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It’s the best-performing way to represent polymorphism—both polymorphic and non-polymorphic queries perform well—and it’s even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward.
    Discriminator Column
    As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn’t a property of the persistent class in our object model; it’s used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names—in this case, “BankAccount” or “CreditCard”. EF Code First automatically sets and retrieves the discriminator values.
    TPH Requires Properties in Subclasses to be Nullable in the Database
    TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won’t have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.
    TPH Violates the Third Normal Form
    Another important issue is normalization. We’ve created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
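    As a side note (an option, not something this post prescribes): since TPH forces subclass columns to be nullable anyway, you can make the CLR model mirror that by declaring value-type subclass properties as nullable, so the object model doesn't pretend a NOT NULL constraint exists where the table has none. A minimal sketch:
    public class CreditCard : BillingDetail
    {
        // int? maps to the same (INT, NULL) column that TPH produces for CardType,
        // so an unset value round-trips as null instead of a silent default
        public int? CardType { get; set; }
        public string ExpiryMonth { get; set; }
        public string ExpiryYear { get; set; }
    }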
Generated SQL Query
Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

SELECT
 [Extent1].[Discriminator] AS [Discriminator],
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift],
 [Extent1].[CardType] AS [CardType],
 [Extent1].[ExpiryMonth] AS [ExpiryMonth],
 [Extent1].[ExpiryYear] AS [ExpiryYear]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

And the non-polymorphic query for the BankAccount subclass generates this SQL statement:

SELECT
 [Extent1].[BillingDetailId] AS [BillingDetailId],
 [Extent1].[Owner] AS [Owner],
 [Extent1].[Number] AS [Number],
 [Extent1].[BankName] AS [BankName],
 [Extent1].[Swift] AS [Swift]
FROM [dbo].[BillingDetails] AS [Extent1]
WHERE [Extent1].[Discriminator] = 'BankAccount'

Note how Code First adds a restriction on the discriminator column and also how it only selects those columns that belong to the BankAccount entity.

Change Discriminator Column Data Type and Values With Fluent API
Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
{
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
}

Changing the data type of the discriminator column is also interesting. In the above code, we passed strings to the HasValue method, but this method is defined to accept an object:

public void HasValue(object value);

Therefore, if, for example, we pass values of type int, Code First not only uses our desired values (i.e. 1 and 2) in the discriminator column but also changes the column type to (INT, NOT NULL):

modelBuilder.Entity<BillingDetail>()
            .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
            .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

Summary
In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn’t expose you to this problem.

References
ADO.NET team blog
Java Persistence with Hibernate book

    Read the article

  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
Getting Started with Business Transformations
A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment, from the biggest-picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined as the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle. (see Figure 1)
Fig. 1: Business Process Management Lifecycle
Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large-scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and to the type of modeling that will derive near-term tangible results. For instance, a project in American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place, and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forefront of both their minds are questions such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process-to-application maps, Event-Driven Process Chains, etc. provide this level of detail and facilitate the resolution of such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail. (see Figure 2)
Fig. 2: The Level Concept, Generic Process Hierarchy
Some of the remaining questions are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, process transparency, and clarity in business process objectives and strategy.
The Level Concept in Brief
Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications.
The number of levels and the names of these levels are flexible, and can be fit to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short- and long-term process description. It is at Level 1 (in this case the Process Area level) that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization.
Key Considerations
Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process Map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends not only on the materials used to construct it (process areas), but also on the layout plan and knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high-level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss' favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are three key factors to consider when building an Enterprise Process Map:
Company Strategic Focus
Process Categorization: Customer is Core
End-to-end versus Functional Processes
Company Strategic Focus
As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From the Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives and, in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see Figure 3).
Fig. 3: Company X Enterprise Process Map
In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, with 10 of its process areas belonging to customer-focused categories.
Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see Figure 4).
Fig. 4: Company Y Enterprise Process Map
Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas, which when executed deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as key support processes, as they span the process lifecycle. Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show high-level assignments of responsibility to organizational units, or the application types that support the process areas. It is possible that hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is which formats and focuses are chosen to truly represent the direction of the company, and to serve as a driver for focusing the business on the strategic objectives set forth therein.
Process Categorization: Customer is Core
In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Company Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how one categorizes processes is typically more complex than simply choosing object shapes and colors. Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, cross-industry and across the globe, one invariable has surfaced with such strength as to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer. Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company.
It is the value chain, beginning with the customer in mind, and ending with the fulfillment of that customer, that becomes the core or the centerpiece of the Enterprise Process Map. (See Figure 5)
Fig. 5: Company Z Enterprise Process Map
Company Z has what may be viewed as several different perspectives or "cuts" baked into its Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom): Management Processes, the Core Value Chain, and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, this company has divided the chain into the focus areas of its two primary business lines, Foods and Beverages. Does this mean that areas such as Strategy, Information Management or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, use of category names such as "Core," "Management" or "Support" can be a touchy subject. What is important to understand is that, no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc.
End-to-end versus Functional Processes
Every level of process, high or low (function, task, activity, process/work step, whatever an organization calls it), should add value to the flow of business in an organization. Suppose that within the process "Deliver package," there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each having its own value add, build up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered." Figure 6 shows a simple example: just as the package can only be delivered (outcome of the process) after first being retrieved, loaded, and the travel destination reached (outcomes of the process steps), some higher level of process "Play Practical Joke" (e.g., main process or process area) cannot be completed until a package is delivered. It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, it cannot be created, so the end-to-end process breaks.
Fig. 6: End to End Process Construction
That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy.
But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out, though not discussed in depth here, the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. An Enterprise Process Map, then, should be concerned with both views: the building blocks, and access points to the business-critical end-to-end processes which they construct. Without the functional building blocks, all streams of work needed for any business transformation would be a lost mess of process disorganization. End-to-end views are essential for optimization in context, understanding customer impacts, baselining all project phases, and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to the end-to-end processes, which are most valuable to them. (See Figures 7 and 8)
Fig. 7: Simplified Enterprise Process Map with end-to-end Access Point
The above examples show two unique ways to achieve a successful Enterprise Process Map. The first example is a simple map that shows a high-level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus.
Fig. 8: Detailed Enterprise Process Map showing connected Functional Processes
The second example shows a more complex arrangement and categorization of functional processes (the name of each process area has been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map depends, in large part, on the orientation of its audience and the complexity of the landscape at the highest level. If both are not apparent, the Enterprise Process Map is missing an opportunity to serve as a holistic, high-level view.
Conclusion
In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that is put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding, and business transformation efforts. Every Enterprise Process Map will and should be different when looking across organizations. Its design will be driven by company strategy, a level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations of Enterprise Process Maps is not a prescriptive "how to" guide. However, a company attempting to create one may not have the practical BPM experience to truly explore its options or the impacts on the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler of the highest level of process mapping be cognizant of the message he is delivering and the factors at hand.
Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information, please see Oracle Business Process Management.

    Read the article

  • Coherence - How to develop a custom push replication publisher

    - by cosmin.tudor(at)oracle.com
CoherencePushReplicationDB.zip
In the example below, I describe a way of developing a custom push replication publisher that publishes data to a database via JDBC. This example can easily be changed to publish data to other receivers (JMS, ...) by changing step 2 and making small changes to step 3; both steps are presented below. I've used Eclipse as the development tool. To develop a custom push replication publisher we need to go through 6 steps:
Step 1: Create a custom publisher scheme class.
Step 2: Create a custom publisher class that defines what the publisher does.
Step 3: Create a class that performs the actions (publishing to JMS, a database, etc.) for the custom publisher.
Step 4: Register the new publisher against a ContentHandler.
Step 5: Add the new custom publisher to the cache configuration file.
Step 6: Add the custom publisher scheme class to the POF configuration file.
All these steps are detailed below.
Step 1: In the Coherence Eclipse project, create a class called CustomPublisherScheme that extends com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme. In this class, define the elements of the custom-publisher-scheme element. For instance, for a CustomPublisherScheme that looks like this:

<sync:publisher>
  <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name>
  <sync:publisher-scheme>
    <sync:custom-publisher-scheme>
      <sync:jdbc-string>jdbc:oracle:thin:@machine-name:1521:XE</sync:jdbc-string>
      <sync:username>hr</sync:username>
      <sync:password>hr</sync:password>
    </sync:custom-publisher-scheme>
  </sync:publisher-scheme>
</sync:publisher>

the code is:

package com.oracle.coherence;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import com.oracle.coherence.patterns.pushreplication.Publisher;
import com.oracle.coherence.configuration.Configurable;
import com.oracle.coherence.configuration.Mandatory;
import com.oracle.coherence.configuration.Property;
import com.oracle.coherence.configuration.parameters.ParameterScope;
import com.oracle.coherence.environment.Environment;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.util.ExternalizableHelper;

@Configurable
public class CustomPublisherScheme extends com.oracle.coherence.patterns.pushreplication.publishers.AbstractPublisherScheme
{
    private static final long serialVersionUID = 1L;

    private String jdbcString;
    private String username;
    private String password;

    public String getJdbcString()
    {
        return this.jdbcString;
    }

    @Property("jdbc-string")
    @Mandatory
    public void setJdbcString(String jdbcString)
    {
        this.jdbcString = jdbcString;
    }

    public String getUsername()
    {
        return username;
    }

    @Property("username")
    @Mandatory
    public void setUsername(String username)
    {
        this.username = username;
    }

    public String getPassword()
    {
        return password;
    }

    @Property("password")
    @Mandatory
    public void setPassword(String password)
    {
        this.password = password;
    }

    public Publisher realize(Environment environment, ClassLoader classLoader, ParameterScope parameterScope)
    {
        return new CustomPublisher(getJdbcString(), getUsername(), getPassword());
    }

    public void readExternal(DataInput in) throws IOException
    {
        super.readExternal(in);
        this.jdbcString = ExternalizableHelper.readSafeUTF(in);
        this.username = ExternalizableHelper.readSafeUTF(in);
        this.password = ExternalizableHelper.readSafeUTF(in);
    }

    public void writeExternal(DataOutput out) throws IOException
    {
        super.writeExternal(out);
        ExternalizableHelper.writeSafeUTF(out, this.jdbcString);
        ExternalizableHelper.writeSafeUTF(out, this.username);
        ExternalizableHelper.writeSafeUTF(out, this.password);
    }

    public void readExternal(PofReader reader) throws IOException
    {
        super.readExternal(reader);
        this.jdbcString = reader.readString(100);
        this.username = reader.readString(101);
        this.password = reader.readString(102);
    }

    public void writeExternal(PofWriter writer) throws IOException
    {
        super.writeExternal(writer);
        writer.writeString(100, this.jdbcString);
        writer.writeString(101, this.username);
        writer.writeString(102, this.password);
    }
}

Step 2: Define what the CustomPublisher should do by creating a new Java class called CustomPublisher that implements com.oracle.coherence.patterns.pushreplication.Publisher:

package com.oracle.coherence;

import com.oracle.coherence.patterns.pushreplication.EntryOperation;
import com.oracle.coherence.patterns.pushreplication.Publisher;
import com.oracle.coherence.patterns.pushreplication.exceptions.PublisherNotReadyException;

import java.io.BufferedWriter;
import java.util.Iterator;

public class CustomPublisher implements Publisher
{
    private String jdbcString;
    private String username;
    private String password;
    private transient BufferedWriter bufferedWriter;

    public CustomPublisher()
    {
    }

    public CustomPublisher(String jdbcString, String username, String password)
    {
        this.jdbcString = jdbcString;
        this.username = username;
        this.password = password;
        this.bufferedWriter = null;
    }

    public String getJdbcString()
    {
        return this.jdbcString;
    }

    public String getUsername()
    {
        return username;
    }

    public String getPassword()
    {
        return password;
    }

    public void publishBatch(String cacheName, String publisherName, Iterator<EntryOperation> entryOperations)
    {
        DatabasePersistence databasePersistence = new DatabasePersistence(jdbcString, username, password);
        while (entryOperations.hasNext())
        {
            EntryOperation entryOperation = (EntryOperation) entryOperations.next();
            databasePersistence.databasePersist(entryOperation);
        }
    }

    public void start(String cacheName, String publisherName) throws PublisherNotReadyException
    {
        System.err.printf("Started: Custom JDBC Publisher for Cache %s with Publisher %s\n",
                new Object[] { cacheName, publisherName });
    }

    public void stop(String cacheName, String publisherName)
    {
        System.err.printf("Stopped: Custom JDBC Publisher for Cache %s with Publisher %s\n",
                new Object[] { cacheName, publisherName });
    }
}

In the publishBatch method above, we instruct the publisher to persist data to a database:

DatabasePersistence databasePersistence = new DatabasePersistence(jdbcString, username, password);
while (entryOperations.hasNext())
{
    EntryOperation entryOperation = (EntryOperation) entryOperations.next();
    databasePersistence.databasePersist(entryOperation);
}

Step 3: The class that deals with the persistence is a very basic one that uses JDBC to perform inserts/updates against a database.
package com.oracle.coherence; import com.oracle.coherence.patterns.pushreplication.EntryOperation; import java.sql.*; import java.text.SimpleDateFormat; import com.oracle.coherence.Order; public class DatabasePersistence { public static String INSERT_OPERATION = "INSERT"; public static String UPDATE_OPERATION = "UPDATE"; public Connection dbConnection; public DatabasePersistence(String jdbcString, String username, String password) { this.dbConnection = createConnection(jdbcString, username, password); } public Connection createConnection(String jdbcString, String username, String password) { Connection connection = null; System.err.println("Connecting to: " + jdbcString + " Username: " + username + " Password: " + password); try { // Load the JDBC driver String driverName = "oracle.jdbc.driver.OracleDriver"; Class.forName(driverName); // Create a connection to the database connection = DriverManager.getConnection(jdbcString, username, password); System.err.println("Connected to:" + jdbcString + " Username: " + username + " Password: " + password); } catch (ClassNotFoundException e) { e.printStackTrace(); } // driver catch (SQLException e) { e.printStackTrace(); } return connection; } public void databasePersist(EntryOperation entryOperation) { if (entryOperation.getOperation().toString() .equalsIgnoreCase(INSERT_OPERATION)) { insert(((Order) entryOperation.getPublishableEntry().getValue())); } else if (entryOperation.getOperation().toString() .equalsIgnoreCase(UPDATE_OPERATION)) { update(((Order) entryOperation.getPublishableEntry().getValue())); } } public void update(Order order) { String update = "UPDATE Orders set QUANTITY= '" + order.getQuantity() + "', AMOUNT='" + order.getAmount() + "', ORD_DATE= '" + (new SimpleDateFormat("dd-MMM-yyyy")).format(order .getOrdDate()) + "' WHERE SYMBOL='" + order.getSymbol() + "'"; System.err.println("UPDATE = " + update); try { Statement stmt = getDbConnection().createStatement(); stmt.execute(update); stmt.close(); } catch (SQLException ex) { System.err.println("SQLException: " + ex.getMessage()); } } public void insert(Order order) { String insert = "insert into Orders values('" + order.getSymbol() + "'," + order.getQuantity() + "," + order.getAmount() + ",'" + (new SimpleDateFormat("dd-MMM-yyyy")).format(order .getOrdDate()) + "')"; System.err.println("INSERT = " + insert); try { Statement stmt = getDbConnection().createStatement(); stmt.execute(insert); stmt.close(); } catch (SQLException ex) { System.err.println("SQLException: " + ex.getMessage()); } } public Connection getDbConnection() { return dbConnection; } public void setDbConnection(Connection dbConnection) { this.dbConnection = dbConnection; } } Step 4: Now we need to register our publisher against a ContentHandler. In order to achieve that we need to create in our eclipse project a new class called CustomPushReplicationNamespaceContentHandler that should extend the com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler. In the constructor of the new class we define a new handler for our custom publisher. 
package com.oracle.coherence; import com.oracle.coherence.configuration.Configurator; import com.oracle.coherence.environment.extensible.ConfigurationContext; import com.oracle.coherence.environment.extensible.ConfigurationException; import com.oracle.coherence.environment.extensible.ElementContentHandler; import com.oracle.coherence.patterns.pushreplication.PublisherScheme; import com.oracle.coherence.environment.extensible.QualifiedName; import com.oracle.coherence.patterns.pushreplication.configuration.PushReplicationNamespaceContentHandler; import com.tangosol.run.xml.XmlElement; public class CustomPushReplicationNamespaceContentHandler extends PushReplicationNamespaceContentHandler { public CustomPushReplicationNamespaceContentHandler() { super(); registerContentHandler("custom-publisher-scheme", new ElementContentHandler() { public Object onElement(ConfigurationContext context, QualifiedName qualifiedName, XmlElement xmlElement) throws ConfigurationException { PublisherScheme publisherScheme = new CustomPublisherScheme(); Configurator.configure(publisherScheme, context, qualifiedName, xmlElement); return publisherScheme; } }); } } Step 5: Now we should define our CustomPublisher in the cache configuration file according to the following documentation. <cache-config xmlns:sync="class:com.oracle.coherence.CustomPushReplicationNamespaceContentHandler" xmlns:cr="class:com.oracle.coherence.environment.extensible.namespaces.InstanceNamespaceContentHandler"> <caching-schemes> <sync:provider pof-enabled="false"> <sync:coherence-provider /> </sync:provider> <caching-scheme-mapping> <cache-mapping> <cache-name>publishing-cache</cache-name> <scheme-name>distributed-scheme-with-publishing-cachestore</scheme-name> <autostart>true</autostart> <sync:publisher> <sync:publisher-name>Active2 Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:remote-cluster-publisher-scheme> <sync:remote-invocation-service-name>remote-site1</sync:remote-invocation-service-name> <sync:remote-publisher-scheme> <sync:local-cache-publisher-scheme> <sync:target-cache-name>publishing-cache</sync:target-cache-name> </sync:local-cache-publisher-scheme> </sync:remote-publisher-scheme> <sync:autostart>true</sync:autostart> </sync:remote-cluster-publisher-scheme> </sync:publisher-scheme> </sync:publisher> <sync:publisher> <sync:publisher-name>Active2-Output-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:stderr-publisher-scheme> <sync:autostart>true</sync:autostart> <sync:publish-original-value>true</sync:publish-original-value> </sync:stderr-publisher-scheme> </sync:publisher-scheme> </sync:publisher> <sync:publisher> <sync:publisher-name>Active2-JDBC-Publisher</sync:publisher-name> <sync:publisher-scheme> <sync:custom-publisher-scheme> <sync:jdbc-string>jdbc:oracle:thin:@machine_name:1521:XE</sync:jdbc-string> <sync:username>hr</sync:username> <sync:password>hr</sync:password> </sync:custom-publisher-scheme> </sync:publisher-scheme> </sync:publisher> </cache-mapping> </caching-scheme-mapping> <!-- The following scheme is required for each remote-site when using a RemoteInvocationPublisher --> <remote-invocation-scheme> <service-name>remote-site1</service-name> <initiator-config> <tcp-initiator> <remote-addresses> <socket-address> <address>localhost</address> <port>20001</port> </socket-address> </remote-addresses> <connect-timeout>2s</connect-timeout> </tcp-initiator> <outgoing-message-handler> <request-timeout>5s</request-timeout> </outgoing-message-handler> </initiator-config> 
</remote-invocation-scheme>
<!-- END: com.oracle.coherence.patterns.pushreplication -->
<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>20002</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>
</caching-schemes>
</cache-config>

As you can see in the red-marked text above, I have:
- set the new Namespace Content Handler
- defined the new custom publisher, which works together with the other publishers (stderr and remote publishers in our case).
Step 6: Add the com.oracle.coherence.CustomPublisherScheme to your custom POF configuration file:

<pof-config>
  <user-type-list>
    <!-- Built in types -->
    <include>coherence-pof-config.xml</include>
    <include>coherence-common-pof-config.xml</include>
    <include>coherence-messagingpattern-pof-config.xml</include>
    <include>coherence-pushreplicationpattern-pof-config.xml</include>
    <!-- Application types -->
    <user-type>
      <type-id>1901</type-id>
      <class-name>com.oracle.coherence.Order</class-name>
      <serializer>
        <class-name>com.oracle.coherence.OrderSerializer</class-name>
      </serializer>
    </user-type>
    <user-type>
      <type-id>1902</type-id>
      <class-name>com.oracle.coherence.CustomPublisherScheme</class-name>
    </user-type>
  </user-type-list>
</pof-config>

CONCLUSIONS
This approach allows publishers to publish data to almost any other receiver (database, JMS, MQ, ...). The only thing that needs to be changed is the DatabasePersistence.java class, which should be adapted to the chosen receiver. Only minor changes are needed in the rest of the code (in the publishBatch method of the CustomPublisher class).
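As a rough illustration of that point, a JMS-flavored replacement for DatabasePersistence might look like the sketch below. This is an assumption for illustration only: the JmsPersistence class name, the injected connection factory and queue, and the serializability of Order are all hypothetical and do not come from the original example.

package com.oracle.coherence;

import javax.jms.*;

// A minimal sketch of a JMS receiver, as an assumed alternative to
// DatabasePersistence; the CustomPublisher.publishBatch() loop would call
// jmsPersist(entryOperation) instead of databasePersist(entryOperation).
public class JmsPersistence
{
    private final ConnectionFactory factory; // assumed to be configured/looked up elsewhere
    private final Queue queue;               // hypothetical target queue

    public JmsPersistence(ConnectionFactory factory, Queue queue)
    {
        this.factory = factory;
        this.queue = queue;
    }

    public void jmsPersist(com.oracle.coherence.patterns.pushreplication.EntryOperation entryOperation)
    {
        try
        {
            Connection connection = factory.createConnection();
            try
            {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                // Assumes the cache value (Order) implements java.io.Serializable
                ObjectMessage message = session.createObjectMessage(
                        (java.io.Serializable) entryOperation.getPublishableEntry().getValue());
                // Carry the INSERT/UPDATE operation type as a message property
                message.setStringProperty("operation", entryOperation.getOperation().toString());
                producer.send(message);
            }
            finally
            {
                connection.close();
            }
        }
        catch (JMSException ex)
        {
            System.err.println("JMSException: " + ex.getMessage());
        }
    }
}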

    Read the article

  • Silverlight Tree View with Multiple Levels

    - by psheriff
There are many examples of the Silverlight Tree View on the web; however, most of them only show you how to go two levels deep. What if you have more than two levels? This is where understanding exactly how Hierarchical Data Templates work is vital. In this blog post, I am going to break down how these templates work so you can really understand what is going on under the hood. To start, let's look at the typical two-level Silverlight Tree View that has been hard-coded with the values shown below:

<sdk:TreeView>
  <sdk:TreeViewItem Header="Managers">
    <TextBlock Text="Michael" />
    <TextBlock Text="Paul" />
  </sdk:TreeViewItem>
  <sdk:TreeViewItem Header="Supervisors">
    <TextBlock Text="John" />
    <TextBlock Text="Tim" />
    <TextBlock Text="David" />
  </sdk:TreeViewItem>
</sdk:TreeView>

Figure 1 shows you how this tree view looks when you run the Silverlight application.
Figure 1: A hard-coded, two-level Tree View.
Next, let's create three classes to mimic the hard-coded Tree View shown above. First, you need an Employee class. The Employee class simply has one property called Name. The constructor accepts a "name" argument that you can use to set the Name property when you create an Employee object.

public class Employee
{
    public Employee(string name)
    {
        Name = name;
    }

    public string Name { get; set; }
}

Next, you create an EmployeeType class. This class has one property called EmpType and contains a generic List<> collection of Employee objects. The property that holds the collection is called Employees.

public class EmployeeType
{
    public EmployeeType(string empType)
    {
        EmpType = empType;
        Employees = new List<Employee>();
    }

    public string EmpType { get; set; }
    public List<Employee> Employees { get; set; }
}

Finally, we have a collection class called EmployeeTypes, created using the generic List<> class. It is in the constructor of this class that you build the collection of EmployeeTypes and fill it with Employee objects:

public class EmployeeTypes : List<EmployeeType>
{
    public EmployeeTypes()
    {
        EmployeeType type;

        type = new EmployeeType("Manager");
        type.Employees.Add(new Employee("Michael"));
        type.Employees.Add(new Employee("Paul"));
        this.Add(type);

        type = new EmployeeType("Project Managers");
        type.Employees.Add(new Employee("Tim"));
        type.Employees.Add(new Employee("John"));
        type.Employees.Add(new Employee("David"));
        this.Add(type);
    }
}

You now have a data hierarchy in memory (Figure 2), which is what the Tree View control expects to receive as its data source.
Figure 2: A hierarchical data structure of Employee Types containing a collection of Employee objects.
To connect this hierarchy of data to your Tree View, you create an instance of the EmployeeTypes class in XAML as shown in line 13 of Figure 3. The key assigned to this object is "empTypes". This key is used as the source of data for the entire Tree View by setting the ItemsSource property as shown in Figure 3, Callout #1.
Figure 3: You need to start from the bottom up when laying out your templates for a Tree View.
The ItemsSource property of the Tree View control is used as the data source in the Hierarchical Data Template with the key of employeeTypeTemplate. In this case there is only one Hierarchical Data Template, so any data you wish to display within that template comes from the collection of Employee Types. The TextBlock control in line 20 uses the EmpType property of the EmployeeType class.
You specify the name of the Hierarchical Data Template to use in the ItemTemplate property of the Tree View (Callout #2). For the second (and last) level of the Tree View control you use a normal <DataTemplate> with the name of employeeTemplate (line 14). The Hierarchical Data Template in lines 17-21 sets its ItemTemplate property to the key name of employeeTemplate (line 19 connects to line 14). The source of the data for the <DataTemplate> needs to be a property of the EmployeeType objects used in the Hierarchical Data Template; in this case, that is the Employees property. Each item in the Employees collection is an Employee object whose "Name" property is used to display the employee name in the second level of the Tree View (line 15). What is important here is that the lowest level in your Tree View is expressed in a <DataTemplate> and should be listed first in your Resources section. The next level up in your Tree View should be a <HierarchicalDataTemplate> which has its ItemTemplate property set to the key name of the <DataTemplate> and its ItemsSource property set to the data you wish to display in the <DataTemplate>. The Tree View control should have its ItemsSource property set to the data you wish to display in the <HierarchicalDataTemplate> and its ItemTemplate property set to the key name of the <HierarchicalDataTemplate> object. It is in this way that you get the Tree View to display all levels of your hierarchical data structure.
Three Levels in a Tree View
Now let's expand upon this concept and use three levels in our Tree View (Figure 4). This Tree View shows that you now have EmployeeTypes at the top of the tree, followed by a small set of employees that themselves manage employees. This means that the EmployeeType class has a collection of Employee objects, and each Employee class has a collection of Employee objects as well.
Figure 4: When using 3 levels in your TreeView you will have 2 Hierarchical Data Templates and 1 Data Template.
The EmployeeType class has not changed at all from our previous example. However, the Employee class now has one additional property, as shown below:

public class Employee
{
    public Employee(string name)
    {
        Name = name;
        ManagedEmployees = new List<Employee>();
    }

    public string Name { get; set; }
    public List<Employee> ManagedEmployees { get; set; }
}

The next thing that changes in our code is the EmployeeTypes class. The constructor now needs additional code to create a list of managed employees. Below is the new code.

public class EmployeeTypes : List<EmployeeType>
{
    public EmployeeTypes()
    {
        EmployeeType type;
        Employee emp;
        Employee managed;

        type = new EmployeeType("Manager");
        emp = new Employee("Michael");
        managed = new Employee("John");
        emp.ManagedEmployees.Add(managed);
        managed = new Employee("Tim");
        emp.ManagedEmployees.Add(managed);
        type.Employees.Add(emp);

        emp = new Employee("Paul");
        managed = new Employee("Michael");
        emp.ManagedEmployees.Add(managed);
        managed = new Employee("Sara");
        emp.ManagedEmployees.Add(managed);
        type.Employees.Add(emp);
        this.Add(type);

        type = new EmployeeType("Project Managers");
        type.Employees.Add(new Employee("Tim"));
        type.Employees.Add(new Employee("John"));
        type.Employees.Add(new Employee("David"));
        this.Add(type);
    }
}

Now that you have all of the data built in your classes, you are ready to hook up this three-level structure to your Tree View. Figure 5 shows the complete XAML needed to hook up your three-level Tree View; a sketch of its structure follows below.
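Since that XAML only appears in the figure, here is a minimal sketch of the resource structure it describes. The key names follow the article, but the namespace prefixes (local, sdk) and the exact bindings in Figure 5 are assumptions and may differ from the figure:

<UserControl.Resources>
  <local:EmployeeTypes x:Key="empTypes" />
  <!-- Lowest level: the managed employees -->
  <DataTemplate x:Key="managedEmployeeTemplate">
    <TextBlock Text="{Binding Name}" />
  </DataTemplate>
  <!-- Middle level: employees, feeding their ManagedEmployees to the template above -->
  <sdk:HierarchicalDataTemplate x:Key="employeeTemplate"
      ItemsSource="{Binding ManagedEmployees}"
      ItemTemplate="{StaticResource managedEmployeeTemplate}">
    <TextBlock Text="{Binding Name}" />
  </sdk:HierarchicalDataTemplate>
  <!-- Top level: employee types, feeding their Employees to the template above -->
  <sdk:HierarchicalDataTemplate x:Key="employeeTypeTemplate"
      ItemsSource="{Binding Employees}"
      ItemTemplate="{StaticResource employeeTemplate}">
    <TextBlock Text="{Binding EmpType}" />
  </sdk:HierarchicalDataTemplate>
</UserControl.Resources>

<sdk:TreeView ItemsSource="{Binding Source={StaticResource empTypes}}"
              ItemTemplate="{StaticResource employeeTypeTemplate}" />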
You can see in the XAML that there are now two Hierarchical Data Templates and one Data Template. Again you list the Data Template first since that is the lowest level in your Tree View. The next Hierarchical Data Template listed is the next level up from the lowest level, and finally you have a Hierarchical Data Template for the first level in your tree. You need to work your way from the bottom up when creating your Tree View hierarchy. XAML is processed from the top down, so if you attempt to reference a XAML key name that is below where you are referencing it from, you will get a runtime error. Figure 5: For three levels in a Tree View you will need two Hierarchical Data Templates and one Data Template. Each Hierarchical Data Template uses the previous template as its ItemTemplate. The ItemsSource of each Hierarchical Data Template is used to feed the data to the previous template. This is probably the most confusing part about working with the Tree View control. You are expecting the content of the current Hierarchical Data Template to use the properties set in the ItemsSource property of that template. But you need to look to the template lower down in the XAML to see the source of the data as shown in Figure 6. Figure 6: The properties you use within the Content of a template come from the ItemsSource of the next template in the resources section. Summary Understanding how to put together your hierarchy in a Tree View is simple once you understand that you need to work from the bottom up. Start with the bottom node in your Tree View and determine what that will look like and where the data will come from. You then build the next Hierarchical Data Template to feed the data to the previous template you created. You keep doing this for each level in your Tree View until you get to the last level. The data for that last Hierarchical Data Template comes from the ItemsSource in the Tree View itself. NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “Silverlight TreeView with Multiple Levels” from the drop down list.

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.
CLSF - 17th October
XFS update (led by Jeff Liu)
XFS continues to make rapid progress with a lot of changes, especially focused on infrastructure/performance improvements as well as new feature development. This is reflected in some sample statistics for XFS, Ext4+JBD2, and Btrfs via:
# git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs
XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-)
Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-)
Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-)
What made up those changes in XFS?
Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime, reducing the CPU overhead.
User namespace support. Both XFS and USERNS can be enabled in the kernel configuration beginning from Linux 3.10. Thanks to Dwight Engen for his efforts on this.
Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock (i.e., CRC enabled).
CONFIG_XFS_WARN, a new lightweight runtime debug option which can be deployed in production environments.
Readahead during log object recovery; this change can speed up log replay significantly.
Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes with post-EOF space due to speculative preallocation, and to support improved quota management to free up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).
Bitter arguments ensued in this session, especially over the comparison between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because:
Stable. XFS has a good record of stability over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more. I owe this to the XFS upstream code reviewers; they always perform serious code review as well as testing.
Good performance for large and small files. The claim that XFS does not work very well for small files is an old story by now.
Best choice (maybe) for distributed PB-scale filesystems. E.g., Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
Best choice for large storage (>16TB). Ext4 does not support a single file larger than around 15.95TB.
Scalability. Any objection that XFS is best on this point? :)
XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4.
Misc. Ext4 is widely used and has proved fast/stable under various loads and scenarios; XFS just needs more customers; and Btrfs is still on the road to maturity.
Ceph Introduction (led by Li Wang)
This is a hot topic. Li gave us a nice introduction to the design as well as their current work. Actually, the Ceph client has been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems that it has not yet been widely deployed in production environments. Their major work focuses on inline data support. Separating the metadata and data storage increases file access time: a file access needs two communications, fetching the metadata from the MDS and then getting the data from the OSD, and small-file access is further limited by network latency. The solution is, for small files, to store the data with the metadata, so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication with the OSD. For this feature, they have only run some small-scale testing but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequential read performance for 1K-size files improved by about 50%. I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed in production environments for large-scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even rolled out across DreamHost (subject to confirmation). According to Li, they have only deployed Ceph for small-scale storage (32 nodes), although they'd like to try 6000 nodes in the future.
Improve Linux swap for Flash storage (led by Shaohua Li)
Because of high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to using SSD efficiently for swap.
SWAPOUT:
swap_map scan. swap_map is the in-memory data structure used to track swap disk usage, but it is searched with a slow linear scan, which becomes a bottleneck when finding many adjacent pages on SSDs. Shaohua Li changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSDs.
IO pattern. In most cases, the swap I/O comes in an interleaved pattern, because of multiple reclaimers or because a free cluster is shared by all reclaimers. Even though the block layer can merge interleaved I/O to some extent, we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps each reclaimer do sequential I/O, which the block layer can merge more easily.
TLB flush. If we're reclaiming one active page, we should first move the page from the active LRU list to the inactive LRU list, and then reclaim the page from the inactive LRU to swap it out. During the process, we need to clear the PTE twice: first the 'A' (ACCESS) bit, then the 'P' (PRESENT) bit. Processors need to send lots of IPIs, which makes the TLB flushes really expensive. Some work has been done to improve this, including reworking smp_call_function_many() or removing the first TLB flush on x86, but there are still some arguments here and only parts of the work have been pushed to mainline.
SWAPIN: A page fault does iodepth=1 synchronous I/O, but it is a bit wasteful to issue only a page-sized I/O. The obvious solution is swap readahead.
But the current in-kernel swap readahead is arbitrary (always 8 pages), and it performs well for neither random nor sequential access workloads. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the change happens in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there).
SWAP discard. As we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to an asynchronous discard, which allows discard and write to run at the same time. Meanwhile, the unit of discard is also optimized to the cluster.
Misc: lock contention. With many concurrent swapouts and swapins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. But there is still some lock contention on very high speed SSDs because of the swapcache address_space lock.
Zproject (led by Bob Liu)
Bob gave us a very nice introduction to the current memory compression status. Now there are 3 projects (zswap/zram/zcache) which all aim at smoothing swap I/O storms and improving performance, but they all have their own pros and cons.
ZSWAP: It is implemented on top of the frontswap API and uses a dynamic allocator named zbud to allocate free pages. Zbud means pairs of zpages are "buddied": it can store at most two compressed pages in one page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: it decompresses the page frame into two newly allocated pages and then writes them to the real swap device, but this can fail when allocating the two pages.
ZRAM: Acts as a compressed ramdisk and is used as a swap device. It uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since more pages are needed to uncompress and free just one page. ZRAM is preferred by embedded systems, which may not have any real swap device. Now both ZRAM and ZSWAP are in the drivers/staging tree, and in the mm community there are some discussions of merging ZRAM into ZSWAP or vice versa, but no agreement yet.
ZCACHE: Handles file page compression, but it was removed from staging recently.
From industry (led by Tang Jie, LSI)
An LSI engineer introduced several new products to us. The first is RAID5/6 cards that use full-stripe writes to improve performance. The second is the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite, and the typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression that is transparent to the upper layers. LSI testing shows that this virtual capacity enables a 1x TB drive to support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for databases. Another thing I think is worth mentioning is the NV-DRAM memory in NMR/Raptor, which is directly exposed to the host system. Applications can directly access the NV-DRAM via a memory address, using the standard mmap() system call. He said that it is very useful for database logging now; a tiny sketch of this access model follows below.
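To make the access model concrete, here is a minimal sketch of how an application might map such a device into its address space. The device node name is purely hypothetical (it depends on the vendor driver); only the open(2)/mmap(2) calls themselves are standard:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical device node for the NV-DRAM region. */
    int fd = open("/dev/nvdram0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 4096;
    char *log = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (log == MAP_FAILED) { perror("mmap"); return 1; }

    /* Stores through 'log' go straight to the nonvolatile memory;
     * no read(2)/write(2) system calls are needed on the hot path. */
    log[0] = 'x';

    munmap(log, len);
    close(fd);
    return 0;
}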
These kinds of NVM products have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will have an effect on the current OS layers, especially on file systems; e.g., journaling may need a redesign to fully utilize such nonvolatile memory.
OCFS2 (led by Canquan Shen)
Without a doubt, Huawei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches, and 39 patches have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. The major work they are doing is to support ATS (atomic test and set), which can work with DLM at the same time. It looks like this idea was inspired by the VMware VMFS locking; see http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html
CLK - 18th October 2013
Improving Linux Development with Better Tools (Andi Kleen)
This talk focused on how to find and solve bugs as Linux grows in complexity. Generally, we can do this with the following kinds of tools:
Static code checkers, e.g., sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can help check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow, there are false positives, and it may need a concentrated effort to get the false positives down. In particular, no static checker I found can follow indirect calls ("OO in C", common in the kernel):
struct foo_ops { int (*do_foo)(struct foo *obj); };
foo->do_foo(foo);
Dynamic runtime checkers, e.g., thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
Fuzzers/test suites. E.g., Trinity is a great tool; it finds many bugs, but needs a manual model for each syscall. Modern fuzzers are built around automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
Debuggers/tracers to understand code, e.g., ftrace, which can dump on events/oops/custom triggers, but in many cases there is still too much overhead to run them at all times during debugging.
Tools to read/understand source, e.g., grep/cscope work great for many cases, but do not understand indirect pointers (the OO-in-C model used in the kernel) and cannot give us all "do_foo" instances:
struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo };
foo->do_foo(foo);
It would be great to have a cscope-like tool that understands this based on types/initializers.
XFS: The High Performance Enterprise File System (Jeff Liu) [slides]
I gave a talk introducing the disk layout and unique features, as well as the recent changes. The slides include some charts comparing the performance of XFS, Btrfs and Ext4 for small files. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability and comparison with other file systems.
Linux Containers (Feng Gao)
The speaker introduced the purpose of the various kinds of namespaces, including mount/UTS/IPC/Network/PID/User, as well as the system API/ABI. For the userspace tools, he mainly focused on Libvirt LXC rather than LXC itself.
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver. It can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two more possible new namespaces for the future. The first is audit, though it is not yet clear whether it should be assigned to the user namespace or not. The other is syslog, but the question is: do we really need it?
In-memory Compression (Bob Liu)
Same as at CLSF, a nice introduction that I have already mentioned above.
Misc
There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.
-- Jeff Liu

    Read the article
