Search Results

Search found 96 results on 4 pages for 'criterion'.


  • How to apply iddata in a calculation?

    - by Sam
    I am trying to figure out how to combine the input and output data into an ARX model and then apply it to the BIC (Bayesian Information Criterion) formula. Below is the code that I am currently working on:

        for i = 1:30  %% Set Model Order
            data = iddata(output, input, 1);
            model = arx(data, [8 9 i]);
            yp = predict(model, data);
            ye = regress(data, yp{1,1}(1:4018,1));
            M(i) = var(yp);
            BIC(i) = (N + i*(log(N) - 1)) / (N - i) * log(M(i));
        end

    But it does not work. It keeps giving me an error like the one below:

        The syntax "Data{...}" is not supported. Use the "getexp" command to extract individual experiments from an IDDATA object.

    I do not understand what that means. Can someone explain it to me, and where did I go wrong in my piece of code? Thanks in advance. Sam.
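
    For reference, the textbook form of the Bayesian Information Criterion (a standard formula, not necessarily the variant the code above intends) for a model with $k$ estimated parameters, maximized likelihood $\hat{L}$, and $N$ observations is

        $$\mathrm{BIC} = k \ln N - 2 \ln \hat{L},$$

    which for a least-squares fit with residual variance $\hat{\sigma}^2$ reduces to

        $$\mathrm{BIC} = N \ln \hat{\sigma}^2 + k \ln N.$$

    The model order that minimizes BIC is then selected.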


  • Determining polygon intersection and containment

    - by Victor Liu
    I have a set of simple (no holes, no self-intersections) polygons, and I need to check that they don't intersect each other (one can be entirely contained in another; that is okay). I can check this by simply checking the per-vertex inside-ness of one polygon versus other polygons. I also need to determine the containment tree, which is the set of relationships that say which polygon contains any given polygon. Since no polygon can intersect any other, then any contained polygon has a unique container; the "next-bigger" one. In other words, if A contains B contains C, then A is the parent of B, and B is the parent of C, and we don't consider A the parent of C. The question: How do I efficiently determine the containment relationships and check the non-intersection criterion? I ask this as one question because maybe a combined algorithm is more efficient than solving each problem separately. The algorithm should take as input a list of polygons, given by a list of their vertices. It should produce a boolean B indicating if none of the polygons intersect any other polygon, and also if B = true, a list of pairs (P, C) where polygon P is the parent of child C. This is not homework. This is for a hobby project I am working on.
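
    One workable approach (my sketch, not from the question): since the polygons are guaranteed not to intersect, a single representative vertex decides containment, and scanning candidates in order of increasing area finds the "next-bigger" parent first. The non-intersection check itself (pairwise segment tests) is assumed to have already passed here.

        def signed_area(poly):
            # Shoelace formula; poly is a list of (x, y) vertex tuples.
            a = 0.0
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
                a += x1 * y2 - x2 * y1
            return a / 2.0

        def contains_point(poly, pt):
            # Ray-casting point-in-polygon test.
            x, y = pt
            inside = False
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
                if (y1 > y) != (y2 > y):
                    if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                        inside = not inside
            return inside

        def containment_tree(polys):
            # Sort indices by absolute area, smallest first; a polygon's
            # parent is the smallest larger polygon containing one of its
            # vertices (valid because no two polygons intersect).
            order = sorted(range(len(polys)),
                           key=lambda i: abs(signed_area(polys[i])))
            pairs = []
            for pos, i in enumerate(order):
                for j in order[pos + 1:]:
                    if contains_point(polys[j], polys[i][0]):
                        pairs.append((j, i))  # (parent P, child C)
                        break
            return pairs

    This costs O(n^2) point-in-polygon tests in the worst case; a combined sweep-line pass can do the intersection check and the containment assignment together if performance matters.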


  • Can't iterate over a list class in Python

    - by Vicky
    I'm trying to write a simple GUI front end for Plurk using pyplurk. I have successfully got it to create the API connection, log in, and retrieve and display a list of friends. Now I'm trying to retrieve and display a list of Plurks. pyplurk provides a GetNewPlurks function as follows:

        def GetNewPlurks(self, since):
            '''Get new plurks since the specified time.

            Args:
              since: [datetime.datetime] the timestamp criterion.
            Returns:
              A PlurkPostList object or None.
            '''
            offset = jsonizer.conv_datetime(since)
            status_code, result = self._CallAPI('/Polling/getPlurks', offset=offset)
            return None if status_code != 200 else \
                   PlurkPostList(result['plurks'], result['plurk_users'].values())

    As you can see, this returns a PlurkPostList, which in turn is defined as follows:

        class PlurkPostList:
            '''A list of plurks and the set of users that posted them.'''
            def __init__(self, plurk_json_list, user_json_list=[]):
                self._plurks = [PlurkPost(p) for p in plurk_json_list]
                self._users = [PlurkUser(u) for u in user_json_list]
            def __iter__(self):
                return self._plurks
            def GetUsers(self):
                return self._users
            def __eq__(self, other):
                if other.__class__ != PlurkPostList:
                    return False
                if self._plurks != other._plurks:
                    return False
                if self._users != other._users:
                    return False
                return True

    Now I expected to be able to do something like this:

        api = plurk_api_urllib2.PlurkAPI(open('api.key').read().strip(), debug_level=1)
        plurkproxy = PlurkProxy(api, json.loads)
        user = plurkproxy.Login('my_user', 'my_pass')
        ps = plurkproxy.GetNewPlurks(datetime.datetime(2009, 12, 12, 0, 0, 0))
        print ps
        for p in ps:
            print str(p)

    When I run this, what I actually get from the "print ps" is:

        <plurk.PlurkPostList instance at 0x01E8D738>

    and then, from the loop:

        for p in ps:
        TypeError: __iter__ returned non-iterator of type 'list'

    I don't understand - surely a list is iterable? Where am I going wrong - how do I access the Plurks in the PlurkPostList?
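
    The likely culprit is that __iter__ returns self._plurks, a plain list. A list is iterable, but it is not itself an iterator, and __iter__ must return an iterator. A minimal sketch of the usual fix:

        class PlurkPostList:
            def __init__(self, plurks):
                self._plurks = list(plurks)

            def __iter__(self):
                # Hand back a real iterator over the list,
                # not the list object itself.
                return iter(self._plurks)

    With that change, "for p in ps" iterates over the contained PlurkPost objects as expected.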


  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding implementation of smart-search features. For example, consider something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field for which the query will be created, you present different options to the end user. For the moment, let's assume the Subject, Verb, Object approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs: contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs: ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store (and later retrieve/present) my criteria for smart searches/mailboxes for later use. As an example, using SVO I could easily store then reconstruct a query for "date between two dates": simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand: not necessarily in the query creation (as that is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently instead of making it an all-or-nothing type of smart search (i.e. instead of MATCH ALL or MATCH ANY, I need to simply let them select; I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
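
    One minimal way to model the per-entry connective idea (my sketch, with hypothetical field names): store one row per criterion, each carrying its own AND/OR, and fold the rows left to right. This deliberately has no precedence or grouping; grouping would require a nested tree of criteria instead of flat rows.

        from dataclasses import dataclass

        @dataclass
        class Criterion:
            connective: str  # "AND" or "OR"; ignored on the first row
            subject: str     # e.g. "date_received"
            verb: str        # e.g. "greater_than"
            obj: object      # the comparison value

        VERBS = {
            "contains": lambda s, o: o in s,
            "does_not_contain": lambda s, o: o not in s,
            "is_equal_to": lambda s, o: s == o,
            "greater_than": lambda s, o: s > o,
            "less_than": lambda s, o: s < o,
        }

        def matches(message, criteria):
            # Fold criteria left to right, honoring each row's connective.
            result = None
            for c in criteria:
                value = VERBS[c.verb](getattr(message, c.subject), c.obj)
                if result is None:
                    result = value
                elif c.connective == "AND":
                    result = result and value
                else:
                    result = result or value
            return bool(result)

    The same columns (position, connective, subject, verb, object) map directly onto a database table, one row per criterion per saved smart search.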


  • What is the minimal licensable source code?

    - by Hernán Eche
    Let's suppose I want to "protect" this code from being used without attribution, from being patented, or through any open source licence...

        #include <stdio.h>

        int main(void)
        {
            int version = 2;
            printf("\r\n.Hello world, ver:(%d).", version);
            return 0;
        }

    It's a little obvious, or just a language-definition example. When does a source stop being "trivial, banal, commonplace, obvious" and start to be something over which you may claim "rights"? Perhaps it depends on who reads it: something that could be a stroke of genius to someone who has never programmed could be just obvious to an expert. It's easy when, comparing two sources, there are 10,000 identical lines of code: that's theft. But it's not always so obvious. How do you measure the amount of "ownness"? Is it about creativity? Line count? Complexity? I can't imagine objective answers for this, only some patches. For example, perhaps complexity: it's not fair to replace "years of engineering" with "copy and paste". But is there any objective index for an objective determination of this subject? (In a funny way I imagine this criterion: if the licence is longer than the code, then there is no owner - just to punish those who don't care about storage space and world resources =P)


  • Big problem with Fluent NHibernate, C# and MySQL: need to search in a BLOB

    - by VinnyG
    I've made a big mistake, and now I have to find a solution. It was my first project working with Fluent NHibernate, and I mapped an object this way:

        public PosteCandidateMap()
        {
            Id(x => x.Id);
            Map(x => x.Candidate);
            Map(x => x.Status);
            Map(x => x.Poste);
            Map(x => x.MatchPossibility);
            Map(x => x.ModificationDate);
        }

    So the whole Poste object is in the database, but I would have needed only the PosteId. Now I have to find all Candidates for one Poste, so in my repository I have:

        return GetAll().Where(x => x.Poste.Id == id).ToList();

    But this is very slow since it loads all the items; we now have more than 1500 items in the table. At first the project was not supposed to be that big (not a big paycheck either). Now I'm trying to do this with a criterion or LINQ, but it's not working since my Poste is in a BLOB. Is there any way I can change this easily? Thanks a lot for the help!


  • How to negate a predicate function using operator ! in C++?

    - by Chan
    Hi, I want to erase all the elements that do not satisfy a criterion. For example: delete all the characters in a string that are not digits. My solution using boost::is_digit worked well.

        struct my_is_digit {
            bool operator()(char c) const {
                return c >= '0' && c <= '9';
            }
        };

        int main() {
            string s("1a2b3c4d");
            s.erase(remove_if(s.begin(), s.end(), !boost::is_digit()), s.end());
            s.erase(remove_if(s.begin(), s.end(), !my_is_digit()), s.end());
            cout << s << endl;
            return 0;
        }

    Then I tried my own version, and the compiler complained:

        error C2675: unary '!' : 'my_is_digit' does not define this operator or a conversion to a type acceptable to the predefined operator

    I could use the not1() adapter; however, I still think operator ! is more meaningful in my current context. How could I implement such a ! like boost::is_digit()? Any idea? Thanks, Chan Nguyen


  • Please help me solve this problem

    - by davit-datuashvili
    First of all, this is not homework, and nobody tagged it as homework. I did not understand this problem; can anybody explain it to me? It is not an English problem, it is just a misunderstanding of what the problem says.

    Consider the problem of neatly printing a paragraph on a printer. The input text is a sequence of n words of lengths l_1, l_2, . . . , l_n, measured in characters. We want to print this paragraph neatly on a number of lines that hold a maximum of M characters each. Our criterion of "neatness" is as follows. If a given line contains words i through j, where i <= j, and we leave exactly one space between words, the number of extra space characters at the end of the line is M - j + i - (l_i + l_{i+1} + ... + l_j), which must be nonnegative so that the words fit on the line. We wish to minimize the sum, over all lines except the last, of the cubes of the numbers of extra space characters at the ends of lines. Give a dynamic-programming algorithm to print a paragraph of n words neatly on a printer. Analyze the running time and space requirements of your algorithm.
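
    For what it's worth, here is a compact sketch of the standard dynamic-programming solution (my illustration, assuming no single word is longer than M): cost[j] is the minimal badness of printing the first j words, the last line is free, and every other line costs the cube of its leftover spaces.

        def neat_print(word_lengths, M):
            n = len(word_lengths)
            INF = float("inf")

            def extras(i, j):
                # Spaces left over if words i..j (0-based) share one line.
                return M - (j - i) - sum(word_lengths[i:j + 1])

            cost = [0.0] + [INF] * n   # cost[j]: best badness for words 0..j-1
            split = [0] * (n + 1)      # split[j]: start word of the last line
            for j in range(1, n + 1):
                for i in range(j, 0, -1):
                    e = extras(i - 1, j - 1)
                    if e < 0:
                        break  # words i-1..j-1 no longer fit on one line
                    line_cost = 0 if j == n else e ** 3  # last line is free
                    if cost[i - 1] + line_cost < cost[j]:
                        cost[j] = cost[i - 1] + line_cost
                        split[j] = i - 1
            # Recover the line breaks.
            lines, j = [], n
            while j > 0:
                lines.append((split[j], j - 1))
                j = split[j]
            return cost[n], list(reversed(lines))

    The early break keeps the inner loop to the number of words that can fit on one line, and precomputing prefix sums of the word lengths makes each extras check O(1), for O(n) space overall.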


  • Finding the most frequent subtrees in a collection of (parse) trees

    - by peter.murray.rust
    I have a collection of trees whose nodes are labelled (but not uniquely). Specifically, the trees are from a collection of parsed sentences (see http://en.wikipedia.org/wiki/Treebank). I wish to extract the most common subtrees from the collection; performance is not (yet) an issue. I'd be grateful for algorithms (ideally Java) or pointers to tools which do this for treebanks. Note that order of child nodes is important.

    EDIT @mjv. We are working in a limited domain (chemistry) which has a stylised language, so the variety of the trees is not huge - probably similar to children's readers. A simple tree for "the cat sat on the mat":

        <sentence>
          <nounPhrase>
            <article/>
            <noun/>
          </nounPhrase>
          <verbPhrase>
            <verb/>
            <prepositionPhrase>
              <preposition/>
              <nounPhrase>
                <article/>
                <noun/>
              </nounPhrase>
            </prepositionPhrase>
          </verbPhrase>
        </sentence>

    Here the sentence contains two identical part-of-speech subtrees (the actual tokens "cat", "mat" are not important in matching). So the algorithm would need to detect this. Note that not all nounPhrases are identical - "the big black cat" could be:

        <nounPhrase>
          <article/>
          <adjective/>
          <adjective/>
          <noun/>
        </nounPhrase>

    The length of sentences will be longer - between 15 and 30 nodes. I would expect to get useful results from 1000 trees. If this does not take more than a day or so, that's acceptable. Obviously the shorter the tree the more frequent it will be, so nounPhrase will be very common.

    EDIT If this is to be solved by flattening the tree, then I think it would be related to Longest Common Substring, not Longest Common Subsequence. But note that I don't necessarily just want the longest - I want a list of all those long enough to be "interesting" (criterion yet to be decided).
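
    One common technique (my sketch, not from the post): serialize the subtree rooted at every node into a canonical string that preserves child order and ignores token text, then count the strings in a hash map. Counting rooted, complete subtrees is one reading of the problem; arbitrary embedded fragments would need a different method.

        from collections import Counter

        def canonical(node):
            # node = (label, [children]); child order matters, tokens ignored.
            label, children = node
            return "(" + label + "".join(canonical(c) for c in children) + ")"

        def count_subtrees(trees):
            counts = Counter()
            def walk(node):
                counts[canonical(node)] += 1
                for child in node[1]:
                    walk(child)
            for t in trees:
                walk(t)
            return counts

        # "the cat sat on the mat", labels only: nounPhrase occurs twice.
        np = ("nounPhrase", [("article", []), ("noun", [])])
        sent = ("sentence", [np, ("verbPhrase", [("verb", []),
                ("prepositionPhrase", [("preposition", []), np])])])
        print(count_subtrees([sent]).most_common(3))

    Recomputing canonical at every node is O(n^2) per tree; memoizing child serializations makes it linear, which is ample for 1000 trees of 15-30 nodes.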


  • MODX parse error in function implode (is it me or MODX?)

    - by Ian
    Hi, I cannot for the life of me figure this out; maybe someone can help. Using MODX, a form takes user criteria to create a filter and return a list of documents. The form is one text field and a few checkboxes. If both text field and checkbox data is posted, the function works fine; if just the checkbox data is posted, the function works fine; but if just the text field data is posted, MODX gives me the following error:

        Error: implode() [function.implode]: Invalid arguments passed.

    I've tested this outside of MODX with flat files and it all works fine, leading me to assume a bug exists within MODX. But I'm not convinced. Here's my code:

        <?php
        $order = array('price ASC'); // default sort order

        if (!empty($_POST['tour_finder_duration'])) { // duration submitted
            $days = htmlentities($_POST['tour_finder_duration']); // clean up post
            array_unshift($order, "duration DESC"); // add duration sort before default
            $filter[] = 'duration,' . $days . ',4'; // add duration to filter[] (field,criterion,mode)
            $criteria[] = 'Number of days: <strong>' . $days . '</strong>'; // displayed on results page
        }

        if (!empty($_POST['tour_finder_dests'])) { // destination/s submitted
            $dests = $_POST['tour_finder_dests'];
            foreach ($dests as $value) { // iterate through dests array
                $filter[] = 'searchDests,' . htmlentities($value) . ',7'; // add dests to filter[]
                $params['docid'] = $value;
                $params['field'] = 'pagetitle';
                $pagetitle = $modx->runSnippet('GetField', $params);
                $dests_array[] = '<a href="[~' . $value . '~]" title="Read more about ' . $pagetitle . '" class="tourdestlink">' . $pagetitle . '</a>';
            }
            $dests_array = implode(', ', $dests_array);
            $criteria[] = 'Destinations: ' . $dests_array; // displayed on results page
        }

        if (is_array($filter)) {
            $filter = implode('|', $filter); // pipe-separated string
        }
        if (is_array($order)) {
            $order = implode(',', $order); // comma-separated string
        }
        if (is_array($criteria)) {
            $criteria = implode('<br />', $criteria);
        }

        echo '<br />Order: ' . $order . '<br />Filter: ' . $filter . '<br />Criteria: ' . $criteria;

        // next: extract docs using $filter and $order, display user's criteria using $criteria...
        ?>

    The echo statement is displayed above the MODX error message, and the $filter array is correctly imploded. Any help will save my computer from flying out the window. Thanks


  • Can I write a test that succeeds if and only if a statement does not compile?

    - by Billy ONeal
    I'd like to prevent clients of my class from doing something stupid. To that end, I have used the type system, and made my class only accept specific types as input. Consider the following example (not real code; I've left off things like virtual destructors for the sake of example):

        class MyDataChunk {
            //Look Ma! Implementation!
        };

        class Sink;
        class Source {
            virtual void Run() = 0;
            Sink *next_;
            void SetNext(Sink *next) { next_ = next; }
        };

        class Sink {
            virtual void GiveMeAChunk(const MyDataChunk& data) {
                //Impl
            }
        };

        class In {
            virtual void Run() {
                //Impl
            }
        };

        class Out {
        };

        //Note how Filter and Sorter have the same declaration. Concrete classes
        //will inherit from them. The separate names are there only to ensure
        //that some idiot doesn't go in and put in a filter where someone
        //expects a sorter, etc.
        class Filter : public Source, public Sink {
            //Drop objects from the chain-of-command pattern that don't match
            //a particular criterion.
        };

        class Sorter : public Source, public Sink {
            //Sorts inputs to outputs. There are different sorters because
            //someone might want to sort by filename, size, date, etc...
        };

        class MyClass {
            In i;
            Out o;
            Filter f;
            Sorter s;
        public:
            //Functions to set i, o, f, and s
            void Execute() {
                i.SetNext(f);
                f.SetNext(s);
                s.SetNext(o);
                i.Run();
            }
        };

    What I don't want is for somebody to come back later and go, "Hey, look! Sorter and Filter have the same signature. I can make a common one that does both!", thus breaking the semantic difference MyClass requires. Is this a common kind of requirement, and if so, how might I implement a test for it?


  • Select latest group by in nhibernate

    - by Kendrick
    I have Canine and CanineHandler objects in my application. The CanineHandler object has a PersonID (which references a completely different database), an EffectiveDate (which specifies when a handler started with the canine), and a FK reference to the Canine (CanineID). Given a specific PersonID, I want to find all canines they're currently responsible for. The (simplified) query I'd use in SQL would be:

        select Canine.*
        from Canine
        inner join CanineHandler on (CanineHandler.CanineID = Canine.CanineID)
        inner join (select CanineID, Max(EffectiveDate) MaxEffectiveDate
                    from caninehandler
                    group by CanineID) as CurrentHandler
            on (CurrentHandler.CanineID = CanineHandler.CanineID
                and CurrentHandler.MaxEffectiveDate = CanineHandler.EffectiveDate)
        where CanineHandler.HandlerPersonID = @PersonID

    Edit: Added mapping files below:

        <class name="CanineHandler" table="CanineHandler" schema="dbo">
          <id name="CanineHandlerID" type="Int32">
            <generator class="identity" />
          </id>
          <property name="EffectiveDate" type="DateTime" precision="16" not-null="true" />
          <property name="HandlerPersonID" type="Int64" precision="19" not-null="true" />
          <many-to-one name="Canine" class="Canine" column="CanineID" not-null="true" access="field.camelcase-underscore" />
        </class>

        <class name="Canine" table="Canine">
          <id name="CanineID" type="Int32">
            <generator class="identity" />
          </id>
          <property name="Name" type="String" length="64" not-null="true" />
          ...
          <set name="CanineHandlers" table="CanineHandler" inverse="true" order-by="EffectiveDate desc" cascade="save-update" access="field.camelcase-underscore">
            <key column="CanineID" />
            <one-to-many class="CanineHandler" />
          </set>
          <property name="IsDeleted" type="Boolean" not-null="true" />
        </class>

    I haven't tried yet, but I'm guessing I could do this in HQL. I haven't had to write anything in HQL yet, so I'll have to tackle that eventually anyway, but my question is whether/how I can do this sub-query with the criterion/subqueries objects. I got as far as creating the following detached criteria:

        DetachedCriteria effectiveHandlers = DetachedCriteria.For<Canine>()
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.Max("EffectiveDate"), "MaxEffectiveDate")
                .Add(Projections.GroupProperty("CanineID"), "handledCanineID"));

    but I can't figure out how to do the inner join. If I do this:

        Session.CreateCriteria<Canine>()
            .CreateCriteria("CanineHandler", "handler", NHibernate.SqlCommand.JoinType.InnerJoin)
            .List<Canine>();

    I get an error "could not resolve property: CanineHandler of: OPS.CanineApp.Model.Canine". Obviously I'm missing something(s), but from the documentation I got the impression that this should return a list of Canines that have handlers (possibly with duplicates). Until I can make this work, adding the subquery isn't going to work... I've found similar questions, such as http://stackoverflow.com/questions/747382/only-get-latest-results-using-nhibernate but none of the answers really seem to apply to the kind of direct result I'm looking for. Any help or suggestion is greatly appreciated.


  • Fluent NHibernate one-to-many mapping

    - by Sammy
    I am trying to figure out what I thought was just a simple one-to-many mapping using Fluent NHibernate, and I'm hoping someone can point me in the right direction to achieve this one-to-many relation. I have an Articles table and a Categories table; many Articles can belong to only one Category. My Categories table has 4 categories, and Articles has one article associated with Category 1. Here is my setup:

        using FluentNHibernate.Mapping;
        using System.Collections;
        using System.Collections.Generic;

        namespace FluentMapping
        {
            public class Article
            {
                public virtual int Id { get; private set; }
                public virtual string Title { get; set; }
                public virtual Category Category { get; set; }
            }

            public class Category
            {
                public virtual int Id { get; private set; }
                public virtual string Description { get; set; }
                public virtual IList<Article> Articles { get; set; }

                public Category()
                {
                    Articles = new List<Article>();
                }

                public virtual void AddArticle(Article article)
                {
                    article.Category = this;
                    Articles.Add(article);
                }

                public virtual void RemoveArticle(Article article)
                {
                    Articles.Remove(article);
                }
            }

            public class ArticleMap : ClassMap<Article>
            {
                public ArticleMap()
                {
                    Table("Articles");
                    Id(x => x.Id).GeneratedBy.Identity();
                    Map(x => x.Title);
                    References(x => x.Category).Column("CategoryId").LazyLoad();
                }

                public class CategoryMap : ClassMap<Category>
                {
                    public CategoryMap()
                    {
                        Table("Categories");
                        Id(x => x.Id).GeneratedBy.Identity();
                        Map(x => x.Description);
                        HasMany(x => x.Articles).KeyColumn("CategoryId").Fetch.Join();
                    }
                }
            }
        }

    If I run this test:

        [Fact]
        public void Can_Get_Categories()
        {
            using (var session = SessionManager.Instance.Current)
            {
                using (var transaction = session.BeginTransaction())
                {
                    var categories = session.CreateCriteria(typeof(Category))
                        //.CreateCriteria("Articles").Add(NHibernate.Criterion.Restrictions.EqProperty("Category", "Id"))
                        .AddOrder(Order.Asc("Description"))
                        .List<Category>();
                }
            }
        }

    I am getting 7 categories, due to the left outer join used by NHibernate. Any idea what I am doing wrong here? Thanks

    [Solution] After a couple of hours reading the NHibernate docs, here is what I came up with:

        var criteria = session.CreateCriteria(typeof(Category));
        criteria.AddOrder(Order.Asc("Description"));
        criteria.SetResultTransformer(new DistinctRootEntityResultTransformer());
        var cats1 = criteria.List<Category>();

    Using the NHibernate LINQ provider:

        var linq = session.Linq<Category>();
        linq.QueryOptions.RegisterCustomAction(c => c.SetResultTransformer(new DistinctRootEntityResultTransformer()));
        var cats2 = linq.ToList();


  • CSS/JavaScript/hacking: Detect :visited styling on a link *without* checking it directly OR do it faster

    - by Sai Emrys
    This is for research purposes on http://cssfingerprint.com

    Consider the following code:

        <style>
          div.csshistory a { display: none; color: #00ff00; }
          div.csshistory a:visited { display: inline; color: #ff0000; }
        </style>
        <div id="batch" class="csshistory">
          <a id="1" href="http://foo.com">anything you want here</a>
          <a id="2" href="http://bar.com">anything you want here</a>
          [etc * ~2000]
        </div>

    1. My goal is to detect whether foo.com has been rendered using the :visited styling. I want to detect whether foo.com is visited without directly looking at $('1').getComputedStyle (or in Internet Explorer, currentStyle), or any other direct method on that element. The purpose of this is to get around a potential browser restriction that would prevent direct inspection of the style of visited links. For instance, maybe you can put a sub-element in the <a> tag, or check the styling of the text directly; etc. Any method that does not directly or indirectly rely on $('1').anything is acceptable. Doing something clever with the child or parent is probably necessary. Note that for the purposes of this point only, the scenario is that the browser will lie to JavaScript about all properties of the <a> element (but not others), and that it will only render color: in :visited. Therefore, methods that rely on e.g. text size or background-image will not meet this requirement.

    2. I want to improve the speed of my current scraping methods. The majority of time (at least with the jQuery method in Firefox) is spent on document.body.appendChild(batch), so finding a way to improve that call would probably be most effective. See http://cssfingerprint.com/about and http://cssfingerprint.com/results for current speed test results.

    The methods I am currently using can be seen at http://github.com/saizai/cssfingerprint/blob/master/public/javascripts/history_scrape.js

    To summarize for tl;dr, they are:

      • set color or display on :visited per above, and check each one directly w/ getComputedStyle
      • put the ID of the link (plus a space) inside the <a> tag, and using jQuery's :visible selector, extract only the visible text (= the visited link IDs)

    FWIW, I'm a white hat, and I'm doing this in consultation with the EFF and some other fairly well known security researchers. If you contribute a new method or speedup, you'll get thanked at http://cssfingerprint.com/about (if you want to be :-P), and potentially in a future published paper.

    ETA: The bounty will be awarded only for suggestions that can, on Firefox, avoid the hypothetical restriction described in point 1 above, or perform at least 10% faster, on any browser for which I have sufficient current data, than my best performing methods listed in the graph at http://cssfingerprint.com/about. In case more than one suggestion fits either criterion, the one that does best wins.


  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language.

    The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function. The generator expression at line 202 is

        neighbors_in_selected_nodes = (neighbor for neighbor in
                node_neighbors if neighbor in selected_nodes)

    and the generator expression at line 204 is

        neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                for neighbor in neighbors_in_selected_nodes)

    The source code for this function is provided below for context. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node.

    Each of these lookups should be amortized constant time (i.e., O(1)); however, I am still paying a large penalty for the lookups. Is there some way I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language?

    Below is the full source code for the function; the vast majority of execution time is spent within this function.

        def calculate_node_z_prime(
                node,
                interaction_graph,
                selected_nodes
            ):
            """Calculates a z'-score for a given node.

            The z'-score is based on the z-scores (weights) of the neighbors
            of the given node, and proportional to the z-score (weight) of
            the given node. Specifically, we find the maximum z-score of all
            neighbors of the given node that are also members of the given
            set of selected nodes, multiply this z-score by the z-score of
            the given node, and return this value as the z'-score for the
            given node.

            If the given node has no neighbors in the interaction graph, the
            z'-score is defined as zero.

            Returns the z'-score as zero or a positive floating point value.

            :Parameters:
            - `node`: the node for which to compute the z-prime score
            - `interaction_graph`: graph containing the gene-gene or gene
              product-gene product interactions
            - `selected_nodes`: a `set` of nodes fitting some criterion of
              interest (e.g., annotated with a term of interest)

            """
            node_neighbors = interaction_graph.neighbors_iter(node)
            neighbors_in_selected_nodes = (neighbor for neighbor in
                    node_neighbors if neighbor in selected_nodes)
            neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                    for neighbor in neighbors_in_selected_nodes)
            try:
                max_z_score = max(neighbor_z_scores)
            # max() throws a ValueError if its argument has no elements; in
            # this case, we need to set the max_z_score to zero
            except ValueError, e:
                # Check to make certain max() raised this error
                if 'max()' in e.args[0]:
                    max_z_score = 0
                else:
                    raise e

            z_prime = interaction_graph.node[node]['weight'] * max_z_score
            return z_prime

    Here are the top few calls according to cProfile, sorted by time:

        ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
        156067701  352.313    0.000  642.072    0.000  bpln_contextual.py:204(<genexpr>)
        156067701  289.759    0.000  289.759    0.000  bpln_contextual.py:202(<genexpr>)
        13963893   174.047    0.000  816.119    0.000  {max}
        13963885    69.804    0.000  936.754    0.000  bpln_contextual.py:171(calculate_node_z_prime)
        7116883     61.982    0.000   61.982    0.000  {method 'update' of 'set' objects}
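
    One frequently suggested rewrite for this kind of hot spot (my sketch, not the poster's code): replace the chained generators with a set intersection and hoist the repeated attribute lookup into a local, so the per-element work happens in C rather than in Python bytecode.

        def calculate_node_z_prime_fast(node, interaction_graph, selected_nodes):
            # Hoist the attribute lookup out of the loop.
            node_attrs = interaction_graph.node
            # set.intersection accepts any iterable and runs in C,
            # replacing the "if neighbor in selected_nodes" generator.
            neighbors = selected_nodes.intersection(
                interaction_graph.neighbors_iter(node))
            if not neighbors:
                return 0  # no neighbors in selected_nodes: z'-score is zero
            max_z_score = max(node_attrs[n]['weight'] for n in neighbors)
            return node_attrs[node]['weight'] * max_z_score

    This also removes the ValueError handling, since the empty case is tested explicitly before calling max().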


  • Using a "white list" for extracting terms for Text Mining, Part 2

    - by [email protected]
    In my last post, we set the groundwork for extracting specific tokens from a white list using a CTXRULE index. In this post, we will populate a table with the extracted tokens and produce a case table suitable for clustering with Oracle Data Mining.

    Our corpus of documents will be stored in a database table that is defined as

        create table documents(id NUMBER, text VARCHAR2(4000));

    However, any suitable Oracle Text-accepted data type can be used for the text.

    We then create a table to contain the extracted tokens. The id column contains the unique identifier (or case id) of the document. The token column contains the extracted token. Note that a given document may have many tokens, so there will be one row per token for a given document.

        create table extracted_tokens (id NUMBER, token VARCHAR2(4000));

    The next step is to iterate over the documents, extract the matching tokens using the index, and insert them into our token table. We use the MATCHES function for matching the query_string from my_thesaurus_rules with the text.

        DECLARE
            cursor c2 is
                select id, text
                from documents;
        BEGIN
            for r_c2 in c2 loop
                insert into extracted_tokens
                    select r_c2.id id, main_term token
                    from my_thesaurus_rules
                    where matches(query_string, r_c2.text) > 0;
            end loop;
        END;

    Now that we have the tokens, we can compute the term frequency - inverse document frequency (TF-IDF) for each token of each document.

        create table extracted_tokens_tfidf as
            with num_docs as (select count(distinct id) doc_cnt
                              from extracted_tokens),
                 tf       as (select a.id, a.token,
                                     a.token_cnt/b.num_tokens token_freq
                              from
                                (select id, token, count(*) token_cnt
                                 from extracted_tokens
                                 group by id, token) a,
                                (select id, count(*) num_tokens
                                 from extracted_tokens
                                 group by id) b
                              where a.id = b.id),
                 doc_freq as (select token, count(*) overall_token_cnt
                              from extracted_tokens
                              group by token)
            select tf.id, tf.token,
                   token_freq * ln(doc_cnt/df.overall_token_cnt) tf_idf
            from num_docs,
                 tf,
                 doc_freq df
            where df.token = tf.token;

    From the WITH clause: the num_docs query simply counts the number of documents in the corpus. The tf query computes the term (token) frequency by computing the number of times each token appears in a document and dividing that by the number of tokens found in the document. The doc_freq query counts the number of documents in which the token appears. In the SELECT clause, we compute the tf_idf.

    Next, we create the nested table required to produce one record per case, where a case corresponds to an individual document. Here, we COLLECT all the tokens for a given document into the nested column extracted_tokens_tfidf_1.

        CREATE TABLE extracted_tokens_tfidf_nt
                     NESTED TABLE extracted_tokens_tfidf_1
                         STORE AS extracted_tokens_tfidf_tab AS
                     select id,
                            cast(collect(DM_NESTED_NUMERICAL(token,tf_idf)) as DM_NESTED_NUMERICALS) extracted_tokens_tfidf_1
                     from extracted_tokens_tfidf
                     group by id;

    To build the clustering model, we create a settings table and then insert the various settings. Most notable are the number of clusters (20), using cosine distance, which is better for text, turning off auto data preparation since the values are ready for mining, the number of iterations (20) to get a better model, and the split criterion of size for clusters that are roughly balanced in number of cases assigned.

        CREATE TABLE km_settings (setting_name VARCHAR2(30), setting_value VARCHAR2(30));

        BEGIN
            INSERT INTO km_settings (setting_name, setting_value)
                VALUES (dbms_data_mining.clus_num_clusters, 20);
            INSERT INTO km_settings (setting_name, setting_value)
                VALUES (dbms_data_mining.kmns_distance, dbms_data_mining.kmns_cosine);
            INSERT INTO km_settings (setting_name, setting_value)
                VALUES (dbms_data_mining.prep_auto, dbms_data_mining.prep_auto_off);
            INSERT INTO km_settings (setting_name, setting_value)
                VALUES (dbms_data_mining.kmns_iterations, 20);
            INSERT INTO km_settings (setting_name, setting_value)
                VALUES (dbms_data_mining.kmns_split_criterion, dbms_data_mining.kmns_size);
            COMMIT;
        END;

    With this in place, we can now build the clustering model.

        BEGIN
            DBMS_DATA_MINING.CREATE_MODEL(
                model_name          => 'TEXT_CLUSTERING_MODEL',
                mining_function     => dbms_data_mining.clustering,
                data_table_name     => 'extracted_tokens_tfidf_nt',
                case_id_column_name => 'id',
                settings_table_name => 'km_settings');
        END;

    To generate cluster names from this model, check out my earlier post on that topic.
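
    For reference, the quantity the TF-IDF query produces is the standard weight: for token $t$ in document $d$,

        $$\mathrm{tfidf}(t,d) = \frac{n_{t,d}}{\sum_{t'} n_{t',d}} \cdot \ln\frac{N}{\mathrm{df}(t)}$$

    where $n_{t,d}$ is the number of rows for token $t$ in document $d$, $N$ is doc_cnt, and $\mathrm{df}(t)$ is overall_token_cnt, the number of documents in which $t$ appears.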


  • Oracle WebCenter Partner Program

    - by kellsey.ruppel
    In competitive marketplaces, your company needs to quickly respond to changes and new trends in order to open opportunities and build long-term growth. Oracle has a variety of next-generation services, solutions and resources that will leverage the differentiators in your offerings. Name your partnering needs: Oracle has the answer.

    This week we'd like to focus on partners and the value your organization can gain from working with the Oracle PartnerNetwork. The Oracle PartnerNetwork will empower your company with exceptional resources to distinguish your offerings from the competition, seize opportunities, and increase your sales. We're happy to welcome Christine Kungl and Brian Buzzell, from Oracle's World Wide Alliances & Channels (WWA&C) WebCenter Partner Enablement team, as today's guests on the Oracle WebCenter blog.

    Q: What is the Oracle PartnerNetwork (OPN)?

    A: Christine: Oracle's PartnerNetwork (OPN) is a collaborative partnership which gives registered companies specific added-value resources to help differentiate themselves from their competition. Through OPN programs it provides companies the ability to seize and target opportunities, educate and train their teams, and leverage unparalleled opportunity given Oracle's large market footprint. OPN's multi-level programs are targeted at different levels, allowing companies to grow and evolve with Oracle based on their business needs. As part of their OPN memberships, partners are encouraged to become OPN Specialized, allowing those partners additional differentiation in Oracle's Partner Network community.

    Q: What is an OPN Specialization and what resources are available for Specialized Partners?

    A: Brian: Oracle wanted a better way for our partners to differentiate their special skills and expertise, as well as a more effective way to communicate that difference to customers. Oracle's expanding product portfolio demanded that we be able to identify partners with significant product knowledge, those who had made an investment in Oracle and a continuing commitment to deliver Oracle solutions. And with more than 30,000 Oracle partners around the world, Oracle needed a way for our customers to choose the right partner for their business. So how did Oracle meet this need? With the new partner program: Oracle PartnerNetwork (OPN) Specialized. In this new program, Oracle partners are:

      • Specialized: differentiating themselves from the competition with expertise that sets them apart
      • Recognized: being acknowledged for investing in becoming Oracle experts in specialized areas
      • Preferred: connecting with potential customers who are seeking value-added solutions for their business

    OPN Specialized provides all partners with educational opportunities, training, and tools specially designed to build competency and grow business. Partners can serve their customers better through key resources:

    OPN Specialized Knowledge Zones: located on the updated and enhanced OPN portal, these provide a single point of entry for all education and training information for Oracle partners.

    Enablement 2.0 Resources: Enablement 2.0 helps Oracle partners build their competencies and skills through a variety of educational opportunities and expanded training choices. These resources include:

      • Enablement 2.0 "boot camps", which provide three-tiered learning levels that help jump-start partner training. The role-based training covers Oracle's application and technology products and offers a combination of classroom lectures, hands-on lab exercises, and case studies.
      • Enablement 2.0 interactive guided learning paths (GLPs) with recommendations on how to achieve specialization
      • Upgraded partner solution kits
      • Enhanced, specialized business centers available 24/7 around the globe on the OPN portal

    OPN Competency Center, for tracking progress: the OPN Competency Center keeps track as a partner applies for and achieves specialization in selected areas. You start with an assessment that compares your organization's current skills and experience with the requirements for specialization in the area you have chosen. The OPN Competency Center then provides a roadmap that itemizes the skills and the knowledge you need to earn specialized status. In summary, OPN Specialization not only includes key training resources but also a way to track and show progression for your partner organization.

    Q: What are the OPN membership levels and what are the benefits?

    A: Christine: The base OPN membership levels are:

      • Remarketer: At the Remarketer level, retailers can choose to resell select Oracle products with the backing of authorized, regionally located, value-added distributors (VADs). The Remarketer level has no fees and no partner agreement with Oracle, but does offer online training and sales tools through the OPN portal. Program details: Remarketer
      • Silver: The Silver level is for Oracle partners who are focused on reselling and developing business with products ordered through the Oracle 1-Click Ordering Program. The Silver level provides a cost-effective yet scalable way for partners to start an OPN Specialized membership and offers a substantial set of benefits that lets partners increase their competitive positioning. Program details: Silver
      • Gold: Gold-level partners have the ability to specialize, helping them grow their business and create differentiation in the marketplace. Oracle partners at the Gold level can develop, sell, or implement the full stack of Oracle solutions and can apply to resell Oracle Applications. Program details: Gold
      • Platinum: The Platinum level is for Oracle partners who want the highest level of benefits and are committed to reaching a minimum of five specializations. Platinum partners are recognized for their expertise in a broad range of products and technology, and receive dedicated support from Oracle. Program details: Platinum

    In addition, we recently introduced a new level:

      • Diamond: This level is the most prestigious level of OPN Specialized. It allows companies to differentiate further because of the focused depth and breadth of their expertise. Program details: Diamond

    So as you can see, there are various cost-effective ways that partners can gain assistance and differentiation through OPN membership.

    Q: What role do Oracle's World Wide Alliances & Channels (WWA&C), the Partner Enablement teams and the WebCenter community play?

    A: Brian: Oracle's WWA&C teams are responsible for managing relationships, educating their teams, creating go-to-market solutions and fostering communities for Oracle partners worldwide. The WebCenter Partner Enablement Middleware Team is tasked to create, manage and distribute Specialization resources for the WebCenter partner community.

    Q: What WebCenter Specializations are currently available?

    A: Christine: As of now, here are the WebCenter Specializations and their availability:

      • Oracle WebCenter Portal Specialization (Oracle WebCenter Portal): available now. The Oracle WebCenter Specialization provides insight into the following products: WebCenter Services, WebCenter Spaces, and WebLogic Portal. Oracle WebCenter Specialized Partners can efficiently use Oracle WebCenter products to create social applications, enterprise portals, communities, composite applications, and Internet or intranet Web sites on a standards-based, service-oriented architecture (SOA). The suite combines the development of rich internet applications; a multi-channel portal framework; and a suite of horizontal WebCenter applications, which provide content, presence, and social networking capabilities to create a highly interactive user experience.
      • Oracle WebCenter Content Specialization: available now. The Oracle WebCenter Content Specialization provides insight into the following products: Universal Content Management, WebCenter Records Management, WebCenter Imaging, WebCenter Distributed Capture, and WebCenter Capture. Oracle WebCenter Content Specialized Partners can efficiently build content-rich business applications, reuse content, and integrate hundreds of content services with other business applications. This allows our customers to decrease costs, automate processes, reduce resource bottlenecks, share content effectively, minimize the number of lost documents, and better manage risk.
      • Oracle WebCenter Sites Specialization: available Q1 2012. Oracle WebCenter Sites is part of the broader Oracle WebCenter platform that provides organizations with a complete customer experience management solution. Partners that align with the new Oracle WebCenter Sites platform allow their customers' organizations to: leverage customer information from all channels and systems; manage interactions across all channels; unify commerce, merchandising, marketing, and service across all channels; provide personalized, choreographed consumer journeys across all channels; and integrate order orchestration, supply chain management and order fulfillment.

    Q: What criteria does a partner organization need to achieve Specialization? What about individual sales, pre-sales and implementation specialists/technical consultants?

    A: Brian: Each Oracle WebCenter Specialization has unique business criteria that must be met in order to achieve that Specialization. This includes a unique number of transactions (co-sell, re-sell, and referral), customer references, and a unique number of specialists as part of a partner team (sales, pre-sales, implementation, and support). Each WebCenter Specialization provides training resources (GLPs, boot camps, assessments and exams) for individuals on a partner's staff to fulfill those requirements. The criteria can be found for each Specialization on the Specialize tab of each WebCenter Knowledge Zone. Here are the sample criteria, recommended courses, and exams for the WebCenter Portal Specialization: WebCenter Portal Specialization Criteria

    Q: Do you have any suggestions on the best way for partners to get started if they would like to know more?

    A: Christine: The best way for partners to start is to look at their business and core Oracle team focus, and then look to become specialized in one or more areas. Once you have selected the Specializations that are right for your business, you need to follow the first three key steps described below. The fourth step outlines the additional process to follow if you meet the criteria to be Advanced Specialized. Note that Step 4 may not be done without first following Steps 1-3.

    1. Join the Knowledge Zone(s) where you want to achieve Specialized status:
      • Go to the Knowledge Zone
      • Click on the "Why Partner" tab
      • Click on the "Join Knowledge Zone" link

    2. Meet the Specialization criteria: define and implement plans in your organization to achieve the competency and business criteria targets of the Specialization. (Note: Worldwide OPN members at the Gold, Platinum, or Diamond level and their Associates at the Gold, Platinum, or Diamond level may count their collective resources to meet the business and competency criteria required for specialization in this area.)

    3. Apply for Specialization: when you have met the business and competency criteria required, inform Oracle by completing the following steps:
      • Click on the "Specialize" tab in the Knowledge Zone
      • Click on the "Apply Now" button
      • Complete the online application form

    Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Specialized status. Need more information? Access our Step by Step Guide (PDF).

    4. Apply for Advanced Specialization (optional): if your company has on staff 50 unique Certified Implementation Specialists in your company's approved Specialization's product set, let Oracle know by following these steps:
      • Ensure that you have 50 or more unique individuals that are Certified Implementation Specialists in the specific Specialization awarded to your company
      • If you are pooling resources from another Associate or Worldwide entity, ensure you know that company's name and country
      • Have your Oracle PRM Administrator complete the online Advanced Specialization Application

    Oracle will validate the information provided, and once approved, you will receive notification from Oracle of your awarded Advanced Specialized status.

    There are additional resources on OPN as well as the broader WebCenter community.


  • No persister for: <ClassName> issue with Fluent NHibernate

    - by Amit
    I have the following code:

        // AutoMapConfig.cs
        using System;
        using FluentNHibernate.Automapping;

        namespace SimpleFNH.AutoMap
        {
            public class AutoMapConfig : DefaultAutomappingConfiguration
            {
                public override bool ShouldMap(Type type)
                {
                    return type.Namespace == "Examples.FirstAutomappedProject.Entities";
                }
            }
        }

        // CascadeConvention.cs
        using FluentNHibernate.Conventions;
        using FluentNHibernate.Conventions.Instances;

        namespace SimpleFNH.AutoMap
        {
            public class CascadeConvention : IReferenceConvention, IHasManyConvention, IHasManyToManyConvention
            {
                public void Apply(IManyToOneInstance instance) { instance.Cascade.All(); }
                public void Apply(IOneToManyCollectionInstance instance) { instance.Cascade.All(); }
                public void Apply(IManyToManyCollectionInstance instance) { instance.Cascade.All(); }
            }
        }

        // Item.cs
        namespace SimpleFNH.Entities
        {
            public class Item
            {
                public virtual long ID { get; set; }
                public virtual string ItemName { get; set; }
                public virtual string Description { get; set; }
                public virtual OrderItem OrderItem { get; set; }
            }
        }

        // OrderItem.cs
        namespace SimpleFNH.Entities
        {
            public class OrderItem
            {
                public virtual long ID { get; set; }
                public virtual int Quantity { get; set; }
                public virtual Item Item { get; set; }
                public virtual ProductOrder ProductOrder { get; set; }

                public virtual void AddItem(Item item)
                {
                    item.OrderItem = this;
                }
            }
        }

        // ProductOrder.cs
        using System;
        using System.Collections.Generic;

        namespace SimpleFNH.Entities
        {
            public class ProductOrder
            {
                public virtual long ID { get; set; }
                public virtual DateTime OrderDate { get; set; }
                public virtual string CustomerName { get; set; }
                public virtual IList<OrderItem> OrderItems { get; set; }

                public ProductOrder()
                {
                    OrderItems = new List<OrderItem>();
                }

                public virtual void AddOrderItems(params OrderItem[] items)
                {
                    foreach (var item in items)
                    {
                        OrderItems.Add(item);
                        item.ProductOrder = this;
                    }
                }
            }
        }

        // NHibernateRepo.cs
        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using NHibernate;
        using NHibernate.Criterion;
        using NHibernate.Tool.hbm2ddl;

        namespace SimpleFNH.Repository
        {
            public class NHibernateRepo
            {
                private static ISessionFactory _sessionFactory;

                private static ISessionFactory SessionFactory
                {
                    get
                    {
                        if (_sessionFactory == null)
                            InitializeSessionFactory();
                        return _sessionFactory;
                    }
                }

                private static void InitializeSessionFactory()
                {
                    _sessionFactory = Fluently.Configure()
                        .Database(MsSqlConfiguration.MsSql2008
                            .ConnectionString(@"server=Amit-PC\SQLEXPRESS;database=SimpleFNH;Trusted_Connection=True;")
                            .ShowSql())
                        .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Order>())
                        .ExposeConfiguration(cfg => new SchemaExport(cfg).Create(true, true))
                        .BuildSessionFactory();
                }

                public static ISession OpenSession()
                {
                    return SessionFactory.OpenSession();
                }
            }
        }

        // Program.cs
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using SimpleFNH.Entities;
        using SimpleFNH.Repository;

        namespace SimpleFNH
        {
            class Program
            {
                static void Main(string[] args)
                {
                    using (var session = NHibernateRepo.OpenSession())
                    {
                        using (var transaction = session.BeginTransaction())
                        {
                            var item1 = new Item { ItemName = "item 1", Description = "test 1" };
                            var item2 = new Item { ItemName = "item 2", Description = "test 2" };
                            var item3 = new Item { ItemName = "item 3", Description = "test 3" };

                            var orderItem1 = new OrderItem { Item = item1, Quantity = 2 };
                            var orderItem2 = new OrderItem { Item = item2, Quantity = 4 };
                            var orderItem3 = new OrderItem { Item = item3, Quantity = 5 };

                            var productOrder = new ProductOrder
                            {
                                CustomerName = "Amit",
                                OrderDate = DateTime.Now,
                                OrderItems = new List<OrderItem> { orderItem1, orderItem2, orderItem3 }
                            };

                            productOrder.AddOrderItems(orderItem1, orderItem2, orderItem3);
                            session.Save(productOrder);
                            transaction.Commit();
                        }
                    }

                    using (var session = NHibernateRepo.OpenSession())
                    {
                        // retrieve all orders and display them
                        using (session.BeginTransaction())
                        {
                            var orders = session.CreateCriteria(typeof(ProductOrder))
                                .List<ProductOrder>();
                            foreach (var item in orders)
                            {
                                Console.WriteLine(item.OrderItems.First().Quantity);
                            }
                        }
                    }
                }
            }
        }

    I have tried many variations to get it working, but I get an error saying "No persister for: SimpleFNH.Entities.ProductOrder". Can someone help me get it working? I wanted to create a simple program that would set a pattern for my bigger project, but it is taking a lot more time than expected. It would be really helpful if you could explain, in simple terms, a template/pattern that I can use to get Fluent NHibernate working. The above code uses auto mapping, which I tried after trying fluent mapping.


  • Using VLOOKUP in Excel

    - by Mark Virtue
    VLOOKUP is one of Excel’s most useful functions, and it’s also one of the least understood.  In this article, we demystify VLOOKUP by way of a real-life example.  We’ll create a usable Invoice Template for a fictitious company. So what is VLOOKUP?  Well, of course it’s an Excel function.  This article will assume that the reader already has a passing understanding of Excel functions, and can use basic functions such as SUM, AVERAGE, and TODAY.  In its most common usage, VLOOKUP is a database function, meaning that it works with database tables – or more simply, lists of things in an Excel worksheet.  What sort of things?   Well, any sort of thing.  You may have a worksheet that contains a list of employees, or products, or customers, or CDs in your CD collection, or stars in the night sky.  It doesn’t really matter. Here’s an example of a list, or database.  In this case it’s a list of products that our fictitious company sells: Usually lists like this have some sort of unique identifier for each item in the list.  In this case, the unique identifier is in the “Item Code” column.  Note:  For the VLOOKUP function to work with a database/list, that list must have a column containing the unique identifier (or “key”, or “ID”), and that column must be the first column in the table.  Our sample database above satisfies this criterion. The hardest part of using VLOOKUP is understanding exactly what it’s for.  So let’s see if we can get that clear first: VLOOKUP retrieves information from a database/list based on a supplied instance of the unique identifier. Put another way, if you put the VLOOKUP function into a cell and pass it one of the unique identifiers from your database, it will return you one of the pieces of information associated with that unique identifier.  In the example above, you would pass VLOOKUP an item code, and it would return to you either the corresponding item’s description, its price, or its availability (its “In stock” quantity).  Which of these pieces of information will it pass you back?  Well, you get to decide this when you’re creating the formula. If all you need is one piece of information from the database, it would be a lot of trouble to go to to construct a formula with a VLOOKUP function in it.  Typically you would use this sort of functionality in a reusable spreadsheet, such as a template.  Each time someone enters a valid item code, the system would retrieve all the necessary information about the corresponding item. Let’s create an example of this:  An Invoice Template that we can reuse over and over in our fictitious company. First we start Excel… …and we create ourselves a blank invoice: This is how it’s going to work:  The person using the invoice template will fill in a series of item codes in column “A”, and the system will retrieve each item’s description and price, which will be used to calculate the line total for each item (assuming we enter a valid quantity). For the purposes of keeping this example simple, we will locate the product database on a separate sheet in the same workbook: In reality, it’s more likely that the product database would be located in a separate workbook.  It makes little difference to the VLOOKUP function, which doesn’t really care if the database is located on the same sheet, a different sheet, or a completely different workbook. 
In order to test the VLOOKUP formula we’re about to write, we first enter a valid item code into cell A11: Next, we move the active cell to the cell in which we want information retrieved from the database by VLOOKUP to be stored.  Interestingly, this is the step that most people get wrong.  To explain further:  We are about to create a VLOOKUP formula that will retrieve the description that corresponds to the item code in cell A11.  Where do we want this description put when we get it?  In cell B11, of course.  So that’s where we write the VLOOKUP formula – in cell B11. Select cell B11: We need to locate the list of all available functions that Excel has to offer, so that we can choose VLOOKUP and get some assistance in completing the formula.  This is found by first clicking the Formulas tab, and then clicking Insert Function:   A box appears that allows us to select any of the functions available in Excel.  To find the one we’re looking for, we could type a search term like “lookup” (because the function we’re interested in is a lookup function).  The system would return us a list of all lookup-related functions in Excel.  VLOOKUP is the second one in the list.  Select it an click OK… The Function Arguments box appears, prompting us for all the arguments (or parameters) needed in order to complete the VLOOKUP function.  You can think of this box as the function is asking us the following questions: What unique identifier are you looking up in the database? Where is the database? Which piece of information from the database, associated with the unique identifier, do you wish to have retrieved for you? The first three arguments are shown in bold, indicating that they are mandatory arguments (the VLOOKUP function is incomplete without them and will not return a valid value).  The fourth argument is not bold, meaning that it’s optional:   We will complete the arguments in order, top to bottom. The first argument we need to complete is the Lookup_value argument.  The function needs us to tell it where to find the unique identifier (the item code in this case) that it should be retuning the description of.  We must select the item code we entered earlier (in A11). Click on the selector icon to the right of the first argument: Then click once on the cell containing the item code (A11), and press Enter: The value of “A11” is inserted into the first argument. Now we need to enter a value for the Table_array argument.  In other words, we need to tell VLOOKUP where to find the database/list.  Click on the selector icon next to the second argument: Now locate the database/list and select the entire list – not including the header line.  The database is located on a separate worksheet, so we first click on that worksheet tab: Next we select the entire database, not including the header line: …and press Enter.  The range of cells that represents the database (in this case “’Product Database’!A2:D7”) is entered automatically for us into the second argument. Now we need to enter the third argument, Col_index_num.  We use this argument to specify to VLOOKUP which piece of information from the database, associate with our item code in A11, we wish to have returned to us.  In this particular example, we wish to have the item’s description returned to us.  If you look on the database worksheet, you’ll notice that the “Description” column is the second column in the database.  
This means that we must enter a value of "2" into the Col_index_num box:

It is important to note that we are not entering a "2" here because the "Description" column is in the B column on that worksheet. If the database happened to start in column K of the worksheet, we would still enter a "2" in this field, because the column number is counted from the first column of the database, not of the worksheet.

Finally, we need to decide whether to enter a value into the final VLOOKUP argument, Range_lookup. This argument requires either a true or false value, or it should be left blank. When using VLOOKUP with databases (as is true 90% of the time), the way to decide what to put in this argument is as follows:

If the first column of the database (the column that contains the unique identifiers) is sorted alphabetically/numerically in ascending order, then it's possible to enter a value of true into this argument, or leave it blank.

If the first column of the database is not sorted, or it's sorted in descending order, then you must enter a value of false into this argument.

(Strictly speaking, true asks VLOOKUP for an approximate match, so for exact-match lookups like ours, false is the safe choice even when the first column is sorted.)

As the first column of our database is not sorted, we enter false into this argument:

That's it! We've entered all the information required for VLOOKUP to return the value we need. Click the OK button and notice that the description corresponding to item code "R99245" has been correctly entered into cell B11:

The formula that was created for us looks like this:

If we enter a different item code into cell A11, we will begin to see the power of the VLOOKUP function: the description cell changes to match the new item code.

We can perform a similar set of steps to get the item's price returned into cell E11. Note that the new formula must be created in cell E11. The result will look like this:

…and the formula will look like this:

Note that the only difference between the two formulae is that the third argument (Col_index_num) has changed from a "2" to a "3" (because we want data retrieved from the 3rd column in the database).

If we decided to buy 2 of these items, we would enter a "2" into cell D11. We would then enter a simple formula into cell F11 to get the line total:

=D11*E11

…which looks like this…

Completing the Invoice Template

We've learned a lot about VLOOKUP so far. In fact, we've learned all we're going to learn in this article. It's important to note that VLOOKUP can be used in other circumstances besides databases. This is less common, and may be covered in future How-To Geek articles.

Our invoice template is not yet complete. In order to complete it, we would do the following:

We would remove the sample item code from cell A11 and the "2" from cell D11. This will cause our newly created VLOOKUP formulae to display error messages. We can remedy this by judicious use of Excel's IF() and ISBLANK() functions. We change our formula from this…

=VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE)

…to this…

=IF(ISBLANK(A11),"",VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE))

We would copy the formulas in cells B11, E11 and F11 down to the remainder of the item rows of the invoice. Note that if we do this, the resulting formulas will no longer correctly refer to the database table. We could fix this by changing the cell references for the database to absolute cell references. Alternatively – and even better – we could create a range name for the entire product database (such as "Products"), and use this range name instead of the cell references.
The formula would change from this…

=IF(ISBLANK(A11),"",VLOOKUP(A11,'Product Database'!A2:D7,2,FALSE))

…to this…

=IF(ISBLANK(A11),"",VLOOKUP(A11,Products,2,FALSE))

…and then we would copy the formulas down to the rest of the invoice item rows.

We would probably "lock" the cells that contain our formulae (or rather unlock the other cells), and then protect the worksheet, in order to ensure that our carefully constructed formulae are not accidentally overwritten when someone comes to fill in the invoice.

We would save the file as a template, so that it could be reused by everyone in our company.

If we were feeling really clever, we would create a database of all our customers in another worksheet, and then use the customer ID entered in cell F5 to automatically fill in the customer's name and address in cells B6, B7 and B8.

If you would like to practice with VLOOKUP, or simply see our resulting Invoice Template, it can be downloaded from here.

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving, so I'm republishing it here.

How I Got 15x Improvement Without Really Trying
John Feo, Sun Microsystems

Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

Introduction

Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.

Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor to student or student to student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties.
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

#  cache performance  redundant operations  loop structures  performance improvement
1  x  x  15.5
2  x  2.8
3  x  x  2.5
4  x  2.1
5  x  x  2.0
6  x  5.0
7  x  5.8
8  x  6.3
9  2.2
10  x  x  3.3

Table 1 — Area of improvement and performance gains of 10 codes

The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

Optimizing cache performance

Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do.

When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
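As a concrete illustration (a sketch of my own, not an example from the paper), consider summing the elements of a two-dimensional C array. C stores arrays row by row, so the first version below walks memory contiguously, while the second jumps a full row between consecutive accesses; for arrays too large for cache, the second version typically runs several times slower.

#define N 1024

static double a[N][N];

/* Cache-friendly: the inner loop varies the last index, so successive
   accesses touch adjacent memory locations. */
double sum_row_order(void)
{
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Cache-hostile: the inner loop varies the first index, so successive
   accesses are a whole row (N doubles) apart. */
double sum_col_order(void)
{
    double sum = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}

In Fortran the layout is reversed (column by column), so the leftmost index should vary fastest; this is exactly the rule applied in the Array Accesses discussion below.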
6 out of the 10 codes studied here benefited from such high level optimizations.

Array Accesses

The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

do I = 0, 1010, delta_x
  IM = I - delta_x
  IP = I + delta_x
  do J = 5, 995, delta_x
    JM = J - delta_x
    JP = J + delta_x
    T1 = CA1(IP, J) + CA1(I, JP)
    T2 = CA1(IM, J) + CA1(I, JM)
    S1 = T1 + T2 - 4 * CA1(I, J)
    CA(I, J) = CA1(I, J) + D * S1
  end do
end do

In code 2, the culprit is conditionals:

do I = 1, N
  do J = 1, N
    if (IFLAG(I,J) .EQ. 0) then
      T1 = Value(I, J-1)
      T2 = Value(I-1, J)
      T3 = Value(I, J)
      T4 = Value(I+1, J)
      T5 = Value(I, J+1)
      Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
      Delta = ABS(T3 - Value(I,J))
      if (Delta .GT. MaxDelta) MaxDelta = Delta
    endif
  enddo
enddo

I fixed both programs by inverting the loops by hand.

Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above, the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

L1: for i, for l, for k, for j
L2: for i, for l, for j, for k
L3: for i, for j, for k, for l

So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

Array Strides

When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
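Before looking at the two codes, a small sketch of my own showing the cost in isolation (assuming 8-byte doubles and the 64-byte lines mentioned above): both functions below sum n elements, but the strided version touches only one double per cache line, so it takes roughly one miss per element instead of one per eight.

/* Unit stride: about one cache miss per 8 doubles. */
double sum_unit(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Stride 8: one double per 64-byte line, about one miss per double.
   x must hold at least 8*n elements. */
double sum_stride8(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < 8 * n; i += 8)
        s += x[i];
    return s;
}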
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

do j = 1, GZ
  do i = 1, GZ
    T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
    T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
    S1 = T1 + T4 - 4 * CA1(i+0, j+0)
    CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
  enddo
enddo

where CA and CA1 are compressed arrays of size GZ.

Code 7 traverses a list of objects, selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

Data reuse

In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small.

In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

do J = 1, GZ-2, 2
  do I = 1, GZ-2, 2
    T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
    T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
    T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
    T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
    T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
    T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
    T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
    S1 = T1 + T4 - 4 * CA1(i+0, j+0)
    S2 = T2 + T5 - 4 * CA1(i+1, j+0)
    S3 = T3 + T6 - 4 * CA1(i+0, j+1)
    S4 = T4 + T7 - 4 * CA1(i+1, j+1)
    CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
    CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
    CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
    CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
  enddo
enddo

The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before and after code for one of the loops unrolled 8 times. Before:

for (k = 0; k < NK[u]; k++) {
    sum = 0.0;
    for (y = 0; y < NY; y++) {
        sum += W[y][u][k] * delta[y];
    }
    backprop[i++] = sum;
}

After:

for (k = 0; k < KK - 8; k += 8) {
    sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
    sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
    for (y = 0; y < NY; y++) {
        sum0 += W[y][0][k+0] * delta[y];
        sum1 += W[y][0][k+1] * delta[y];
        sum2 += W[y][0][k+2] * delta[y];
        sum3 += W[y][0][k+3] * delta[y];
        sum4 += W[y][0][k+4] * delta[y];
        sum5 += W[y][0][k+5] * delta[y];
        sum6 += W[y][0][k+6] * delta[y];
        sum7 += W[y][0][k+7] * delta[y];
    }
    backprop[k+0] = sum0;
    backprop[k+1] = sum1;
    backprop[k+2] = sum2;
    backprop[k+3] = sum3;
    backprop[k+4] = sum4;
    backprop[k+5] = sum5;
    backprop[k+6] = sum6;
    backprop[k+7] = sum7;
}

Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

Reducing instruction count

Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

Memory operations

The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions, since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because, in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

for (y = 0; y < NY; y++) {
    i = 0;
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += delta[y] * I1[i++];
        }
    }
}

Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. In reality, dW and delta do not overlap in memory, so I rewrote the loop as

for (y = 0; y < NY; y++) {
    i = 0;
    Dy = delta[y];
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += Dy * I1[i++];
        }
    }
}
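Hoisting by hand, as above, is the portable fix. A side note of my own (not from the paper): C99 added the restrict qualifier, which lets the programmer make the no-aliasing guarantee directly to the compiler. A minimal sketch, with hypothetical names:

/* Sketch only. 'restrict' promises the compiler that y, x, and a never
   alias, so *a can be kept in a register across the loop instead of
   being reloaded after every store to y[i]. */
void axpy(int n, double *restrict y,
          const double *restrict x, const double *restrict a)
{
    for (int i = 0; i < n; i++)
        y[i] += (*a) * x[i];
}

The promise cuts both ways: if the pointers do overlap at run time, the behavior is undefined, so restrict is only safe where the non-overlap is genuinely known. That is exactly the knowledge the hand-hoisted version above relies on.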
Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

#define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

a0 = MAT4D(a,q,0,j,k)

before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i].

A similar problem appears in code 6, a Fortran program. The key loop in this program is

do n1 = 1, nh
  nx1 = (n1 - 1) / nz + 1
  nz1 = n1 - nz * (nx1 - 1)
  do n2 = 1, nh
    nx2 = (n2 - 1) / nz + 1
    nz2 = n2 - nz * (nx2 - 1)
    ndx = nx2 - nx1
    ndy = nz2 - nz1
    gxx = grn(1,ndx,ndy)
    gyy = grn(2,ndx,ndy)
    gxy = grn(3,ndx,ndy)
    balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
    balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
  end do
end do

The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N*M possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

for (i = 0; i < N; i += 8) {
    for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
            sum0 += A[i+0][k] * B[j][k];
            sum1 += A[i+1][k] * B[j][k];
            sum2 += A[i+2][k] * B[j][k];
            sum3 += A[i+3][k] * B[j][k];
            sum4 += A[i+4][k] * B[j][k];
            sum5 += A[i+5][k] * B[j][k];
            sum6 += A[i+6][k] * B[j][k];
            sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0;
        C[i+1][j] = sum1;
        C[i+2][j] = sum2;
        C[i+3][j] = sum3;
        C[i+4][j] = sum4;
        C[i+5][j] = sum5;
        C[i+6][j] = sum6;
        C[i+7][j] = sum7;
    }
}

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

In code 5, we have the data version of the index optimization in code 6.
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

for (j = 0; j < N; j++) {
    for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ?
              high * R * R / (1.0 + pow(PRM[4] * r, 2.0)) :
              low * pow(R / PRM[6], PRM[5]);
        < rest of loop omitted >
    }
}

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array:

for (i = 0; i < M; i++) {
    r = i * hrmax;
    TEMP[i] = pow(r, PRM[3]);
}

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.]

We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

for (j = 0; j < N; j++) {
    R = rig[j] / 1000.;
    tmp1 = kcoeff * par[2] * beta[j] * par[4];
    tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
    tmp3 = 1.0 + (par[4] * par[4] * R * R);
    tmp4 = par[6] * par[6] / tmp2;
    tmp5 = R * R / tmp3;
    tmp6 = pow(R / par[6], par[5]);
    if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp5;
    } else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp4 * tmp6;
    } else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp5;
    } else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
    }
    for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
    }
}

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.
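In C the same problem is normally solved with a pointer swap. A sketch of my own (not from the paper), where compute_step, nsteps, and the two buffers are hypothetical names:

void compute_step(double *dst, const double *src);  /* hypothetical: reads old, writes new */

/* Double-buffering sketch: buf_a and buf_b are two equally sized work
   arrays. Swapping pointers replaces an O(N*N) copy per time step. */
void iterate(double *buf_a, double *buf_b, int nsteps)
{
    double *old_vals = buf_a;
    double *new_vals = buf_b;
    for (int t = 0; t < nsteps; t++) {
        compute_step(new_vals, old_vals);
        double *tmp = old_vals;   /* swap instead of copying */
        old_vals = new_vals;
        new_vals = tmp;
    }
}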
The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

MaxDelta = 0.0
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    if (Delta > MaxDelta) MaxDelta = Delta
  enddo
enddo
if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

MaxDelta = .false.
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
  enddo
enddo
if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops.

As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10, where I split the loop

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

into two disjoint loops:

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
  end do
end do

do i = 1, n
  do j = 1, m
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their own codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research, as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • data is not inserting in my db table [closed]

    - by Sarojit Chakraborty
Please see the addZoneToDb method in my SubjectDetailsDao.java code below. The debugger runs fine up to session.getTransaction().commit(), but it stops after that line and I do not know why. Because of this, I am unable to insert data into my database table. Why is the data not being inserted? Please help me with this. Now I am getting this error:

Struts Problem Report

Struts has detected an unhandled exception:

Messages: org.hibernate.event.PreInsertEvent.getSource()Lorg/hibernate/event/EventSource;
File: org/hibernate/validator/event/ValidateEventListener.java
Line number: 172

Stacktraces

java.lang.reflect.InvocationTargetException
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:441)
com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:280)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:243)
com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:165)
com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:252)
org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68)
com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.ConversionErrorInterceptor.intercept(ConversionErrorInterceptor.java:122)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195)
com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.ParametersInterceptor.doIntercept(ParametersInterceptor.java:195)
com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.StaticParametersInterceptor.intercept(StaticParametersInterceptor.java:179)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
org.apache.struts2.interceptor.MultiselectInterceptor.intercept(MultiselectInterceptor.java:75)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
org.apache.struts2.interceptor.CheckboxInterceptor.intercept(CheckboxInterceptor.java:94)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
org.apache.struts2.interceptor.FileUploadInterceptor.intercept(FileUploadInterceptor.java:235)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.ModelDrivenInterceptor.intercept(ModelDrivenInterceptor.java:89)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.ScopedModelDrivenInterceptor.intercept(ScopedModelDrivenInterceptor.java:130)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
org.apache.struts2.interceptor.debugging.DebuggingInterceptor.intercept(DebuggingInterceptor.java:267)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.ChainingInterceptor.intercept(ChainingInterceptor.java:126)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.PrepareInterceptor.doIntercept(PrepareInterceptor.java:138)
com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.I18nInterceptor.intercept(I18nInterceptor.java:165)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
org.apache.struts2.interceptor.ServletConfigInterceptor.intercept(ServletConfigInterceptor.java:164)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.AliasInterceptor.intercept(AliasInterceptor.java:179)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.interceptor.ExceptionMappingInterceptor.intercept(ExceptionMappingInterceptor.java:176)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
org.apache.struts2.impl.StrutsActionProxy.execute(StrutsActionProxy.java:52)
org.apache.struts2.dispatcher.Dispatcher.serviceAction(Dispatcher.java:488)
org.apache.struts2.dispatcher.ng.ExecuteOperations.executeAction(ExecuteOperations.java:77)
org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter.doFilter(StrutsPrepareAndExecuteFilter.java:91)
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:240)
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:164)
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:498)
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:562)
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:394)
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:243)
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:188)
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722)

java.lang.NoSuchMethodError: org.hibernate.event.PreInsertEvent.getSource()Lorg/hibernate/event/EventSource;
org.hibernate.validator.event.ValidateEventListener.onPreInsert(ValidateEventListener.java:172)
org.hibernate.action.EntityInsertAction.preInsert(EntityInsertAction.java:156)
org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:49)
org.hibernate.engine.ActionQueue.execute(ActionQueue.java:250)
org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:234)
org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:141)
org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:338)
org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
v.esoft.dao.SubjectdetailsDAO.SubjectdetailsDAO.addZoneToDb(SubjectdetailsDAO.java:185)
v.esoft.actions.LoginAction.datatobeinsert(LoginAction.java:53)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
com.opensymphony.xwork2.DefaultActionInvocation.invokeAction(DefaultActionInvocation.java:441)
com.opensymphony.xwork2.DefaultActionInvocation.invokeActionOnly(DefaultActionInvocation.java:280)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:243)
com.opensymphony.xwork2.interceptor.DefaultWorkflowInterceptor.doIntercept(DefaultWorkflowInterceptor.java:165)
com.opensymphony.xwork2.interceptor.MethodFilterInterceptor.intercept(MethodFilterInterceptor.java:87)
com.opensymphony.xwork2.DefaultActionInvocation.invoke(DefaultActionInvocation.java:237)
com.opensymphony.xwork2.validator.ValidationInterceptor.doIntercept(ValidationInterceptor.java:252)
org.apache.struts2.interceptor.validation.AnnotationValidationInterceptor.doIntercept(AnnotationValidationInterceptor.java:68)
...............................
...............................
SubjectDetailsDao.java (the problem is in addZoneToDb):

package v.esoft.dao.SubjectdetailsDAO;

import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.hibernate.HibernateException;
import org.hibernate.Query;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.criterion.Order;
import com.opensymphony.xwork2.ActionSupport;
import v.esoft.connection.HibernateUtil;
import v.esoft.pojos.Subjectdetails;

public class SubjectdetailsDAO extends ActionSupport {

    private static Session session = null;
    private static SessionFactory sessionFactory = null;
    static Transaction transaction = null;
    private String currentDate;
    SimpleDateFormat formatter1 = new SimpleDateFormat("yyyy-MM-dd");
    private java.util.Date currentdate;

    public SubjectdetailsDAO() {
        sessionFactory = HibernateUtil.getSessionFactory();
        SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
        currentdate = new java.util.Date();
        currentDate = formatter.format(currentdate);
    }

    public List getAllCustomTempleteRoutinesForGrid() {
        List list = new ArrayList();
        try {
            session = sessionFactory.openSession();
            list = session.createCriteria(Subjectdetails.class).addOrder(Order.desc("subjectId")).list();
        } catch (Exception e) {
            System.out.println("Exepetion in getAllCustomTempleteRoutines" + e);
        } finally {
            try {
                // HibernateUtil.shutdown();
            } catch (Exception e) {
                System.out.println("Exception In getExerciseListByLoginId Resource closing :" + e);
            }
        }
        return list;
    }

    //**showing list on grid
    private static List<Subjectdetails> custLst = new ArrayList<Subjectdetails>();
    static int total = 50;

    static {
        SubjectdetailsDAO cts = new SubjectdetailsDAO();
        Iterator iterator1 = cts.getAllCustomTempleteRoutinesForGrid().iterator();
        while (iterator1.hasNext()) {
            Subjectdetails get = (Subjectdetails) iterator1.next();
            custLst.add(get);
        }
    }

    /**** update Routines List by WorkId ****/
    public int updatesub(Subjectdetails s) {
        int updated = 0;
        try {
            session = sessionFactory.openSession();
            transaction = session.beginTransaction();
            Query query = session.createQuery("UPDATE Subjectdetails set subjectName = :routineName1 WHERE subjectId=:workoutId1");
            query.setString("routineName1", s.getSubjectName());
            query.setInteger("workoutId1", s.getSubjectId());
            updated = query.executeUpdate();
            if (updated != 0) {
            }
            transaction.commit();
        } catch (Exception e) {
            if (transaction != null && transaction.isActive()) {
                try {
                    transaction.rollback();
                } catch (Exception e1) {
                    System.out.println("Exception in addUser() Rollback :" + e1);
                }
            }
        } finally {
            try {
                session.flush();
                session.close();
            } catch (Exception e) {
                System.out.println("Exception In addUser Resource closing :" + e);
            }
        }
        return updated;
    }

    /**** update Routines List by WorkId ****/
    public int addSubjectt(Subjectdetails s) {
        int inserted = 0;
        Subjectdetails ss = new Subjectdetails();
        try {
            session = sessionFactory.openSession();
            transaction = session.beginTransaction();
            ss.setSubjectName(s.getSubjectName());
            session.save(ss);
            System.out.println("Successfully data insert in database");
            inserted++;
            if (inserted != 0) {
            }
            transaction.commit();
        } catch (Exception e) {
            if (transaction != null && transaction.isActive()) {
                try {
                    transaction.rollback();
                } catch (Exception e1) {
                    System.out.println("Exception in addUser() Rollback :" + e1);
                }
            }
        } finally {
            try {
                session.flush();
                session.close();
            } catch (Exception e) {
                System.out.println("Exception In addUser Resource closing :" + e);
            }
        }
        return inserted;
    }

    /**** Get all Routines List by LoginID ****/
    public List getSubjects() {
        List list = null;
        try {
            session = sessionFactory.openSession();
            list = session.createCriteria(Subjectdetails.class).list();
        } catch (Exception e) {
            System.out.println("Exception in getRoutineList() :" + e);
        } finally {
            try {
                session.flush();
                session.close();
            } catch (Exception e) {
                System.out.println("Exception In getUserList Resource closing :" + e);
            }
        }
        return list;
    }

    //---\
    public int addZoneToDb(String countryName, Integer loginId) {
        int inserted = 0;
        try {
            System.out.println("---------1--------");
            Session session = HibernateUtil.getSessionFactory().openSession();
            System.out.println("---------2------session--" + session);
            session.beginTransaction();
            Subjectdetails country = new Subjectdetails(countryName, loginId, currentdate, loginId, currentdate);
            System.out.println("---------2------country--" + country);
            session.save(country);
            System.out.println("-------after save--");
            inserted++;
            session.getTransaction().commit();
            System.out.println("-------after commits--");
        } catch (Exception e) {
            if (transaction != null && transaction.isActive()) {
                try {
                    transaction.rollback();
                } catch (Exception e1) {
                }
            }
        } finally {
            try {
            } catch (Exception e) {
            }
        }
        return inserted;
    }

    //--
    public int nextId() { return total++; }

    public List<Subjectdetails> buildList() { return custLst; }

    public static int count() { return custLst.size(); }

    public static List<Subjectdetails> find(int o, int q) { return custLst.subList(o, q); }

    public void save(Subjectdetails c) { custLst.add(c); }

    public static Subjectdetails findById(Integer id) {
        try {
            for (Subjectdetails c : custLst) {
                if (c.getSubjectId() == id) {
                    return c;
                }
            }
        } catch (Exception e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        return null;
    }

    public void update(Subjectdetails c) {
        for (Subjectdetails x : custLst) {
            if (x.getSubjectId() == c.getSubjectId()) {
                x.setSubjectName(c.getSubjectName());
            }
        }
    }

    public void delete(Subjectdetails c) { custLst.remove(c); }

    public static List<Subjectdetails> findNotById(int id, int from, int to) {
        List<Subjectdetails> subLst = custLst.subList(from, to);
        List<Subjectdetails> temp = new ArrayList<Subjectdetails>();
        for (Subjectdetails c : subLst) {
            if (c.getSubjectId() != id) {
                temp.add(c);
            }
        }
        return temp;
    }

    public static List<Subjectdetails> findLesserAsId(int id, int from, int to) {
        List<Subjectdetails> subLst = custLst.subList(from, to);
        List<Subjectdetails> temp = new ArrayList<Subjectdetails>();
        for (Subjectdetails c : subLst) {
            if (c.getSubjectId() <= id) {
                temp.add(c);
            }
        }
        return temp;
    }

    public static List<Subjectdetails> findGreaterAsId(int id, int from, int to) {
        List<Subjectdetails> subLst = custLst.subList(from, to);
        List<Subjectdetails> temp = new ArrayList<Subjectdetails>();
        for (Subjectdetails c : subLst) {
            if (c.getSubjectId() >= id) {
                temp.add(c);
            }
        }
        return temp;
    }
}

Subjectdetails.hbm.xml:

<hibernate-mapping>
    <class name="vb.sofware.pojos.Subjectdetails" table="subjectdetails" catalog="vbsoftware">
        <id name="subjectId" type="int">
            <column name="subject_id" />
            <generator class="increment" />
        </id>
        <property name="subjectName" type="string">
            <column name="subject_name" length="150" />
        </property>
        <property name="createrId" type="java.lang.Integer">
            <column name="creater_id" />
        </property>
        <property name="createdDate" type="timestamp">
            <column name="created_date" length="19" />
        </property>
        <property name="updateId" type="java.lang.Integer">
            <column name="update_id" />
        </property>
        <property name="updatedDate" type="timestamp">
            <column name="updated_date" length="19" />
        </property>
    </class>
</hibernate-mapping>

My POJO - Subjectdetails.java:

package v.esoft.pojos;

// Generated Oct 6, 2012 1:58:21 PM by Hibernate Tools 3.4.0.CR1

import java.util.Date;

/**
 * Subjectdetails generated by hbm2java
 */
public class Subjectdetails implements java.io.Serializable {

    private int subjectId;
    private String subjectName;
    private Integer createrId;
    private Date createdDate;
    private Integer updateId;
    private Date updatedDate;

    public Subjectdetails(String subjectName) {
        //this.subjectId = subjectId;
        this.subjectName = subjectName;
    }

    public Subjectdetails() {
    }

    public Subjectdetails(int subjectId) {
        this.subjectId = subjectId;
    }

    public Subjectdetails(int subjectId, String subjectName, Integer createrId,
            Date createdDate, Integer updateId, Date updatedDate) {
        this.subjectId = subjectId;
        this.subjectName = subjectName;
        this.createrId = createrId;
        this.createdDate = createdDate;
        this.updateId = updateId;
        this.updatedDate = updatedDate;
    }

    public Subjectdetails(String subjectName, Integer createrId, Date createdDate,
            Integer updateId, Date updatedDate) {
        this.subjectName = subjectName;
        this.createrId = createrId;
        this.createdDate = createdDate;
        this.updateId = updateId;
        this.updatedDate = updatedDate;
    }

    public int getSubjectId() { return this.subjectId; }
    public void setSubjectId(int subjectId) { this.subjectId = subjectId; }

    public String getSubjectName() { return this.subjectName; }
    public void setSubjectName(String subjectName) { this.subjectName = subjectName; }

    public Integer getCreaterId() { return this.createrId; }
    public void setCreaterId(Integer createrId) { this.createrId = createrId; }

    public Date getCreatedDate() { return this.createdDate; }
    public void setCreatedDate(Date createdDate) { this.createdDate = createdDate; }

    public Integer getUpdateId() { return this.updateId; }
    public void setUpdateId(Integer updateId) { this.updateId = updateId; }

    public Date getUpdatedDate() { return this.updatedDate; }
    public void setUpdatedDate(Date updatedDate) { this.updatedDate = updatedDate; }
}

And my SQL query is:

CREATE TABLE IF NOT EXISTS `subjectdetails` (
  `subject_id` int(3) NOT NULL,
  `subject_name` varchar(150) DEFAULT NULL,
  `creater_id` int(5) DEFAULT NULL,
  `created_date` datetime DEFAULT NULL,
  `update_id` int(5) DEFAULT NULL,
  `updated_date` datetime DEFAULT NULL,
  PRIMARY KEY (`subject_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    Read the article

< Previous Page | 1 2 3 4