Search Results

Search found 2455 results on 99 pages for 'lambda expressions'.

Page 57/99 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Tomcat memory usage grows until crash with no GC run

    - by Phil
    I'm administering a server running Tomcat that is getting a lot of traffic lately. If I monitor memory usage in Task Manager, I can see the memory usage growing, and eventually Tomcat crashes around the 1GB mark. Here are the memory-relevant bits I've set in Tomcat Properties (this is a Windows Server): Initial memory pool: 1024 MB; Maximum memory pool: 1024 MB; -XX:MaxPermSize=256M. The weird thing is that since these problems arose I've deployed Lambda Probe to the Tomcat instance, and the memory usage values I see there are much lower; for example, Task Manager might show 467MB used while the "Total" used in Probe is 212 MB. Also, the Maximum Total listed in Probe is 1.29GB, when I would have expected 1GB, the maximum memory set above. If I force the garbage collector to run using Probe, I can keep Tomcat from crashing for a while (indefinitely, AFAIK). So why doesn't the GC run automatically and stop Tomcat from crashing? Thanks.

    Read the article

  • What files should be excluded from a complete Windows backup?

    - by tro
    I'm starting to use CrashPlan to back up my Win 7 PC. I've got it writing to my external HD (for quick local restores) and to CrashPlan Central (for offsite storage). I'd like to back up my entire C:\ drive (the only partition) in a way that: Preserves all of my installed software and configuration, but Avoids backing up log files and other ephemeral / temporary files that are regenerated during normal operation of the OS. Which files and/or directories should I be excluding from backups? I'd like to make this a community wiki, so that we could all contribute towards a definitive list. Here's the list of regular expressions identifying the directories and files that CrashPlan excludes on Windows by default, as listed at http://support.crashplan.com/doku.php/articles/admin_excludes:
    .*/(?:42|\d{8,})/(?:cp|~).*
    (?i).*/CrashPlan.*/(?:cache|log|conf|manifest|upgrade)/.*
    .*\.part
    .*/iPhoto Library/iPod Photo Cache/.*
    .*\.cprestoretmp.*
    *\.rbf
    :/Config\\.Msi.*
    .*/Google/Chrome/.*cache.*
    .*/Mozilla/Firefox/.*cache.*
    .*\$RECYCLE\.BIN/.*
    .*/System Volume Information/.*
    .*/RECYCLER/.*
    .*/I386.*
    .*/pagefile.sys
    .*/MSOCache.*
    .*UsrClass\.dat\.LOG
    .*UsrClass\.dat
    .*/Temporary Internet Files/.*
    (?i).*/ntuser.dat.*
    .*/Local Settings/Temp.*
    .*/AppData/Local/Temp.*
    .*/AppData/Temp.*
    .*/Windows/Temp.*
    (?i).*/Microsoft.*/Windows/.*\.log
    .*/Microsoft.*/Windows/Cookies.*
    .*/Microsoft.*/RecoveryStore.*
    (?i).:/Config\\.Msi.*
    (?i).*\\.rbf
    .*/Windows/Installer.*
    Other excludes:
    .*\.(class|obj)
    .*/hiberfil.sys
    (?i).*\.tmp
    (?i).*/temp/
    (?i).*/tmp/
    .*Thumbs\.db
    .*/Local Settings/History/
    .*/NetHood/
    .*/PrintHood/
    .*/Cookies/
    .*/Recent/
    .*/SendTo/
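
    If it helps to sanity-check a list like this, here is a minimal Python sketch (the helper name and the small subset of patterns are illustrative, not part of CrashPlan) that tests whether a given path would be caught by one of the excludes:

```python
import re

# Small illustrative subset of the exclude patterns listed above.
EXCLUDE_PATTERNS = [
    r".*/pagefile\.sys",
    r".*/System Volume Information/.*",
    r"(?i).*/ntuser.dat.*",
    r".*/AppData/Local/Temp.*",
    r"(?i).*\.tmp",
]

def is_excluded(path):
    """Return True if the path matches any exclude pattern (hypothetical helper)."""
    normalized = path.replace("\\", "/")  # the patterns use forward slashes
    return any(re.fullmatch(p, normalized) for p in EXCLUDE_PATTERNS)

print(is_excluded(r"C:\Users\tro\AppData\Local\Temp\foo.txt"))  # True
print(is_excluded(r"C:\Users\tro\Documents\report.docx"))       # False
```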

    Read the article

  • Is there a way to have customised text instead of subject in Thunderbird's inbox list?

    - by peterp
    I am getting a lot of informational emails like "You've got a new message from ..." or "Notification of Donation Received", which often do not contain any information in the subject, so that I have to open the email to see who sent the message or who donated which amount. I'd love to be able to make TB parse incoming emails and then display something interesting instead of the original subject, e.g. by defining a regular expression pattern. I know how to write regular expressions, but I do not know whether there is a way or an addon to modify the displayed text in the messages view. EDIT for clarification: I would like donation notifications from PayPal not to be displayed with the original subject "Notification of Donation Received" but rather as "PayPal: John Doe has donated 50$".
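
    For illustration, this is the kind of rewrite rule being described, sketched in Python (the message-body layout, field names, and function are invented assumptions; this is not a Thunderbird API):

```python
import re

# Hypothetical rewrite: turn the generic donation subject into a summary line,
# pulling donor and amount out of the message body. The body layout is made up.
def rewrite_subject(subject, body):
    if subject == "Notification of Donation Received":
        m = re.search(r"Donor:\s*(?P<name>.+?)\s+Amount:\s*(?P<amount>\$?\d+(?:\.\d{2})?)", body)
        if m:
            return "PayPal: {} has donated {}".format(m.group("name"), m.group("amount"))
    return subject

print(rewrite_subject("Notification of Donation Received",
                      "Donor: John Doe Amount: $50"))
# -> PayPal: John Doe has donated $50
```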

    Read the article

  • Is there a smarter Find files utility for Windows 8 than Windows key + F?

    - by Clay Shannon
    Is there any utility for Windows 8 that will basically do the same thing the old "Find" dialog in Explorer did? Oftentimes (many times a day) I need to find a particular file, and I don't know its name or where it is, but I can remember a phrase in it and approximately when it was written; e.g., it has the phrase "Duckbilled Platypus" in it and was written sometime in the last week. The Find Files functionality in Windows 8 is lame by comparison; I know there are probably geeky ways to jump through hoops and do it, but I don't want to have to write grep expressions, I want something easy like the old functionality...
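
    As a point of reference, the kind of search being described can be expressed in a few lines of Python (the folder path below is an assumption, and this is only a sketch, not a recommendation of a particular tool):

```python
import os
import time

# Walk a folder and report files modified within the last `days` days
# whose contents include the given phrase (case-insensitive).
def find_files(root, phrase, days=7):
    cutoff = time.time() - days * 24 * 3600
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    continue
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    if phrase.lower() in f.read().lower():
                        yield path
            except OSError:
                continue

for hit in find_files(r"C:\Users\Clay\Documents", "Duckbilled Platypus"):
    print(hit)
```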

    Read the article

  • Why isn't 'ether proto \ip host host' a legal tcpdump expression?

    - by Ezequiel Garzon
    In its description of valid tcpdump expressions, the pcap-filter man pages state: The filter expression consists of one or more primitives. Primitives usually consist of an id (name or number) preceded by one or more qualifiers. In turn, these qualifiers are type, dir and proto. So far so good, but further down we find this: ip host host which is equivalent to: ether proto \ip and host host In the first case, ip and host are, respectively, proto and type. What pattern does ether proto \ip follow? Isn't that, as a whole, a proto qualifier? If so, why isn't (a properly escaped) 'ether proto \ip host host' legal (no and)?

    Read the article

  • How to remove line breaks (or carriage returns) only from certain parts of a block of text?

    - by Luke Allen
    Whenever I copy formatted text from a PDF file which is formatted to have line breaks (or carriage returns), I need to find a way to remove these line breaks without removing the paragraph format. To do this I need to use RegEx (regular expressions) to remove only the line breaks which aren't preceded by a period. So for example, if a string of text has a line break right after a period, that is obviously almost always a legitimate line break which will start a new paragraph. If a string of text has a line break mid-word or after a word with no period, it's simply part of the bad formatting I need to get rid of. My problem is that I don't know how to use RegEx to make it remove only the ^p tags in Word, or CRLF, or line breaks in any format, under the condition that it skips the ones following a period.
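
    As a minimal sketch of that rule, here is the same substitution written with Python's re module (the sample text is invented): a negative lookbehind keeps any break that follows a period and rejoins everything else with a space.

```python
import re

# Remove a line break (LF or CRLF) unless the character before it is a period.
def unwrap(text):
    return re.sub(r"(?<!\.)\r?\n", " ", text)

pdf_text = "This sentence was wrapped in the\nmiddle of a line.\nThis starts a real paragraph."
print(unwrap(pdf_text))
# This sentence was wrapped in the middle of a line.
# This starts a real paragraph.
```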

    Read the article

  • Reg Expression htaccess RewriteRule

    - by Rick
    I am new to using regular expressions for rewriting URLs in htaccess. I need to redirect mysite.com/123 to mysite.com/ if a cookie named 'ref' is set. My current htaccess is:
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteCond %{HTTP_COOKIE} ref=true [NC]
    RewriteRule ^/([0-9]+)/$ http://www.mysite.com
    </IfModule>
    The goal is that when someone enters the site with mysite.com/111 (some number), they are redirected to the home page of the site after the cookie is set. Be nice... I'm new! ;o)

    Read the article

  • Braces (syntax) highlighting in OpenOffice Math formula text editor

    - by Oleksandr Bolotov
    When you use OpenOffice Math, you see the formula in the upper part and the formula text editor in the lower part. Almost like this: %sigma = 2 %mu %epsilon + %lambda Tr(%epsilon)I So my questions are: How can I replace OpenOffice Math's formula text editor with my own text editor? ... or how can I enable brace (syntax) highlighting in the embedded editor? ... are there any extensions for anything like this? I need this because sometimes there are so many braces and other symbols that it's hard to distinguish which braces match each other. Please do not suggest that I use MathType or Mathematica (or anything else) instead of OpenOffice Math (because I'm almost happy with it :)

    Read the article

  • AdBlock Plus Advanced Element Hiding?

    - by funkafied
    I'm trying to block a certain element on a site using AdBlock Plus's element hiding feature. However the problem is that there are two elements with the same exact name and type that I'm trying to hide so there's no way to tell the filter which one to keep and which one not to keep. So I figure there might be a way to hide only the second element by telling it to only hide the second occurrence of an element that matches the filter. Like skip the first one and hide the second occurrence. Or alternatively maybe hide the one that also has a certain other element in front of it. Is there any way to do this? Like regular expressions or something?

    Read the article

  • Getting rid of GNU Emacs's menu bar in terminal windows

    - by Ernest A
    How to get rid of Emacs's menu bar in terminal windows? The standard answer is to put (when (not (display-graphic-p)) (menu-bar-mode -1)) in init.el. However, this solution is not good, because all it does is remove the menu bar after the fact. You can still see it for a split second. It's very annoying. Looking at the source code in startup.el I don't see an obvious solution to this problem. I think the only way is to use before-init-hook. Maybe this could do the trick? (add-hook 'before-init-hook (lambda () (setq emacs-basic-display t))) But this hook is run before init.el and other init files are evaluated, so how is one supposed to use it?

    Read the article

  • Find Files That Contain Both Words in Notepad++

    - by SethO
    In Notepad++ (v5.9), I want to search for files which contain two words. For example, I would like to find all text files in a directory that have both Alpha and Bravo in the file. They may not be next to each other and they may have multiple occurrences of either. I just want to find the files that have at least one instance of each. Is there a way to structure this search without resorting to Regular Expressions? Thanks for the advice.
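
    For comparison, the same search without regular expressions is only a few lines in Python (the directory and extension filter are assumptions; the two words are the example from the question):

```python
from pathlib import Path

# Report files under a directory that contain both words at least once each.
def files_with_both(root, word1, word2, glob="*.txt"):
    for path in Path(root).rglob(glob):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if word1 in text and word2 in text:
            yield path

for match in files_with_both(r"C:\logs", "Alpha", "Bravo"):
    print(match)
```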

    Read the article

  • Quick Replace in Visual Studio 2010 fails to use Tagged Expression n

    - by slomojo
    I'm trying to do some basic regex Quick Replace operations in Visual Studio 2010, but when I use regex grouping I don't get Tagged Expressions (i.e. \1, \2, etc.) returning their values; instead they are blank. For example:
    Text: int a = int.Parse("10"); int b = int.Parse("20"); int c = int.Parse("30");
    Search pattern (regex enabled): int\.Parse\("([0-9]*)"\);
    Replace: \1;
    Replaced text: int a = ; int b = ; int c = ;
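
    For comparison, the intended substitution behaves as expected when run through Python's re module (only as a reference point for the expected output, not a fix for the VS2010 search syntax):

```python
import re

code = 'int a = int.Parse("10");\nint b = int.Parse("20");\nint c = int.Parse("30");'
# Group 1 captures the digits; the replacement keeps just the number and the semicolon.
print(re.sub(r'int\.Parse\("([0-9]*)"\);', r'\1;', code))
# int a = 10;
# int b = 20;
# int c = 30;
```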

    Read the article

  • Case in-sensitivity for Apache httpd Location directive

    - by user57178
    I am working with a solution that requires the use of mod_proxy_balancer and an application server that both ignores case and mixes different case combinations in URLs found in generated content. The configuration works; however, I now have a new requirement that causes problems. I should be able to create a Location directive (as per http://httpd.apache.org/docs/current/mod/core.html#location ) and have the URL-path interpreted in a case-insensitive way. This requirement comes from the need to add authentication directives to the location. As you might guess, users (or the application in question) changing one letter to a capital circumvents the protection instantly. The httpd runs on a Unix platform, so every configuration directive is apparently case sensitive by default. Should the regular expressions in the Location directive work in this case? Could someone please show me an example of such a configuration that should work? In case a regular expression cannot be forced to work case-insensitively, what part of httpd's source code should I go about modifying?

    Read the article

  • Disable incremental search in firefox (and everything else!)

    - by Alan Curry
    All I find from googling "disable incremental search" is a bunch of people telling me how great incremental search is. It isn't. Firefox has the worst version of it, jumping around and making me lose my place because of a search I haven't even finished typing yet. I don't want the window scrolling up and down without my say-so. It would be nice if I could search with regular expressions, like text search has been done in every non-toy application since ed. But the jumpiness of the window is the overriding concern. How can this evil be defeated?

    Read the article

  • Everything You Ever Wanted to Know about Mod_Rewrite Rules but Were Afraid to Ask?

    - by Kyle Brandt
    How can I become an expert at writing mod_rewrite rules? What is the fundamental format and structure of mod_rewrite rules? What form/flavor of regular expressions do I need to have a solid grasp of? What are the most common mistakes/pitfalls when writing rewrite rules? What is a good method for testing and verifying mod_rewrite rules? Are there SEO or performance implications of mod_rewrite rules I should be aware of? Are there common situations where mod_rewrite might seem like the right tool for the job but isn't? What are some common examples?

    Read the article

  • Variable for the suffix of $request_uri that didn't match the location block prefix

    - by hsivonen
    Suppose I want to move an /images/ directory to an images host so that what was before http://example.org/images/foo.png becomes http://images.example.org/foo.png. If I do: location /images/ { return 301 http://images.example.org$request_uri; }, the result is a redirect to http://images.example.org/images/foo.png, which isn't what I want. An older question has an answer that suggests using a regexp location, but that seems like overkill. Is there really no way to refer to $request_uri with the location prefix chopped off without using regular expressions? Seems like an obvious feature to have.

    Read the article

  • Command-line tool to search for file names on offline backup drives

    - by halloleo
    I am looking for an open-source (command-line) tool to register and search all my (backup) drives at the file-name level. I want to search for file and folder names, preferably written as regular expressions or file glob patterns. The external drives contain just normal HFS and NTFS filesystems. The backups are done via direct file copy. The requirement is that the tool compiles on OS X and works without the drives attached, instead pointing me to the drive that contains a file matching the pattern I searched for. At the moment I use a hand-knit script solution with locate databases, one for each external backup drive, but this is rather cumbersome, because locate itself can access only one database at a time and does not offer any management of all the indices/databases. Are there any other tools out there for this?
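
    In the absence of a ready-made tool, the idea can be roughed out in a few lines of Python (the index layout, function names, and example paths/patterns below are assumptions, not an existing utility): build a small per-drive index of paths while the drive is attached, then grep every index later.

```python
import json
import os
import re

# Index one mounted drive's file paths into indexes/<label>.json.
def index_drive(mount_point, drive_label, index_dir="indexes"):
    os.makedirs(index_dir, exist_ok=True)
    paths = [os.path.join(dirpath, name)
             for dirpath, _dirs, files in os.walk(mount_point) for name in files]
    with open(os.path.join(index_dir, drive_label + ".json"), "w") as fh:
        json.dump(paths, fh)

# Search every stored index with a regular expression; print drive label and path.
def search_indexes(pattern, index_dir="indexes"):
    rx = re.compile(pattern, re.IGNORECASE)
    for name in sorted(os.listdir(index_dir)):
        with open(os.path.join(index_dir, name)) as fh:
            for path in json.load(fh):
                if rx.search(path):
                    print(name[:-len(".json")], path)

# index_drive("/Volumes/Backup2011", "Backup2011")   # run once per attached drive
# search_indexes(r"holiday.*\.jpe?g$")                # works with no drive attached
```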

    Read the article

  • overriding ctype<wchar_t>

    - by Potatoswatter
    I'm writing a lambda calculus interpreter for fun and practice. I got iostreams to properly tokenize identifiers by adding a ctype facet which defines punctuation as whitespace:
    struct token_ctype : ctype<char> {
        mask t[ table_size ];
        token_ctype() : ctype<char>( t ) {
            for ( size_t tx = 0; tx < table_size; ++ tx ) {
                t[tx] = isalnum( tx )? alnum : space;
            }
        }
    };
    (classic_table() would probably be cleaner, but that doesn't work on OS X!) And then swap the facet in when I hit an identifier:
    locale token_loc( in.getloc(), new token_ctype );
    …
    locale const &oldloc = in.imbue( token_loc );
    in.unget() >> token;
    in.imbue( oldloc );
    There seems to be surprisingly little lambda calculus code on the Web. Most of what I've found so far is full of Unicode λ characters, so I thought I'd try adding Unicode support. But ctype<wchar_t> works completely differently from ctype<char>. There is no master table; there are four methods: do_is (two overloads), do_scan_is, and do_scan_not. So I did this:
    struct token_ctype : ctype< wchar_t > {
        typedef ctype<wchar_t> base;
        bool do_is( mask m, char_type c ) const {
            return base::do_is(m,c) || (m&space) && ( base::do_is(punct,c) || c == L'λ' );
        }
        const char_type* do_is (const char_type* lo, const char_type* hi, mask* vec) const {
            base::do_is(lo,hi,vec);
            for ( mask *vp = vec; lo != hi; ++ vp, ++ lo ) {
                if ( *vp & punct || *lo == L'λ' ) *vp |= space;
            }
            return hi;
        }
        const char_type *do_scan_is (mask m, const char_type* lo, const char_type* hi) const {
            if ( m & space ) m |= punct;
            hi = do_scan_is(m,lo,hi);
            if ( m & space ) hi = find( lo, hi, L'λ' );
            return hi;
        }
        const char_type *do_scan_not (mask m, const char_type* lo, const char_type* hi) const {
            if ( m & space ) {
                m |= punct;
                while ( * ( lo = base::do_scan_not(m,lo,hi) ) == L'λ' && lo != hi ) ++ lo;
                return lo;
            }
            return base::do_scan_not(m,lo,hi);
        }
    };
    (Apologies for the flat formatting; the preview converted the tabs differently.) The code is WAY less elegant. It does better express the notion that only punctuation is additional whitespace, but that would've been fine in the original had I had classic_table. Is there a simpler way to do this? Do I really need all those overloads? (Testing showed do_scan_not is extraneous here, but I'm thinking more broadly.) Am I abusing facets in the first place? Is the above even correct? Would it be better style to implement less logic?

    Read the article

  • Matlab: Optimization by perturbing variable

    - by S_H
    My main script contains following code: %# Grid and model parameters nModel=50; nModel_want=1; nI_grid1=5; Nth=1; nRow.Scale1=5; nCol.Scale1=5; nRow.Scale2=5^2; nCol.Scale2=5^2; theta = 90; % degrees a_minor = 2; % range along minor direction a_major = 5; % range along major direction sill = var(reshape(Deff_matrix_NthModel,nCell.Scale1,1)); % variance of the coarse data matrix of size nRow.Scale1 X nCol.Scale1 %# Covariance computation % Scale 1 for ihRow = 1:nRow.Scale1 for ihCol = 1:nCol.Scale1 [cov.Scale1(ihRow,ihCol),heff.Scale1(ihRow,ihCol)] = general_CovModel(theta, ihCol, ihRow, a_minor, a_major, sill, 'Exp'); end end % Scale 2 for ihRow = 1:nRow.Scale2 for ihCol = 1:nCol.Scale2 [cov.Scale2(ihRow,ihCol),heff.Scale2(ihRow,ihCol)] = general_CovModel(theta, ihCol/(nCol.Scale2/nCol.Scale1), ihRow/(nRow.Scale2/nRow.Scale1), a_minor, a_major, sill/(nRow.Scale2*nCol.Scale2), 'Exp'); end end %# Scale-up of fine scale values by averaging [covAvg.Scale2,var_covAvg.Scale2,varNorm_covAvg.Scale2] = general_AverageProperty(nRow.Scale2/nRow.Scale1,nCol.Scale2/nCol.Scale1,1,nRow.Scale1,nCol.Scale1,1,cov.Scale2,1); I am using two functions, general_CovModel() and general_AverageProperty(), in my main script which are given as following: function [cov,h_eff] = general_CovModel(theta, hx, hy, a_minor, a_major, sill, mod_type) % mod_type should be in strings angle_rad = theta*(pi/180); % theta in degrees, angle_rad in radians R_theta = [sin(angle_rad) cos(angle_rad); -cos(angle_rad) sin(angle_rad)]; h = [hx; hy]; lambda = a_minor/a_major; D_lambda = [lambda 0; 0 1]; h_2prime = D_lambda*R_theta*h; h_eff = sqrt((h_2prime(1)^2)+(h_2prime(2)^2)); if strcmp(mod_type,'Sph')==1 || strcmp(mod_type,'sph') ==1 if h_eff<=a cov = sill - sill.*(1.5*(h_eff/a_minor)-0.5*((h_eff/a_minor)^3)); else cov = sill; end elseif strcmp(mod_type,'Exp')==1 || strcmp(mod_type,'exp') ==1 cov = sill-(sill.*(1-exp(-(3*h_eff)/a_minor))); elseif strcmp(mod_type,'Gauss')==1 || strcmp(mod_type,'gauss') ==1 cov = sill-(sill.*(1-exp(-((3*h_eff)^2/(a_minor^2))))); end and function [PropertyAvg,variance_PropertyAvg,NormVariance_PropertyAvg]=... general_AverageProperty(blocksize_row,blocksize_col,blocksize_t,... nUpscaledRow,nUpscaledCol,nUpscaledT,PropertyArray,omega) % This function computes average of a property and variance of that averaged % property using power averaging PropertyAvg=zeros(nUpscaledRow,nUpscaledCol,nUpscaledT); %# Average of property for k=1:nUpscaledT, for j=1:nUpscaledCol, for i=1:nUpscaledRow, sum=0; for a=1:blocksize_row, for b=1:blocksize_col, for c=1:blocksize_t, sum=sum+(PropertyArray((i-1)*blocksize_row+a,(j-1)*blocksize_col+b,(k-1)*blocksize_t+c).^omega); % add all the property values in 'blocksize_x','blocksize_y','blocksize_t' to one variable end end end PropertyAvg(i,j,k)=(sum/(blocksize_row*blocksize_col*blocksize_t)).^(1/omega); % take average of the summed property end end end %# Variance of averageed property variance_PropertyAvg=var(reshape(PropertyAvg,... nUpscaledRow*nUpscaledCol*nUpscaledT,1),1,1); %# Normalized variance of averageed property NormVariance_PropertyAvg=variance_PropertyAvg./(var(reshape(... PropertyArray,numel(PropertyArray),1),1,1)); Question: Using Matlab, I would like to optimize covAvg.Scale2 such that it matches closely with cov.Scale1 by perturbing/varying any (or all) of the following variables 1) a_minor 2) a_major 3) theta Thanks.

    Read the article

  • Preventing multiple repeat selection of synchronized Controls ?

    - by BillW
    The working code sample here synchronizes (single) selection in a TreeView, ListView, and ComboBox via the use of lambda expressions in a dictionary where the Key in the dictionary is a Control, and the Value of each Key is an Action<int. Where I am stuck is that I am getting multiple repetitions of execution of the code that sets the selection in the various controls in a way that's unexpected : it's not recursing : there's no StackOverFlow error happening; but, I would like to figure out why the current strategy for preventing multiple selection of the same controls is not working. Perhaps the real problem here is distinguishing between a selection update triggered by the end-user and a selection update triggered by the code that synchronizes the other controls ? Note: I've been experimenting with using Delegates, and forms of Delegates like Action<T>, to insert executable code in Dictionaries : I "learn best" by posing programming "challenges" to myself, and implementing them, as well as studying, at the same time, the "golden words" of luminaries like Skeet, McDonald, Liberty, Troelsen, Sells, Richter. Note: Appended to this question/code, for "deep background," is a statement of how I used to do things in pre C#3.0 days where it seemed like I did need to use explicit measures to prevent recursion when synchronizing selection. Code : Assume a WinForms standard TreeView, ListView, ComboBox, all with the same identical set of entries (i.e., the TreeView has only root nodes; the ListView, in Details View, has one Column). private Dictionary<Control, Action<int>> ControlToAction = new Dictionary<Control, Action<int>>(); private void Form1_Load(object sender, EventArgs e) { // add the Controls to be synchronized to the Dictionary // with appropriate Action<int> lambda expressions ControlToAction.Add(treeView1, (i => { treeView1.SelectedNode = treeView1.Nodes[i]; })); ControlToAction.Add(listView1, (i => { listView1.Items[i].Selected = true; })); ControlToAction.Add(comboBox1, (i => { comboBox1.SelectedIndex = i; })); } private void synchronizeSelection(int i, Control currentControl) { foreach(Control theControl in ControlToAction.Keys) { // skip the 'current control' if (theControl == currentControl) continue; // for debugging only Console.WriteLine(theControl.Name + " synchronized"); // execute the Action<int> associated with the Control ControlToAction[theControl](i); } } private void treeView1_AfterSelect(object sender, TreeViewEventArgs e) { synchronizeSelection(e.Node.Index, treeView1); } private void listView1_SelectedIndexChanged(object sender, EventArgs e) { // weed out ListView SelectedIndexChanged firing // with SelectedIndices having a Count of #0 if (listView1.SelectedIndices.Count > 0) { synchronizeSelection(listView1.SelectedIndices[0], listView1); } } private void comboBox1_SelectedValueChanged(object sender, EventArgs e) { if (comboBox1.SelectedIndex > -1) { synchronizeSelection(comboBox1.SelectedIndex, comboBox1); } } background : pre C# 3.0 Seems like, back in pre C# 3.0 days, I was always using a boolean flag to prevent recursion when multiple controls were updated. 
For example, I'd typically have code like this for synchronizing a TreeView and ListView : assuming each Item in the ListView was synchronized with a root-level node of the TreeView via a common index : // assume ListView is in 'Details View,' has a single column, // MultiSelect = false // FullRowSelect = true // HideSelection = false; // assume TreeView // HideSelection = false // FullRowSelect = true // form scoped variable private bool dontRecurse = false; private void treeView1_AfterSelect(object sender, TreeViewEventArgs e) { if(dontRecurse) return; dontRecurse = true; listView1.Items[e.Node.Index].Selected = true; dontRecurse = false; } private void listView1_SelectedIndexChanged(object sender, EventArgs e) { if(dontRecurse) return // weed out ListView SelectedIndexChanged firing // with SelectedIndices having a Count of #0 if (listView1.SelectedIndices.Count > 0) { dontRecurse = true; treeView1.SelectedNode = treeView1.Nodes[listView1.SelectedIndices[0]]; dontRecurse = false; } } Then it seems, somewhere around FrameWork 3~3.5, I could get rid of the code to suppress recursion, and there was was no recursion (at least not when synchronizing a TreeView and a ListView). By that time it had become a "habit" to use a boolean flag to prevent recursion, and that may have had to do with using a certain third party control.

    Read the article

  • LINQ Generic Query with inherited base class?

    - by sah302
    I am trying to write some generic LINQ queries for my entities, but am having issue doing the more complex things. Right now I am using an EntityDao class that has all my generics and each of my object class Daos (such as Accomplishments Dao) inherit it, am example: using LCFVB.ObjectsNS; using LCFVB.EntityNS; namespace AccomplishmentNS { public class AccomplishmentDao : EntityDao<Accomplishment>{} } Now my entityDao has the following code: using LCFVB.ObjectsNS; using LCFVB.LinqDataContextNS; namespace EntityNS { public abstract class EntityDao<ImplementationType> where ImplementationType : Entity { public ImplementationType getOneByValueOfProperty(string getProperty, object getValue) { ImplementationType entity = null; if (getProperty != null && getValue != null) { //Nhibernate Example: //ImplementationType entity = default(ImplementationType); //entity = Me.session.CreateCriteria(Of ImplementationType)().Add(Expression.Eq(getProperty, getValue)).UniqueResult(Of InterfaceType)() LCFDataContext lcfdatacontext = new LCFDataContext(); //Generic LINQ Query Here lcfdatacontext.GetTable<ImplementationType>(); lcfdatacontext.SubmitChanges(); lcfdatacontext.Dispose(); } return entity; } public bool insertRow(ImplementationType entity) { if (entity != null) { //Nhibernate Example: //Me.session.Save(entity, entity.Id) //Me.session.Flush() LCFDataContext lcfdatacontext = new LCFDataContext(); //Generic LINQ Query Here lcfdatacontext.GetTable<ImplementationType>().InsertOnSubmit(entity); lcfdatacontext.SubmitChanges(); lcfdatacontext.Dispose(); return true; } else { return false; } } } }             I have gotten the insertRow function working, however I am not even sure how to go about doing getOnebyValueOfProperty, the closest thing I could find on this site was: http://stackoverflow.com/questions/2157560/generic-linq-to-sql-query How can I pass in the column name and the value I am checking against generically using my current set-up? It seems like from that link it's impossible since using a where predicate because entity class doesn't know what any of the properties are until I pass them in. Lastly, I need some way of setting a new object as the return type set to the implementation type, in nhibernate (what I am trying to convert from) it was simply this line that did it: ImplentationType entity = default(ImplentationType); However default is an nhibernate command, how would I do this for LINQ? EDIT: getOne doesn't seem to work even when just going off the base class (this is a partial class of the auto generated LINQ classes). I even removed the generics. 
I tried: namespace ObjectsNS { public partial class Accomplishment { public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause) { Accomplishment entity = new Accomplishment(); if (singleOrDefaultClause != null) { LCFDataContext lcfdatacontext = new LCFDataContext(); //Generic LINQ Query Here entity = lcfdatacontext.Accomplishments.SingleOrDefault(singleOrDefaultClause); lcfdatacontext.Dispose(); } return entity; } } } Get the following error: Error 1 Overload resolution failed because no accessible 'SingleOrDefault' can be called with these arguments: Extension method 'Public Function SingleOrDefault(predicate As System.Linq.Expressions.Expression(Of System.Func(Of Accomplishment, Boolean))) As Accomplishment' defined in 'System.Linq.Queryable': Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Linq.Expressions.Expression(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))'. Extension method 'Public Function SingleOrDefault(predicate As System.Func(Of Accomplishment, Boolean)) As Accomplishment' defined in 'System.Linq.Enumerable': Value of type 'System.Action(Of System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean))' cannot be converted to 'System.Func(Of LCFVB.ObjectsNS.Accomplishment, Boolean)'. 14 LCF Okay no problem I changed: public Accomplishment getOneByWhereClause(Expression<Action<Accomplishment, bool>> singleOrDefaultClause) to: public Accomplishment getOneByWhereClause(Expression<Func<Accomplishment, bool>> singleOrDefaultClause) Error goes away. Alright, but now when I try to call the method via: Accomplishment accomplishment = new Accomplishment(); var result = accomplishment.getOneByWhereClause(x=>x.Id = 4) It doesn't work it says x is not declared. I also tried getOne, and various other Expression =(

    Read the article

  • Complex Rails queries across multiple tables, unions, and will_paginate. Solved.

    - by uberllama
    Hi folks. I've been working on a complex "user feed" type of functionality for a while now, and after experimenting with various union plugins, hacking named scopes, and brute force, have arrived at a solution I'm happy with. S.O. has been hugely helpful for me, so I thought I'd post it here in hopes that it might help others and also to get feedback -- it's very possible that I worked on this so long that I walked down an unnecessarily complicated road. For the sake of my example, I'll use users, groups, and articles. A user can follow other users to get a feed of their articles. They can also join groups and get a feed of articles that have been added to those groups. What I needed was a combined, pageable feed of distinct articles from a user's contacts and groups. Let's begin. user.rb has_many :articles has_many :contacts has_many :contacted_users, :through => :contacts has_many :memberships has_many :groups, :through => :memberships contact.rb belongs_to :user belongs_to :contacted_user, :class_name => "User", :foreign_key => "contacted_user_id" article.rb belongs_to :user has_many :submissions has_many :groups, :through => :submissions group.rb has_many :memberships has_many :users, :through => :memberships has_many :submissions has_many :articles, :through => :submissions Those are the basic models that define my relationships. Now, I add two named scopes to the Article model so that I can get separate feeds of both contact articles and group articles should I desire. article.rb # Get all articles by user's contacts named_scope :by_contacts, lambda {|user| {:joins => "inner join contacts on articles.user_id = contacts.contacted_user_id", :conditions => ["articles.published = 1 and contacts.user_id = ?", user.id]} } # Get all articles in user's groups. This does an additional query to get the user's group IDs, then uses those in an IN clause named_scope :by_groups, lambda {|user| {:select => "DISTINCT articles.*", :joins => :submissions, :conditions => {:submissions => {:group_id => user.group_ids}}} } Now I have to create a method that will provide a UNION of these two feeds into one. Since I'm using Rails 2.3.5, I have to use the construct_finder_sql method to render a scope into its base sql. In Rails 3.0, I could use the to_sql method. user.rb def feed "(#{Article.by_groups(self).send(:construct_finder_sql,{})}) UNION (#{Article.by_contacts(self).send(:construct_finder_sql,{})})" end And finally, I can now call this method and paginate it from my controller using will_paginate's paginate_by_sql method. HomeController.rb @articles = Article.paginate_by_sql(current_user.feed, :page => 1) And we're done! It may seem simple now, but it was a lot of work getting there. Feedback is always appreciated. In particular, it would be great to get away from some of the raw sql hacking. Cheers.

    Read the article

  • Boost::Spirit::Qi autorules -- avoiding repeated copying of AST data structures

    - by phooji
    I've been using Qi and Karma to do some processing on several small languages. Most of the grammars are pretty small (20-40 rules). I've been able to use autorules almost exclusively, so my parse trees consist entirely of variants, structs, and std::vectors. This setup works great for the common case: 1) parse something (Qi), 2) make minor manipulations to the parse tree (visitor), and 3) output something (Karma). However, I'm concerned about what will happen if I want to make complex structural changes to a syntax tree, like moving big subtrees around. Consider the following toy example: A grammar for s-expr-style logical expressions that uses autorules... // Inside grammar class; rule names match struct names... pexpr %= pand | por | var | bconst; pand %= lit("(and ") >> (pexpr % lit(" ")) >> ")"; por %= lit("(or ") >> (pexpr % lit(" ")) >> ")"; pnot %= lit("(not ") >> pexpr >> ")"; ... which leads to parse tree representation that looks like this... struct var { std::string name; }; struct bconst { bool val; }; struct pand; struct por; struct pnot; typedef boost::variant<bconst, var, boost::recursive_wrapper<pand>, boost::recursive_wrapper<por>, boost::recursive_wrapper<pnot> > pexpr; struct pand { std::vector<pexpr> operands; }; struct por { std::vector<pexpr> operands; }; struct pnot { pexpr victim; }; // Many Fusion Macros here Suppose I have a parse tree that looks something like this: pand / ... \ por por / \ / \ var var var var (The ellipsis means 'many more children of similar shape for pand.') Now, suppose that I want negate each of the por nodes, so that the end result is: pand / ... \ pnot pnot | | por por / \ / \ var var var var The direct approach would be, for each por subtree: - create pnot node (copies por in construction); - re-assign the appropriate vector slot in the pand node (copies pnot node and its por subtree). Alternatively, I could construct a separate vector, and then replace (swap) the pand vector wholesale, eliminating a second round of copying. All of this seems cumbersome compared to a pointer-based tree representation, which would allow for the pnot nodes to be inserted without any copying of existing nodes. My question: Is there a way to avoid copy-heavy tree manipulations with autorule-compliant data structures? Should I bite the bullet and just use non-autorules to build a pointer-based AST (e.g., http://boost-spirit.com/home/2010/03/11/s-expressions-and-variants/)?

    Read the article

  • Dynamically loading modules in Python (+ multi processing question)

    - by morpheous
    I am writing a Python package which reads the list of modules (along with ancillary data) from a configuration file. I then want to iterate through each of the dynamically loaded modules and invoke a do_work() function in it which will spawn a new process, so that the code runs ASYNCHRONOUSLY in a separate process. At the moment, I am importing the list of all known modules at the beginning of my main script - this is a nasty hack I feel, and is not very flexible, as well as being a maintenance pain. This is the function that spawns the processes. I will like to modify it to dynamically load the module when it is encountered. The key in the dictionary is the name of the module containing the code: def do_work(work_info): for (worker, dataset) in work_info.items(): #import the module defined by variable worker here... # [Edit] NOT using threads anymore, want to spawn processes asynchronously here... #t = threading.Thread(target=worker.do_work, args=[dataset]) # I'll NOT dameonize since spawned children need to clean up on shutdown # Since the threads will be holding resources #t.daemon = True #t.start() Question 1 When I call the function in my script (as written above), I get the following error: AttributeError: 'str' object has no attribute 'do_work' Which makes sense, since the dictionary key is a string (name of the module to be imported). When I add the statement: import worker before spawning the thread, I get the error: ImportError: No module named worker This is strange, since the variable name rather than the value it holds are being used - when I print the variable, I get the value (as I expect) whats going on? Question 2 As I mentioned in the comments section, I realize that the do_work() function written in the spawned children needs to cleanup after itself. My understanding is to write a clean_up function that is called when do_work() has completed successfully, or an unhandled exception is caught - is there anything more I need to do to ensure resources don't leak or leave the OS in an unstable state? Question 3 If I comment out the t.daemon flag statement, will the code stil run ASYNCHRONOUSLY?. The work carried out by the spawned children are pretty intensive, and I don't want to have to be waiting for one child to finish before spawning another child. BTW, I am aware that threading in Python is in reality, a kind of time sharing/slicing - thats ok Lastly is there a better (more Pythonic) way of doing what I'm trying to do? [Edit] After reading a little more about Pythons GIL and the threading (ahem - hack) in Python, I think its best to use separate processes instead (at least IIUC, the script can take advantage of multiple processes if they are available), so I will be spawning new processes instead of threads. I have some sample code for spawning processes, but it is a bit trivial (using lambad functions). I would like to know how to expand it, so that it can deal with running functions in a loaded module (like I am doing above). This is a snippet of what I have: def do_mp_bench(): q = mp.Queue() # Not only thread safe, but "process safe" p1 = mp.Process(target=lambda: q.put(sum(range(10000000)))) p2 = mp.Process(target=lambda: q.put(sum(range(10000000)))) p1.start() p2.start() r1 = q.get() r2 = q.get() return r1 + r2 How may I modify this to process a dictionary of modules and run a do_work() function in each loaded module in a new process?
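
    As a rough sketch of combining the two pieces above (assuming each configured module can be imported by the name stored in the dictionary key and exposes a do_work(dataset) function; the queue-based result collection is just one option):

```python
import importlib
import multiprocessing as mp

def run_worker(module_name, dataset, queue):
    # Import the module named by the string (rather than a literal module called
    # "worker"), run its do_work() in this child process, and report the result back.
    worker = importlib.import_module(module_name)
    queue.put(worker.do_work(dataset))

def do_work(work_info):
    queue = mp.Queue()
    procs = []
    for module_name, dataset in work_info.items():
        p = mp.Process(target=run_worker, args=(module_name, dataset, queue))
        p.start()
        procs.append(p)
    # Drain the queue before joining so a full pipe can't block the children.
    results = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    return results
```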

    Read the article
