Search Results

Search found 1385 results on 56 pages for 'dependent'.

Page 42/56

  • Unit testing with Mocks when SUT is leveraging the Task Parallel Library

    - by StevenH
    I am trying to unit test / verify that a method is being called on a dependency by the system under test. The dependency is IFoo. The dependent class is IBar. IBar is implemented as Bar. Bar will call Start() on IFoo in a new (System.Threading.Tasks.)Task when Start() is called on the Bar instance. Unit Test (Moq): [Test] public void StartBar_ShouldCallStartOnAllFoo_WhenFoosExist() { //ARRANGE //Create a foo, and setup expectation var mockFoo0 = new Mock<IFoo>(); mockFoo0.Setup(foo => foo.Start()); var mockFoo1 = new Mock<IFoo>(); mockFoo1.Setup(foo => foo.Start()); //Add mockobjects to a collection var foos = new List<IFoo> { mockFoo0.Object, mockFoo1.Object }; IBar sutBar = new Bar(foos); //ACT sutBar.Start(); //Should call mockFoo.Start() //ASSERT mockFoo0.VerifyAll(); mockFoo1.VerifyAll(); } Implementation of IBar as Bar: class Bar : IBar { private IEnumerable<IFoo> Foos { get; set; } public Bar(IEnumerable<IFoo> foos) { Foos = foos; } public void Start() { foreach(var foo in Foos) { Task.Factory.StartNew( () => { foo.Start(); }); } } } It appears that the issue is due to the fact that the call to "foo.Start()" is taking place on another thread (/task), and the mock (Moq framework) can't detect it. But I could be wrong. Thanks
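    A possible direction (hedged sketch, not verified against the poster's project): have the mocks signal when Start() is invoked and block the assert until both background tasks have run, then verify explicitly. CountdownEvent is assumed to be available (.NET 4, the same framework that ships the TPL); the NUnit attribute is kept from the original test.

        [Test]
        public void StartBar_ShouldCallStartOnAllFoo_WhenFoosExist()
        {
            // Signalled once per expected Start() call made on a worker task.
            var countdown = new System.Threading.CountdownEvent(2);

            var mockFoo0 = new Mock<IFoo>();
            mockFoo0.Setup(foo => foo.Start()).Callback(() => countdown.Signal());
            var mockFoo1 = new Mock<IFoo>();
            mockFoo1.Setup(foo => foo.Start()).Callback(() => countdown.Signal());

            IBar sutBar = new Bar(new List<IFoo> { mockFoo0.Object, mockFoo1.Object });

            sutBar.Start();

            // Wait (with a timeout) for the tasks spawned by Bar.Start() to reach the mocks.
            Assert.IsTrue(countdown.Wait(TimeSpan.FromSeconds(5)), "Start() was not called on every foo");
            mockFoo0.Verify(foo => foo.Start(), Times.Once());
            mockFoo1.Verify(foo => foo.Start(), Times.Once());
        }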

    Read the article

  • Nested Object Forms not working as expected

    - by Craig Walker
    I'm trying to get a nested model forms view working. As far as I can tell I'm doing everything right, but it still does not work. I'm on Rails 3 beta 3. My models are as expected: class Recipe < ActiveRecord::Base has_many :ingredients, :dependent => :destroy accepts_nested_attributes_for :ingredients attr_accessible :name end class Ingredient < ActiveRecord::Base attr_accessible :name, :sort_order, :amount belongs_to :recipe end I can use Recipe.ingredients_attributes= as expected: recipe = Recipe.new recipe.ingredients_attributes = [ {:name=>"flour", :amount=>"1 cup"}, {:name=>"sugar", :amount=>"2 cups"}] recipe.ingredients.size # -> 2; ingredients contains expected instances However, I cannot create new object graphs using a hash of parameters as shown in the documentation: params = { :name => "test", :ingredients_attributes => [ {:name=>"flour", :amount=>"1 cup"}, {:name=>"sugar", :amount=>"2 cups"}] } recipe = Recipe.new(params) recipe.name # -> "test" recipe.ingredients # -> []; no ingredient instances in the collection Is there something I'm doing wrong here? Or is there a problem in the Rails 3 beta?
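    The symptom is consistent with attr_accessible filtering out the nested attributes during mass assignment, since only :name is whitelisted on Recipe. A minimal sketch of that guess:

        class Recipe < ActiveRecord::Base
          has_many :ingredients, :dependent => :destroy
          accepts_nested_attributes_for :ingredients

          # Whitelisting :name alone silently drops :ingredients_attributes in Recipe.new(params);
          # the nested writer has to be mass-assignable as well.
          attr_accessible :name, :ingredients_attributes
        end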

    Read the article

  • Asynchronous callback - gwt

    - by sprasad12
    Hi, I am using GWT and Postgres for my project. On the front end I have a few widgets whose data I am trying to save to tables at the back end when I click the "save project" button (this also takes the name for the created project). In the asynchronous callback part I am setting more than one table. But it is not sending the data properly. I am getting the following error: org.postgresql.util.PSQLException: ERROR: insert or update on table "entitytype" violates foreign key constraint "entitytype_pname_fkey" Detail: Key (pname)=(Project Name) is not present in table "project". But when I do a select statement on the project table I can see that the project name is present. Here is what the callback part looks like: oksave.addClickHandler(new ClickHandler(){ @Override public void onClick(ClickEvent event) { if(erasync == null) erasync = GWT.create(EntityRelationService.class); AsyncCallback<Void> callback = new AsyncCallback<Void>(){ @Override public void onFailure(Throwable caught) { } @Override public void onSuccess(Void result){ } }; erasync.setProjects(projectname, callback); for(int i = 0; i < boundaryPanel.getWidgetCount(); i++){ top = new Integer(boundaryPanel.getWidget(i).getAbsoluteTop()).toString(); left = new Integer(boundaryPanel.getWidget(i).getAbsoluteLeft()).toString(); if(widgetTitle.startsWith("ATTR")){ type = "regular"; erasync.setEntityAttribute(name1, name, type, top, left, projectname, callback); } else{ erasync.setEntityType(name, top, left, projectname, callback); } } } Question: Is it wrong to set more than one table in the asynchronous callback when all the other tables are dependent on a particular table? When I call setProjects in the above code, isn't it completed first before moving on to the next call? Any input will be greatly appreciated. Thank you.
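    One hedged explanation: GWT RPC calls return immediately, so the loop fires the entitytype inserts before the project row has been committed. A sketch (variable and service names are reused from the snippet above, so they are assumptions about the poster's code) that defers the dependent calls to onSuccess:

        erasync.setProjects(projectname, new AsyncCallback<Void>() {
            @Override
            public void onFailure(Throwable caught) {
                Window.alert("Saving the project failed: " + caught.getMessage());
            }

            @Override
            public void onSuccess(Void result) {
                // The project row now exists, so the foreign key can be satisfied.
                for (int i = 0; i < boundaryPanel.getWidgetCount(); i++) {
                    String top = Integer.toString(boundaryPanel.getWidget(i).getAbsoluteTop());
                    String left = Integer.toString(boundaryPanel.getWidget(i).getAbsoluteLeft());
                    if (widgetTitle.startsWith("ATTR")) {
                        erasync.setEntityAttribute(name1, name, "regular", top, left, projectname, callback);
                    } else {
                        erasync.setEntityType(name, top, left, projectname, callback);
                    }
                }
            }
        });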

    Read the article

  • Problem while executing test case in VS2008 test project

    - by sukumar
    Hi all, I have the following situation: I developed a test project in Visual Studio 2008 to test my target project. I was getting the following exception when I ran a test case on my PC: System.IO.FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E) at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) at System.Reflection.Assembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, StackCrawlMark& stackMark) at System.Reflection.Assembly.LoadFrom(String assemblyFile) at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.GetType(UnitTestElement unitTest, String type) at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.ResolveMethods(). But the same project runs successfully on my colleague's PC. As per my understanding, System.IO.FileNotFoundException occurs when DLLs are missing, so I checked with Dependency Walker to trace the missing DLLs. Dependency Walker traced the following DLLs: 1) MFC90D.dll 2) msvcr90d.dll 3) msvcp90d.dll. I copied these DLLs to C:\windows\system32 from the Microsoft Visual Studio 9.0 directory and ran Dependency Walker again. This time Dependency Walker was able to open the given test project DLL with 0 errors, yet the same exception comes up when I run the test. I am fed up with this. Can anyone tell me why it behaves as PC dependent? Is there anything I am still missing? Any suggestion would be helpful. Thanks in advance, Sukumar

    Read the article

  • Files not waiting for each other

    - by Sunny
    I have two batch files as follows, in which file2.bat is dependent on file1.bat's output: file1.bat @ECHO OFF setlocal enabledelayedexpansion SET "keystring1=" ( FOR /f "delims=" %%a IN ( Source.txt ) DO ( ECHO %%a|FIND "Appprocess.exe" >NUL IF NOT ERRORLEVEL 1 SET keystring1=%%a FOR %%b IN (App1 App2 App3 App4 App5 App6 ) DO ( ECHO %%a|FIND "%%b" >NUL IF NOT ERRORLEVEL 1 IF DEFINED keystring1 CALL ECHO(%%keystring1%% %%b&SET "keystring1=" )))>result.txt GOTO :EOF file2.bat @echo off setlocal enabledelayedexpansion (for /f "tokens=1,2" %%a in (memory.txt) do ( for /f "tokens=5" %%c in ('find " %%a " ^< result.txt ') do echo %%c %%b ))> new.txt file1.bat usually takes 60 seconds to complete its execution. In the master.bat file I am calling the above two files as: call file1.bat call file2.bat but file2.bat is not waiting for file1.bat to complete its execution. I even tried to call file2.bat within file1.bat as below, but it still does not wait for file1.bat to complete: @ECHO OFF setlocal enabledelayedexpansion SET "keystring1=" ( FOR /f "delims=" %%a IN ( Source.txt ) DO ( ECHO %%a|FIND "HsvDataSource.exe" >NUL IF NOT ERRORLEVEL 1 SET keystring1=%%a FOR %%b IN (EUHFMPROD USHFMPROD TL2TEST GSHFMPROD TL2PROD GSARCH1213 TL2FY13) DO ( ECHO %%a|FIND "%%b" >NUL IF NOT ERRORLEVEL 1 IF DEFINED keystring1 CALL ECHO(%%keystring1%% %%b&SET "keystring1=" )))>file2.txt GOTO :EOF call file1.bat I also tried the start option below, but with no effect: start file1.bat /wait call file2.bat I do not understand why this is happening.
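    A hedged observation on that last attempt: START parses its switches before the command, so /WAIT placed after the file name is passed as an argument to file1.bat rather than acting as the wait flag. A minimal sketch of a master.bat that forces the sequencing (the quoted string is only there because START treats the first quoted token as a window title):

        @echo off
        rem Run file1.bat in its own interpreter and block until it exits.
        start "file1" /wait cmd /c file1.bat
        rem Only reached after file1.bat (and its redirection into result.txt) has finished.
        call file2.bat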

    Read the article

  • Exponential regression : p-value and F significance

    - by Saravanan K
    I am new to statistics. I have a set of independent and dependent data (X, Y) for which I would like to do an exponential regression to obtain its p-value and significance F (I have already obtained R2 and the coefficients through mathematical calculation). What are the natural steps from the (X, Y) data to mathematically calculating those values? I spent a week on the internet studying this but was unable to find the right answer. Often exponential data, y=be^(mx), is first converted to linear data, ln y = mx + ln b. Then a linear regression is done on the converted data, obtaining its p-value etc. Assume we use a statistical tool such as Excel's Analysis ToolPak (Data Analysis : Regression); it will produce a regression summary, and I believe the p-value and significance F it reports represent the converted linear data and not the original exponential data. Questions: What approach/steps does Excel use to get the p-value and significance F value for the converted linear data shown in that regression output? It is not clear in their help pages or on their website. Can the p-value and significance F be mathematically calculated for an exponential regression without using a statistical tool? Can you point me to the right link if this has been answered before?
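    A sketch of the standard simple-linear-regression formulas that a tool like Excel applies to the transformed data (ln y_i = m x_i + ln b, n data points, one predictor); this is an assumption based on ordinary least-squares theory, not taken from Excel's documentation:

        % F statistic and slope t statistic for the fit on the transformed data
        F = \frac{R^2 / 1}{(1 - R^2)/(n - 2)}, \qquad
        t_m = \frac{\hat{m}}{\operatorname{SE}(\hat{m})} = \sqrt{F}

        % the reported tail probabilities
        \text{Significance } F = \Pr\left(F_{1,\,n-2} > F\right), \qquad
        p\text{-value}(\hat{m}) = 2\,\Pr\left(t_{n-2} > |t_m|\right)

        % standard error of the slope, from the residuals of the log-linear fit
        \operatorname{SE}(\hat{m}) =
          \sqrt{\frac{\sum_i \left(\ln y_i - \hat{m} x_i - \widehat{\ln b}\right)^2 / (n - 2)}
                     {\sum_i (x_i - \bar{x})^2}}

    Because the fit is done on ln y, these statistics describe the log-linear model, which is consistent with the suspicion above that the reported p-value and significance F refer to the converted linear data rather than the original exponential form.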

    Read the article

  • Compiling and linking libcurl to create a stand alone dll

    - by Haraldo
    Hi, I've managed to compile a DLL with the necessary linked libraries (*.lib) and with CURL_STATICLIB set in the preprocessor section, among other settings. I'm using the "libcurl-7.19.3-win32-ssl-msvc.zip" package and compiling with VS 2008 Express. This is the first version I managed to get compiled properly with no link issues etc. The problem I have now is that my DLL needs libcurl.dll to function, and this is not OK; my DLL needs to be independent. I have no idea how to implement this. I've taken all day just to get what I've got compiled. I've got Runtime Library set to Multi-threaded DLL (debug/release respectively) under C/C++ - Code Generation. I have a number of preprocessor definitions set, CURL_STATICLIB being one of them. Configuration Type is set to Dynamic Library. Use of MFC is set to Use MFC in a Static Library. Additional Library Directories is set to the lib folders (debug/release respectively). I've noticed there is a curllib_static.lib file which I've tried instead of curllib.lib as an additional dependency, but it only compiles with the latter. This is driving me nuts! So I guess I need some guidance as to how to make my DLL completely static so it doesn't have any dependencies. I notice my DLL is currently dependent on: CURLLIB.DLL MSVCR90D.DLL. As I'm pretty new to C++ it could be a setting I'm missing in VS 2008, but I'm not sure. One person said I should be using a static library with *.a files (libcurl.a) etc., but when I do this I get link errors which I haven't been able to resolve. Any guidance here would be much appreciated.

    Read the article

  • Rails preventing duplicates in polymorphic has_many :through associations

    - by seaneshbaugh
    Is there an easy or at least elegant way to prevent duplicate entries in polymorphic has_many through associations? I've got two models, stories and links that can be tagged. I'm making a conscious decision to not use a plugin here. I want to actually understand everything that's going on and not be dependent on someone else's code that I don't fully grasp. To see what my question is getting at, if I run the following in the console (assuming the story and tag objects exist in the database already) s = Story.find_by_id(1) t = Tag.find_by_id(1) s.tags << t s.tags << t My taggings join table will have two entries added to it, each with the same exact data (tag_id = 1, taggable_id = 1, taggable_type = "Story"). That just doesn't seem very proper to me. So in an attempt to prevent this from happening I added the following to my Tagging model: before_validation :validate_uniqueness def validate_uniqueness taggings = Tagging.find(:all, :conditions => { :tag_id => self.tag_id, :taggable_id => self.taggable_id, :taggable_type => self.taggable_type }) if !taggings.empty? return false end return true end And it works almost as intended, but if I attempt to add a duplicate tag to a story or link I get an ActiveRecord::RecordInvalid: Validation failed exception. It seems that when you add an association to a list it calls the save! (rather than save sans !) method which raises exceptions if something goes wrong rather than just returning false. That isn't quite what I want to happen. I suppose I can surround any attempts to add new tags with a try/catch but that goes against the idea that you shouldn't expect your exceptions and this is something I fully expect to happen. Is there a better way of doing this that won't raise exceptions when all I want to do is just silently not save the object to the database because a duplicate exists?
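    One hedged alternative to the hand-rolled callback (a sketch, assuming a conventional Tagging join model): declare the uniqueness on the join model and guard the push, so duplicates are skipped silently and no exception is raised.

        class Tagging < ActiveRecord::Base
          belongs_to :tag
          belongs_to :taggable, :polymorphic => true

          # Rejects a second row with the same (tag, taggable) pair via a normal
          # validation error instead of a custom before_validation hook.
          validates_uniqueness_of :tag_id, :scope => [:taggable_id, :taggable_type]
        end

        # In the calling code, skip the append instead of rescuing RecordInvalid:
        s.tags << t unless s.tags.include?(t)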

    Read the article

  • Valgrind says "stack allocation," I say "heap allocation"

    - by Joel J. Adamson
    Dear Friends, I am trying to trace a segfault with valgrind. I get the following message from valgrind: ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C5: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) ==3683== ==3683== Conditional jump or move depends on uninitialised value(s) ==3683== at 0x4C277C7: sparse_mat_mat_kron (sparse.c:165) ==3683== by 0x4C2706E: rec_mating (rec.c:176) ==3683== by 0x401C1C: age_dep_iterate (age_dep.c:287) ==3683== by 0x4014CB: main (age_dep.c:92) ==3683== Uninitialised value was created by a stack allocation ==3683== at 0x401848: age_dep_init_params (age_dep.c:131) However, here's the offending line: /* allocate mating table */ age_dep_data->mtable = malloc (age_dep_data->geno * sizeof (double *)); if (age_dep_data->mtable == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); for (int j = 0; j < age_dep_data->geno; j++) { 131=> age_dep_data->mtable[j] = calloc (age_dep_data->geno, sizeof (double)); if (age_dep_data->mtable[j] == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); } What gives? I thought any call to malloc or calloc allocated heap space; there is no other variable allocated here, right? Is it possible there's another allocation going on (the offending stack allocation) that I'm not seeing? You asked to see the code, here goes: /* Copyright 2010 Joel J. Adamson <[email protected]> $Id: age_dep.c 1010 2010-04-21 19:19:16Z joel $ age_dep.c:main file Joel J. Adamson -- http://www.unc.edu/~adamsonj Servedio Lab University of North Carolina at Chapel Hill CB #3280, Coker Hall Chapel Hill, NC 27599-3280 This file is part of an investigation of age-dependent sexual selection. This code is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This software is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with haploid. If not, see <http://www.gnu.org/licenses/>. 
*/ #include "age_dep.h" /* global variables */ extern struct argp age_dep_argp; /* global error message variables */ char * nullmsg = "Null pointer: %i"; /* error message for conversions: */ char * errmsg = "Representation error: %s"; /* precision for formatted output: */ const char prec[] = "%-#9.8f "; const size_t age_max = AGEMAX; /* maximum age of males */ static int keep_going_p = 1; int main (int argc, char ** argv) { /* often used counters: */ int i, j; /* read the command line */ struct age_dep_args age_dep_args = { NULL, NULL, NULL }; argp_parse (&age_dep_argp, argc, argv, 0, 0, &age_dep_args); /* set the parameters here: */ /* initialize an age_dep_params structure, set the members */ age_dep_params_t * params = malloc (sizeof (age_dep_params_t)); if (params == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); age_dep_init_params (params, &age_dep_args); /* initialize frequencies: this initializes a list of pointers to initial frqeuencies, terminated by a NULL pointer*/ params->freqs = age_dep_init (&age_dep_args); params->by = 0.0; /* what range of parameters do we want, and with what stepsize? */ /* we should go from 0 to half-of-theta with a step size of about 0.01 */ double from = 0.0; double to = params->theta / 2.0; double stepsz = 0.01; /* did you think I would spell the whole word? */ unsigned int numparts = floor(to / stepsz); do { #pragma omp parallel for private(i) firstprivate(params) \ shared(stepsz, numparts) for (i = 0; i < numparts; i++) { params->by = i * stepsz; int tries = 0; while (keep_going_p) { /* each time through, modify mfreqs and mating table, then go again */ keep_going_p = age_dep_iterate (params, ++tries); if (keep_going_p == ERANGE) error (ERANGE, ERANGE, "Failure to converge\n"); } fprintf (stdout, "%i iterations\n", tries); } /* for i < numparts */ params->freqs = params->freqs->next; } while (params->freqs->next != NULL); return 0; } inline double age_dep_pmate (double age_dep_t, unsigned int genot, double bp, double ba) { /* the probability of mating between these phenotypes */ /* the female preference depends on whether the female has the preference allele, the strength of preference (parameter bp) and the male phenotype (age_dep_t); if the female lacks the preference allele, then this will return 0, which is not quite accurate; it should return 1 */ return bits_isset (genot, CLOCI)? 1.0 - exp (-bp * age_dep_t) + ba: 1.0; } inline double age_dep_trait (int age, unsigned int genot, double by) { /* return the male trait, a function of the trait locus, age, the age-dependent scaling parameter (bx) and the males condition genotype */ double C; double T; /* get the male's condition genotype */ C = (double) bits_popcount (bits_extract (0, CLOCI, genot)); /* get his trait genotype */ T = bits_isset (genot, CLOCI + 1)? 1.0: 0.0; /* return the trait value */ return T * by * exp (age * C); } int age_dep_iterate (age_dep_params_t * data, unsigned int tries) { /* main driver routine */ /* number of bytes for female frequencies */ size_t geno = data->age_dep_data->geno; size_t genosize = geno * sizeof (double); /* female frequencies are equal to male frequencies at birth (before selection) */ double ffreqs[geno]; if (ffreqs == NULL) error (ENOMEM, ENOMEM, nullmsg, __LINE__); /* do not set! 
Use memcpy (we need to alter male frequencies (selection) without altering female frequencies) */ memmove (ffreqs, data->freqs->freqs[0], genosize); /* for (int i = 0; i < geno; i++) */ /* ffreqs[i] = data->freqs->freqs[0][i]; */ #ifdef PRMTABLE age_dep_pr_mfreqs (data); #endif /* PRMTABLE */ /* natural selection: */ age_dep_ns (data); /* normalized mating table with new frequencies */ age_dep_norm_mtable (ffreqs, data); #ifdef PRMTABLE age_dep_pr_mtable (data); #endif /* PRMTABLE */ double * newfreqs; /* mutate here */ /* i.e. get the new frequency of 0-year-olds using recombination; */ newfreqs = rec_mating (data->age_dep_data); /* return block */ { if (sim_stop_ck (data->freqs->freqs[0], newfreqs, GENO, TOL) == 0) { /* if we have converged, stop the iterations and handle the data */ age_dep_sim_out (data, stdout); return 0; } else if (tries > MAXTRIES) return ERANGE; else { /* advance generations */ for (int j = age_max - 1; j < 0; j--) memmove (data->freqs->freqs[j], data->freqs->freqs[j-1], genosize); /* advance the first age-class */ memmove (data->freqs->freqs[0], newfreqs, genosize); return 1; } } } void age_dep_ns (age_dep_params_t * data) { /* calculate the new frequency of genotypes given additive fitness and selection coefficient s */ size_t geno = data->age_dep_data->geno; double w[geno]; double wbar, dtheta, ttheta, dcond, tcond; double t, cond; /* fitness parameters */ double mu, nu; mu = data->wparams[0]; nu = data->wparams[1]; /* calculate fitness */ for (int j = 0; j < age_max; j++) { int i; for (i = 0; i < geno; i++) { /* calculate male trait: */ t = age_dep_trait(j, i, data->by); /* calculate condition: */ cond = (double) bits_popcount (bits_extract(0, CLOCI, i)); /* trait-based fitness term */ dtheta = data->theta - t; ttheta = (dtheta * dtheta) / (2.0 * nu * nu); /* condition-based fitness term */ dcond = CLOCI - cond; tcond = (dcond * dcond) / (2.0 * mu * mu); /* calculate male fitness */ w[i] = 1 + exp(-tcond) - exp(-ttheta); } /* calculate mean fitness */ /* as long as we calculate wbar before altering any values of freqs[], we're safe */ wbar = gen_mean (data->freqs->freqs[j], w, geno); for (i = 0; i < geno; i++) data->freqs->freqs[j][i] = (data->freqs->freqs[j][i] * w[i]) / wbar; } } void age_dep_norm_mtable (double * ffreqs, age_dep_params_t * params) { /* this function produces a single mating table that forms the input for recombination () */ /* i is female genotype; j is male genotype; k is male age */ int i,j,k; double norm_denom; double trait; size_t geno = params->age_dep_data->geno; for (i = 0; i < geno; i++) { double norm_mtable[geno]; /* initialize the denominator: */ norm_denom = 0.0; /* find the probability of mating and add it to the denominator */ for (j = 0; j < geno; j++) { /* initialize entry: */ norm_mtable[j] = 0.0; for (k = 0; k < age_max; k++) { trait = age_dep_trait (k, j, params->by); norm_mtable[j] += age_dep_pmate (trait, i, params->bp, params->ba) * (params->freqs->freqs)[k][j]; } norm_denom += norm_mtable[j]; } /* now calculate entry (i,j) */ for (j = 0; j < geno; j++) params->age_dep_data->mtable[i][j] = (ffreqs[i] * norm_mtable[j]) / norm_denom; } } My current suspicion is the array newfreqs: I can't memmove, memcpy or assign a stack variable then hope it will persist, can I? rec_mating() returns double *.

    Read the article

  • Zend Framework Relationships - findDependentRowset

    - by Morten Nielsen
    Hello, when I call the method findDependentRowset, the returned rowset contains all the rows in the dependent table, not only the rows that match the reference. I am hoping someone could explain this, since I was under the assumption that findDependentRowset would only return rows matching my 'rule'. I have the following DbTable models: class Model_DbTable_Advertisement extends Zend_Db_Table_Abstract { protected $_name = 'Advertisements'; protected $_primary = 'Id'; protected $_dependentTables = array ( 'Model_DbTable_Image', ); } class Model_DbTable_Image extends Zend_Db_Table_Abstract { protected $_name = 'Images'; protected $_primary = 'Id'; protected $_referenceMap = array( 'Images' => array( 'column' => 'AdvertisementId', 'refColumn' => 'Id', 'refTableClass' => 'Model_DbTable_Advertisement', ) ); } Now when I execute the following (simplified for the question's sake): $model = new Model_DbTable_Advertisement(); $rowSet = $model->fetchAll(); $row = $rowSet->current(); $dRow = $row->findDependentRowset('Model_DbTable_Image'); I would expect $dRow to only contain 'Images' that have the same AdvertisementId as $row, but instead I receive all rows in the Images table. Any help appreciated. Kind regards, Morten
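    A hedged thing to try (based on the general Zend_Db_Table API shape, not verified against this schema): findDependentRowset() accepts the rule key as a second argument, so naming the 'Images' rule explicitly rules out the wrong reference being resolved.

        // Ask for the dependent rowset through the specific rule defined in
        // Model_DbTable_Image::$_referenceMap.
        $dRow = $row->findDependentRowset('Model_DbTable_Image', 'Images');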

    Read the article

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF, ASP.NET/C# for server and SQL Server for storage. The data model requires some flexibility per user (ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely. Problem I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF (and similarly allow the client to bind to the resulting data - DataGrid is a key UI element)? Admission: I don't yet know enough about WCF to understand if it supports ExpandoObject directly, but I suspect it will. Options I started off looking at WCF RIA services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeal. The options I'm considering include: Using WCF RIA services and pass the data over the network directly in EAV form (i.e. Dictionary), and handle the binding issue purely on the client side (like this) Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and transform to/from EAV form on the server (spam preventer stopped me from posting a URL here, I'll try it in a comment). Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations on that approach might be yet. I'm interested in comments on these approaches as well as alternative approaches that might solve the problem. Other Notes The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).
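    For option #1, a minimal sketch of the kind of loosely typed contract that would travel over WCF; the type and member names are invented for illustration, not part of the described system:

        using System;
        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract]
        public class EntityDto
        {
            [DataMember]
            public Guid Id { get; set; }

            [DataMember]
            public string EntityType { get; set; }

            // The EAV attributes as plain name/value pairs; the Silverlight client
            // rebuilds bindable wrapper objects (or an indexer-based view model) from this.
            [DataMember]
            public Dictionary<string, string> Values { get; set; }
        }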

    Read the article

  • How to recursive rake? -- or suitable alternatives

    - by TerryP
    I want my project's top-level Rakefile to build things using Rakefiles deeper in the tree; i.e. the top-level Rakefile says how to build the project (big picture) and the lower-level ones build a specific module (local picture). There is of course a shared set of configuration for the minute details of doing that whenever it can be shared between tasks: so it is mostly about keeping the descriptions of what needs building as close as possible to the sources being built. E.g. /Source/Module/code.foo and company should be built using the instructions in /Source/Module/Rakefile, and /Rakefile understands the dependencies between modules. I don't care if it uses multiple rake processes (à la recursive make) or just creates separate build environments. Either way it should be self-contained enough to be processed by a queue, so that non-dependent modules could be built simultaneously. The problem is, how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all Rakefiles have a :default.) Obviously one could just execute 'rake' in a subdirectory for a :modulename task, but that would ditch the options given to the top level; e.g. think of $(MAKE) and $(MAKEFLAGS). Anyone have a clue on how to properly do something like a recursive rake?
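    A hedged sketch of one way to fan out from the top-level Rakefile to per-module Rakefiles (the module paths here are made up); it does not forward top-level command-line options, which would still need to be passed explicitly, e.g. through environment variables:

        # Top-level Rakefile
        MODULES = %w[Source/ModuleA Source/ModuleB]   # hypothetical module directories

        MODULES.each do |dir|
          name = File.basename(dir).downcase.to_sym
          desc "Build #{dir} with its own Rakefile"
          task name do
            # Each module builds in its own rake process, like $(MAKE) -C dir.
            sh "cd #{dir} && rake default"
          end
        end

        # multitask runs its prerequisites in parallel threads, which suits
        # non-dependent modules; use `task` instead if order matters.
        multitask :default => MODULES.map { |dir| File.basename(dir).downcase.to_sym }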

    Read the article

  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi, This is a biggie. I have a well-structured yet monolithic code base that has a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it presents when I go to deploy on application servers that may have different, conflicting versions of my library. I'm dependent on around 30 jars right now and am midway through BNDing them up. Now, some of my modules are easy to declare the versioned dependencies of, such as my networking components. They statically reference classes within the JRE and other BNDed libraries, but my JDBC-related components instantiate via Class.forName(...) and can use one of any number of drivers. I am breaking everything up into OSGi bundles by service area: my core classes/interfaces, reporting-related components, database access related components (via JDBC), etc. I wish for my code to still be usable without OSGi at all, via a single jar file with all my dependencies (via JARJAR), and also to be modular via the OSGi metadata and granular bundles with dependency information. How do I configure my bundle and my code so that it can dynamically utilize any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)? Is there a run-time method to detect if I am running in an OSGi container that is compatible across containers (Felix/Equinox/etc.)? Do I need to use a different class loading mechanism if I am in an OSGi container? Am I required to import OSGi classes into my project to be able to load an at-bundle-time-unknown JDBC driver via my database module? I also have a second method of obtaining a driver (via JNDI, which is only really applicable when running in an app server); do I need to change my JNDI access code for OSGi-aware app servers?
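    For the "is there a run-time method to detect an OSGi container" part, a hedged sketch that stays container-neutral; it relies only on the standard org.osgi.framework API, so it behaves the same under Felix or Equinox, and outside OSGi the class is simply absent:

        import org.osgi.framework.FrameworkUtil;

        public final class OsgiDetector {
            private OsgiDetector() {}

            /** True when this class was loaded by an OSGi bundle class loader. */
            public static boolean runningInOsgi() {
                try {
                    return FrameworkUtil.getBundle(OsgiDetector.class) != null;
                } catch (NoClassDefFoundError e) {
                    // org.osgi.framework is not on the classpath: plain-jar / JARJAR mode.
                    return false;
                }
            }
        }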

    Read the article

  • rewrite not a member of LiftRules

    - by José Leal
    Hi guys, I was following the http://www.assembla.com/wiki/show/liftweb/URL_Rewriting tutorial for URL rewriting in Lift, but I get this error: error: value rewrite is not a member of object net.liftweb.http.LiftRules. It is really odd, and the documentation says that it exists. I'm using the IDEA IDE, and I've done everything from scratch using the Lift Maven blank archetype. Some more info: [INFO] ------------------------------------------------------------------------ [INFO] Building Joseph3 [INFO] task-segment: [tomcat:run] [INFO] ------------------------------------------------------------------------ [INFO] Preparing tomcat:run [INFO] [resources:resources {execution: default-resources}] [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] Copying 0 resource [INFO] [yuicompressor:compress {execution: default}] [INFO] nb warnings: 0, nb errors: 0 [INFO] artifact org.mortbay.jetty:jetty: checking for updates from scala-tools.org [INFO] artifact org.mortbay.jetty:jetty: checking for updates from central [INFO] [compiler:compile {execution: default-compile}] [INFO] Nothing to compile - all classes are up to date [INFO] [scala:compile {execution: default}] [INFO] Checking for multiple versions of scala [INFO] /home/dpz/Scala/Doit/Joseph3/src/main/scala:-1: info: compiling [INFO] Compiling 2 source files to /home/dpz/Scala/Doit/Joseph3/target/classes at 1274922123910 [ERROR] /home/dpz/Scala/Doit/Joseph3/src/main/scala/bootstrap/liftweb/Boot.scala:16: error: value rewrite is not a member of object net.liftweb.http.LiftRules [INFO] LiftRules.rewrite.prepend(NamedPF("ProductExampleRewrite") { [INFO] ^ [ERROR] one error found [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1(Exit value: 1) [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 19 seconds [INFO] Finished at: Thu May 27 03:02:07 CEST 2010 [INFO] Final Memory: 20M/175M [INFO] ------------------------------------------------------------------------ Process finished with exit code 1

    Read the article

  • Positioning DIV's with Javascript

    - by Andrew P.
    I have two divs that I need to position horizontally dependent on the width of the user's screen. I've styled their vertical position in CSS, and I am trying to position them horizontally using Javascript. My divs: <div id="tl"> blah blah </div> <div id="bl"> blah blah </div> My CSS: #tl { position: absolute; top: -14px; right: 0; } #bl { position: absolute; bottom: 1px; right: 0; } My Javascript: var tl = document.getElementById('tl'); var bl = document.getElementById('bl'); var wide = parseInt(screen.width); var nudge = wide*.86; nudge = nudge+21; tl = tl.style; tl.right = ( parseInt(tl.right) + nudge ); bl = bl.style; bl.right = ( parseInt(bl.right) + nudge ); However... nothing happens. No errors, and definitely no movement from my divs. What am I doing wrong? Can anyone help?
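    A hedged guess at the cause, with a minimal sketch: assignments to style.right need a CSS unit, and parseInt of an empty inline style yields NaN (the right: 0 lives in the stylesheet, not in element.style), so both assignments end up being ignored without an error.

        var tl = document.getElementById('tl');
        var bl = document.getElementById('bl');

        var nudge = screen.width * 0.86 + 21;

        // Fall back to 0 when no inline style is set, and append the unit.
        tl.style.right = (parseInt(tl.style.right || '0', 10) + nudge) + 'px';
        bl.style.right = (parseInt(bl.style.right || '0', 10) + nudge) + 'px';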

    Read the article

  • Form not updating usercontrol

    - by user328259
    From the post "Growing user control not updating"... Using C#, .Net 2.0 in a Windows environment. UserControl1 - draws cells to a bitmap buffer dependent upon NumberOfCells property UserControl2 - panel contains UserControl1 which displays vertical scroll when necessary; also contains NumberOfCells which sets UserControl1's NumberOfCells. Formf1 - contains NumericUpDown controls (just increments) which updates the UserControl2 - suppose to! When I increment the control on the form by say 20, UserControl1 adds the necessary cells, UserControl2 displays the vertical scroll bar accordingly, BUT the form does not 'redraw' to the updated/correct image!! Meaning, after I increment by 20, cells are added, vertical scrool bar added... but the image shown is just everything else expanding. I reset the control to scoll to the very TOP and the scrolling works, but the image is still staic... UNTIL I resize my form, more specifically, when I change it from maximize to window or vice versa!!! What can I do to 'reset/redraw' the correct image???? Thank you in advance. Lawrence

    Read the article

  • Can I give a different id to define Cakephp model associations?

    - by gacrux
    I have a one-to-many association in which a Thing can have many Statuses, defined as below: Status model: class Status extends AppModel { var $name = 'Status'; var $belongsTo = array( 'Thing' => array( 'className' => 'Thing', 'foreignKey' => 'thing_id', ), ); } Thing model: class Thing extends AppModel { var $name = 'Thing'; var $belongsTo = array( // other associations ); var $hasMany = array( 'Status' => array( 'className' => 'Status', 'foreignKey' => 'thing_id', 'dependent' => false, 'order' => 'datetime DESC', 'limit' => '10', ), // other associations ); } This works OK, but I would like Thing to use a different id to connect to Status. E.g. Thing would use 'id' for all of its other associations but use 'thing_status_id' for its Status association. How can I best do this?

    Read the article

  • NHibernate Session DI from StructureMap in components

    - by Corey Coogan
    I know this is somewhat of a dead horse, but I'm not finding a satisfactory answer. First let me say, I am NOT dealing with a web app, otherwise managing the NH Session is quite simple. I have a bunch of enterprise components. Those components have their own service layer that will act on multiple repositories. For example: Claim component (Claim Processing Service, Claim Repository); Billing component (Billing Service, Billing Repository); Policy component (PolicyLockService, Policy Repository). Now I may have a console or Windows application that needs to coordinate an operation that involves each of the services. I want to write the services to be injected (DI) with their required repositories. The repositories should have an ISession, or similar, injected into them so that I can have this operation performed under one ISession/ITransaction. I'm aware of the Unit of Work pattern and the many samples out there, but none of them showed DI. I'm also leery of [ThreadStatic] because this stuff can also be used from WCF, and I have found enough posts describing how to do that. I've read about Business Conversations, but need something simple that each Windows/console app can easily bootstrap, since we have a lot of these apps and some pretty inexperienced developers. So how can I configure StructureMap to inject the same ISession into each of the dependent repositories from an application? Here's a totally contrived and totally made up example without using SM (for clarification only - please don't spend energy criticizing): ConsoleApplication Main { using(ISession session = GetSession()) using(ITransaction trans = session.BeginTransaction()) { var policyRepo = new PolicyRepo(session); var policyService = new PolicyService(policyRepo); var billingRepo = new BillingRepo(session); var billingService = new BillingService(billingRepo); var claimRepo = new ClaimsRepo(session); var claimService = new ClaimService(claimRepo, policyService, billingService); claimService.FileClaim(); trans.Commit(); } }
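    A hedged sketch (StructureMap 2.6-era syntax, not verified against a real configuration): scope ISession to a nested container so that everything resolved for one operation shares the same session, which each console/Windows app can bootstrap in a few lines. NHibernateHelper.BuildSessionFactory() is a placeholder for however the factory is built.

        public class NHibernateRegistry : Registry
        {
            public NHibernateRegistry()
            {
                For<ISessionFactory>().Singleton().Use(NHibernateHelper.BuildSessionFactory());

                // Default (transient) lifecycle: within one nested container the same
                // ISession instance is handed to every repository that asks for one.
                For<ISession>().Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());
            }
        }

        // Per-operation bootstrap in a console app:
        using (var nested = ObjectFactory.Container.GetNestedContainer())
        {
            var session = nested.GetInstance<ISession>();
            using (var trans = session.BeginTransaction())
            {
                var claimService = nested.GetInstance<ClaimService>();
                claimService.FileClaim();
                trans.Commit();
            }
        }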

    Read the article

  • How to sort objects in a many-to-many relationship in ruby on rails?

    - by Kenji Kina
    I've been trying to deal with this problem for a couple of hours now and haven't been able to come up with a clean solution. It seems I'm not too good with Rails... Anyway, I have the following: In code: class Article < ActiveRecord::Base has_many :line_aspects, :dependent => :destroy has_many :aspects, :through => :line_aspects #plus a name field end class LineAspect < ActiveRecord::Base belongs_to :article belongs_to :aspect end class Aspect < ActiveRecord::Base belongs_to :data_type has_many :line_aspects has_many :articles, :through => :line_aspects end Now, what I would like to do is sort these in two steps: first list the Articles by Article.name, and then within each article sort its line aspects by Aspect.name (note: by the aspect, not the middleman). For instance, alphabetically (sorry if the notation is not correct): [{ article => 'Apple', line_aspects => [ {:value => 'red'}, #corresponding to the Attribute with :name => 'color' {:value => 'small'} #corresponding to the Attribute with :name => 'shape' ] },{ article => 'Watermelon', line_aspects => [ {:value => 'green'}, #corresponding to the Attribute with :name => 'color' {:value => 'big'} #corresponding to the Attribute with :name => 'shape' ] }] Again, note that these are ordered by the aspect name (color before shape) instead of the specific values of each line (red before green). (NOTE: My intention is to display these in a table in the view.) I have not found a good way to do this in Rails yet (without resorting to N queries). Can anyone tell me a good way to do it?
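    A hedged sketch (Rails 3 style, assuming a name column on both articles and aspects): eager-load the join rows and their aspects in one query and let SQL apply both sort keys, which avoids the N-query trap.

        # One query via LEFT OUTER JOINs; articles come back ordered by article name,
        # and each article's line_aspects arrive ordered by the aspect's name.
        articles = Article.includes(:line_aspects => :aspect)
                          .order('articles.name ASC, aspects.name ASC')

        # In the view, each table row can then be walked in order:
        articles.each do |article|
          article.line_aspects.each do |line_aspect|
            # line_aspect.aspect.name, line_aspect.value, ...
          end
        end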

    Read the article

  • JQuery Deferred - Adding to the Deferred contract

    - by MgSam
    I'm trying to add another asynchronous call to the contract of an existing Deferred before its state is set to success. Rather than try and explain this in English, see the following pseudo-code: $.when( $.ajax({ url: someUrl, data: data, async: true, success: function (data, textStatus, jqXhr) { console.log('Call 1 done.') jqXhr.pipe( $.ajax({ url: someUrl, data: data, async: true, success: function (data, textStatus, jqXhr) { console.log('Call 2 done.'); }, }) ); }, }), $.ajax({ url: someUrl, data: data, async: true, success: function (data, textStatus, jqXhr) { console.log('Call 3 done.'); }, }) ).then(function(){ console.log('All done!'); }); Basically, Call 2 is dependent on the results of Call 1. I want Call 1 and Call 3 to be executed in parallel. Once all 3 calls are complete, I want the All Done code to execute. My understanding is that Deferred.pipe() is supposed to chain another asynchronous call to the given deferred, but in practice, I always get Call 2 completing after All Done. Does anyone know how to get jQuery's Deferred to do what I want? Hopefully the solution doesn't involve ripping the code apart into chunks any further. Thanks for any help.
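    A hedged reading of the problem plus a sketch: pipe() (and then()) expect a function that returns the next promise; passing the result of $.ajax(...) directly means Call 2 starts immediately and nothing ties its completion into the outer $.when(). Something along these lines keeps Call 1 and Call 2 as one chained promise:

        // Call 1, then Call 2 only after Call 1 resolves; the piped promise
        // resolves when Call 2 does.
        var call1then2 = $.ajax({ url: someUrl, data: data })
            .pipe(function (result1) {
                console.log('Call 1 done.');
                return $.ajax({ url: someUrl, data: result1 })
                    .done(function () { console.log('Call 2 done.'); });
            });

        // Call 3 runs in parallel with the chain above.
        var call3 = $.ajax({ url: someUrl, data: data })
            .done(function () { console.log('Call 3 done.'); });

        $.when(call1then2, call3).then(function () {
            console.log('All done!');
        });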

    Read the article

  • verilog / systemverilog -- What is the behavior of blocking statements across two always blocks?

    - by miles.sherman
    I am wondering about the behavior of the code below. There are two always blocks: one is combinational and calculates the next_state signal; the other is sequential, performs some logic, and determines whether or not to shut down the system. It does this by setting the shutdown_now signal high and then executing state <= next_state. My question is: if the conditions become true such that the shutdown_now signal is set (during clock cycle n) in a blocking manner before the state <= next_state line, will the state during clock cycle n+1 be SHUTDOWN or RUNNING? In other words, does the shutdown_now = 1'b1 line block across both state machines, since the state signal is dependent on it through the next_state determination? enum {IDLE, RUNNING, SHUTDOWN} state, next_state; logic shutdown_now; // State machine (combinational) always_comb begin case (state) IDLE: next_state <= RUNNING; RUNNING: next_state <= shutdown ? SHUTDOWN : RUNNING; SHUTDOWN: next_state <= SHUTDOWN; default: next_state <= SHUTDOWN; endcase end // Sequential Behavior always_ff @ (posedge clk) begin // Some code here if (/* some condition */) begin shutdown_now = 1'b0; end else begin shutdown_now = 1'b1; end state <= next_state; end
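    A hedged sketch of a race-free version of the same structure (some_condition stands in for the unnamed test, and the combinational case uses blocking assignments, the usual idiom): registering shutdown_now with a nonblocking assignment and feeding it into the combinational block means the state seen in cycle n+1 is always computed from the value that was registered in cycle n, regardless of statement order inside the always_ff block.

        // Sequential: register both the flag and the state with nonblocking assignments.
        always_ff @(posedge clk) begin
          shutdown_now <= some_condition ? 1'b1 : 1'b0;  // placeholder condition
          state        <= next_state;
        end

        // Combinational: next_state reads the *registered* flag.
        always_comb begin
          case (state)
            IDLE:     next_state = RUNNING;
            RUNNING:  next_state = shutdown_now ? SHUTDOWN : RUNNING;
            SHUTDOWN: next_state = SHUTDOWN;
            default:  next_state = SHUTDOWN;
          endcase
        end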

    Read the article

  • Fast Lightweight Image Comparison Metric Algorithm

    - by gav
    Hi All, I am developing an application for the Android platform which contains 1000+ image filters that have been 'evolved'. When a user selects a photo I want to present the most relevant filters first. This 'relevance' should be dependent on previous use cases. I have already developed tools that register when a filtered image is saved; this combination of filter and image can be seen as the training data for my system. The issue is that the comparison must occur between selecting an image and the next screen coming up. From a UI point of view I need the whole process to take less than 4 seconds: select an image - obtain a metric to use for similarity - check against use cases - return the 6 closest matches. I figure with 4 seconds I can use animations and progress dialogs to keep the user happy. Due to platform constraints I am fairly limited in the computational expense of the algorithm. I have implemented a technique adapted from various online tutorials for running C code on the G1, so that language is available. Specific constraints: Qualcomm® MSM7201A™ 528 MHz processor; 320 x 480 pixel bitmap in 32-bit ARGB; ~2 seconds of computational time for the native method to get the metric; ~2 seconds to compare the metric of the current image with the training data. This is an academic project so all ideas are welcome; anything you can think of or have heard about would be of interest to me. My ideas: I want to keep the complexity down (O(n*m)?) by using pixel data only rather than a neighbourhood function. I was looking at using the colour histogram/greyscale histogram/texture/entropy of the image, combining them to make the measure. There will be an obvious loss of information, but I need the resultant metric to be substantially smaller than the memory footprint of the image (~0.512 MB). As I said, any ideas to direct my research would be fantastic. Kind regards, Gavin
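    A hedged sketch of the kind of per-image signature this points toward: a coarse 4x4x4 RGB histogram is O(pixels) to build, fits in 64 floats (256 bytes versus the ~0.6 MB bitmap), and two of them compare with a trivial distance in microseconds. The bin count and the distance choice are illustrative, not tuned.

        #include <stdint.h>

        /* Build a normalised 64-bin colour histogram from 32-bit ARGB pixels. */
        void color_histogram(const uint32_t *argb, int n_pixels, float hist[64])
        {
            int counts[64] = {0};
            for (int i = 0; i < n_pixels; i++) {
                uint32_t p = argb[i];
                int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                /* Keep the top two bits of each channel: 4 x 4 x 4 = 64 bins. */
                counts[((r >> 6) << 4) | ((g >> 6) << 2) | (b >> 6)]++;
            }
            for (int i = 0; i < 64; i++)
                hist[i] = (float)counts[i] / (float)n_pixels;  /* image size cancels out */
        }

        /* Squared Euclidean distance between two signatures; smaller = more similar. */
        float histogram_distance(const float a[64], const float b[64])
        {
            float d = 0.0f;
            for (int i = 0; i < 64; i++) {
                float diff = a[i] - b[i];
                d += diff * diff;
            }
            return d;
        }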

    Read the article

  • How to modify/replace option set file when building from command line?

    - by Heinrich Ulbricht
    I build packages from a batch file using commands like: msbuild ..\lib\Package.dproj /target:Build /p:config=%1 The packages' settings are dependent on an option set: <Import Project="..\optionsets\COND_Defined.optset" Condition="'$(Base)'!='' And Exists('..\optionsets\COND_Defined.optset')"/> This option set defines a conditional symbol many of my packages depend on. The file looks like this: <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <PropertyGroup> <DCC_Define>CONDITION;$(DCC_Define)</DCC_Define> </PropertyGroup> <ProjectExtensions> <Borland.Personality>Delphi.Personality.12</Borland.Personality> <Borland.ProjectType>OptionSet</Borland.ProjectType> <BorlandProject> <Delphi.Personality/> </BorlandProject> <ProjectFileVersion>12</ProjectFileVersion> </ProjectExtensions> </Project> Now I need two builds: one with the condition defined and one without. My attack vector would be the option set file. I have some ideas on what to do: write a program which modifies the option set file and run it before the batch build, or fiddle with the project files and modify the option set path to contain an environment variable, then keep different option sets in different locations. But before starting to reinvent the wheel I'd like to ask how you would tackle this task. Maybe there are already means meant to support such a case (like certain command-line switches, things I could configure in Delphi, or batch file magic).

    Read the article

  • fastest SCM tool available for Embedded software development

    - by wrapperm
    Hi All, In my company we are presently using Rational ClearCase as the software configuration management tool for our embedded software development. The software is basically for automobiles, to be specific for engines (I don't think this information really matters). But I find ClearCase to be very slow in performing any of the activities (accessing files, branching and labelling), in addition to which there are various other limitations. We have recently decided to research free and open source, distributed version control systems which could handle our large projects with speed and efficiency. This tool should be a full-fledged repository with complete history and full revision tracking capabilities, not dependent on network access or a central server. Branching and merging should be fast and easy to do. It should have a multisite development facility. With these requirements in mind, we have come up with some of the tools that are presently available in the market: Git, Mercurial, Bazaar, Subversion, CVS, Perforce, and Visual SourceSafe. I need everybody's help in finding an appropriate SCM tool which meets the above requirements. Thanking you in advance, Rahamath.

    Read the article

  • eclipse, one classpath for compiling, another for launching

    - by DragonFax
    Example: for logging, my code uses log4j, but other jars my code is dependent upon use slf4j instead. So both jars must be in the build path. Unfortunately, it's possible for my code to directly use (depend on) slf4j now, either by content assist or some other developer's changes. I would like any use of slf4j to show up as an error, but my application (and tests) will still need it in the classpath when running. Explanation: I'd like to find out if this is possible in Eclipse. This scenario happens often for me. I'll have a large project that uses a lot of 3rd party libraries. And of course those 3rd party jars have their own dependencies as well. So I have to include all dependencies in the classpath ("build path" in Eclipse) for the application and its tests to compile and run (from within Eclipse). But I don't want my code to use all of those jars, just the few direct dependencies I've decided upon myself. So if my code accidentally uses a dependency of a dependency, I want it to show up as a compilation error. Ideally as class not found, but any error would do. I know I can manually configure the classpath when running outside of Eclipse, and even within Eclipse I can modify the classpath for a specific class I'm running (in the run configurations), but that's not manageable if you run a lot of individual test cases or have a lot of main() classes.

    Read the article
