Search Results

Search found 3985 results on 160 pages for 'contexts and dependency injection'.

  • How do I correctly install dulwich to get hg-git working on Windows?

    - by Joshua Flanagan
    I'm trying to use the hg-git Mercurial extension on Windows (Windows 7 64-bit, to be specific). I have Mercurial and Git installed. I have Python 2.5 (32-bit) installed. I followed the instructions on http://hg-git.github.com/ to install the extension. The initial easy_install failed because it was unable to compile dulwich without Visual Studio 2003. I installed dulwich manually by: git clone git://git.samba.org/jelmer/dulwich.git cd dulwich c:\Python25\python setup.py --pure install Now when I run easy_install hg-git, it succeeds (since the dulwich dependency is satisfied). In my C:\Users\username\Mercurial.ini, I have: [extensions] hgext.bookmarks = hggit = When I type 'hg' at a command prompt, I see: "* failed to import extension hggit: No module named hggit" Looking under my c:\Python25 folder, the only reference to hggit I see is Lib\site-packages\hg_git-0.2.1-py2.5.egg. Is this supposed to be extracted somewhere, or should it work as-is? Since that failed, I attempted the "more involved" instructions from the hg-git page that suggested cloning git://github.com/schacon/hg-git.git and referencing the path in my Mercurial configuration. I cloned the repo, and changed my extensions file to look like: [extensions] hgext.bookmarks = hggit = c:\code\hg-git\hggit Now when I run hg, I see: * failed to import extension hggit from c:\code\hg-git\hggit: No module named dulwich.errors. Ok, so that tells me that it is finding hggit now, because I can see in hg-git\hggit\git_handler.py that it calls from dulwich.errors import HangupException That makes me think dulwich is not installed correctly, or not in the path. Update: From Python command line: import dulwich yields Import Error: No module named dulwich However, under C:\Python25\Lib\site-packages, I do have a dulwich-0.5.0-py2.5.egg folder which appears to be populated. This was created by the steps mentioned above. Is there an additional step I need to take to make it part of the Python "path"?
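    One quick diagnostic, offered as a sketch rather than a known fix: check whether the egg folder is simply missing from sys.path by appending it by hand from a Python 2.5 prompt (the path is the egg folder mentioned above; adjust to your install):

      import sys
      sys.path.append(r"C:\Python25\Lib\site-packages\dulwich-0.5.0-py2.5.egg")
      import dulwich          # succeeding here means the egg just wasn't on sys.path
      print dulwich.__file__

    If that works, adding a line with the egg folder to site-packages\easy-install.pth (or a new .pth file in site-packages) makes the change permanent.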

  • Spring 3, Jersey (JSR-311) and Maven dependencies

    - by smeg4brains
    Hi guys! I'm currently struggling to integrate a REST service based on Jersey and Spring. I'm using Spring 3.0.2-RELEASE and jersey-spring 1.2, but jersey-spring adds a dependency on Spring 2.5.6 to my project, which of course conflicts with the 3.0.2-RELEASE and gives me the following error:
      11:58:25,409 ERROR org.springframework.web.context.ContextLoader:215 - Context initialization failed
      org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from class path resource [cloverjazz-web-context.xml]; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.getLocalName(Lorg/w3c/dom/Node;)Ljava/lang/String;
      at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:420)
      at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:342)
      at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:310)
      at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143)
      at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178)
      at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149)
      at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124)
      at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:92)
      at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:123)
      at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:422)
      at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:352)
      at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255)
      at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
      at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
    Is there a way to get around this issue? Does anyone know? Thanks!
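    A common workaround for this kind of clash, sketched here as an assumption (the exact 2.5.6 artifacts that jersey-spring 1.2 pulls in should be checked with mvn dependency:tree), is to exclude the transitive Spring artifacts from jersey-spring and rely on the explicit 3.0.2-RELEASE dependencies instead:

      <dependency>
          <groupId>com.sun.jersey.contribs</groupId>
          <artifactId>jersey-spring</artifactId>
          <version>1.2</version>
          <exclusions>
              <!-- exclude whatever 2.5.6 artifacts dependency:tree reports, for example: -->
              <exclusion>
                  <groupId>org.springframework</groupId>
                  <artifactId>spring-core</artifactId>
              </exclusion>
              <exclusion>
                  <groupId>org.springframework</groupId>
                  <artifactId>spring-web</artifactId>
              </exclusion>
          </exclusions>
      </dependency>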

  • Freezes (not crashes) with GCD, blocks and Core Data

    - by Lukasz
    I have recently rewritten my Core Data driven database controller to use Grand Central Dispatch to manage fetching and importing in the background. Controller can operate on 2 NSManagedContext's: NSManagedObjectContext *mainMoc instance variable for main thread. this contexts is used only by quick access for UI by main thread or by dipatch_get_main_queue() global queue. NSManagedObjectContext *bgMoc for background tasks (importing and fetching data for NSFetchedresultsController for tables). This background tasks are fired ONLY by user defined queue: dispatch_queue_t bgQueue (instance variable in database controller object). Fetching data for tables is done in background to not block user UI when bigger or more complicated predicates are performed. Example fetching code for NSFetchedResultsController in my table view controllers: -(void)fetchData{ dispatch_async([CDdb db].bgQueue, ^{ NSError *error = nil; [[self.fetchedResultsController fetchRequest] setPredicate:self.predicate]; if (self.fetchedResultsController && ![self.fetchedResultsController performFetch:&error]) { NSSLog(@"Unresolved error in fetchData %@", error); } if (!initial_fetch_attampted)initial_fetch_attampted = YES; fetching = NO; dispatch_async(dispatch_get_main_queue(), ^{ [self.table reloadData]; [self.table scrollRectToVisible:CGRectMake(0, 0, 100, 20) animated:YES]; }); }); } // end of fetchData function bgMoc merges with mainMoc on save using NSManagedObjectContextDidSaveNotification: - (void)bgMocDidSave:(NSNotification *)saveNotification { // CDdb - bgMoc didsave - merging changes with main mainMoc dispatch_async(dispatch_get_main_queue(), ^{ [self.mainMoc mergeChangesFromContextDidSaveNotification:saveNotification]; // Extra notification for some other, potentially interested clients [[NSNotificationCenter defaultCenter] postNotificationName:DATABASE_SAVED_WITH_CHANGES object:saveNotification]; }); } - (void)mainMocDidSave:(NSNotification *)saveNotification { // CDdb - main mainMoc didSave - merging changes with bgMoc dispatch_async(self.bgQueue, ^{ [self.bgMoc mergeChangesFromContextDidSaveNotification:saveNotification]; }); } NSfetchedResultsController delegate has only one method implemented (for simplicity): - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller { dispatch_async(dispatch_get_main_queue(), ^{ [self fetchData]; }); } This way I am trying to follow Apple recommendation for Core Data: 1 NSManagedObjectContext per thread. I know this pattern is not completely clean for at last 2 reasons: bgQueue not necessarily fires the same thread after suspension but since it is serial, it should not matter much (there is never 2 threads trying access bgMoc NSManagedObjectContext dedicated to it). Sometimes table view data source methods will ask NSFetchedResultsController for info from bgMoc (since fetch is done on bgQueue) like sections count, fetched objects in section count, etc.... Event with this flaws this approach works pretty well of the 95% of application running time until ... AND HERE GOES MY QUESTION: Sometimes, very randomly application freezes but not crashes. It does not response on any touch and the only way to get it back to live is to restart it completely (switching back to and from background does not help). No exception is thrown and nothing is printed to the console (I have Breakpoints set for all exception in Xcode). 
    I have tried to debug it using Instruments (the Time Profiler especially) to see if there is something heavy going on on the main thread, but nothing shows up. I am aware that GCD and Core Data are the main suspects here, but I have no idea how to track this down or debug it. Let me point out that this also happens when I dispatch all the tasks to the queues asynchronously (using dispatch_async everywhere), which makes me think it is not just a standard deadlock. Are there any hints on how I could get more information about what is going on? Some extra debug flags, Instruments tricks, build settings, etc.? Any suggestions on what could be the cause are very much appreciated, as well as (or) pointers on how to implement background fetching for NSFetchedResultsController and background importing in a better way.

  • Why doesn't every class in the .Net framework have a corresponding interface?

    - by Thorsten Lorenz
    Since I started to develop in a test/behavior driven style, I appreciated the ability to mock out every dependency. Since mocking frameworks like Moq work best when told to mock an interface, I now implement an interface for almost every class I create b/c most likely I will have to mock it out in a test eventually. Well, and programming to an interface is good practice, anyways. At times, my classes take dependencies on .Net classes (e.g. FileSystemWatcher, DispatcherTimer). It would be great in that case to have an interface, so I could depend on an IDispatcherTimer instead, to be able to pass it a mock and simulate its behavior to see if my system under test reacts correctly. Unfortunately both of above mentioned classes do not implement such interfaces, so I have to resort to creating adapters, that do nothing else but inherit from the original class and conform to an interface, that I then can use. Here is such an adapter for the DispatcherTimer and the corresponding interface: using System; using System.Windows.Threading; public interface IDispatcherTimer { #region Events event EventHandler Tick; #endregion #region Properties Dispatcher Dispatcher { get; } TimeSpan Interval { get; set; } bool IsEnabled { get; set; } object Tag { get; set; } #endregion #region Public Methods void Start(); void Stop(); #endregion } /// <summary> /// Adapts the DispatcherTimer class to implement the <see cref="IDispatcherTimer"/> interface. /// </summary> public class DispatcherTimerAdapter : DispatcherTimer, IDispatcherTimer { } Although this is not the end of the world, I wonder, why the .Net developers didn't take the minute to make their classes implement these interfaces from the get go. It puzzles me especially since now there is a big push for good practices from inside Microsoft. Does anyone have any (maybe inside) information why this contradiction exists?

  • MVC implementation/best-practices question

    - by Vivin Paliath
    I have to work with some code that isn't truly MVC (i.e., it doesn't use an explicit framework among other things). Right now we make do with servlets that pass data to services. Here is my problem. I am receiving a post to a servlet that contains a whole bunch of address data that I have to save to the database. The data is (obviously) in the HttpRequest object. My question is, how do I pass this data into a service? I am reluctant to do it like this: AddressService.saveAddress(request); Because I don't think the service should have a dependency on the request. My other option is to do something like this: String addressLine = request.getParameter("addressLine"); .. .. about 7 other parameters .. String zip = request.getParameter("zip"); AddressService.saveAddress(addressLine, ... 7 other parameters ..., zip); But I don't like having a function with a huge number of parameters either. I was thinking of making an intermediate object called AddressData that would hold data from the request, and then passing that into the service. Is that an acceptable way of doing things?
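    The intermediate object is a common and reasonable pattern; a rough sketch (everything beyond the names already in the question is made up for illustration) keeps HttpServletRequest in the servlet and hands the service a plain object:

      // Plain data holder, no servlet dependency.
      public class AddressData {
          private String addressLine;
          private String zip;                       // ... plus the other seven fields ...
          public String getAddressLine()            { return addressLine; }
          public void setAddressLine(String value)  { this.addressLine = value; }
          public String getZip()                    { return zip; }
          public void setZip(String value)          { this.zip = value; }
      }

      // In the servlet, the only place that sees the request:
      AddressData data = new AddressData();
      data.setAddressLine(request.getParameter("addressLine"));
      data.setZip(request.getParameter("zip"));
      AddressService.saveAddress(data);             // the service signature takes the one object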

  • Rails deployment strategies with Bundler and JRuby

    - by brad
    I have a jruby rails app and I've just started using bundler for gem dependency management. I'm interested in hearing peoples' opinions on deployment strategies. The docs say that bundle package will package your gems locally so you don't have to fetch them on the server (and I believe warbler does this by default), but I personally think (for us) this is not the way to go as our deployed code (in our case a WAR file) becomes much larger. My preference would be to mimic our MVN setup which fetches all dependencies directly on the server AFTER the code has been copied there. Here's what I'm thinking, all comments are appreciated: Step1: Build war file, copy to server Step2: Unpack war on server, fetch java dependencies with mvn Step3: use Bundler to fetch Gem deps (Where should these be placed??) * Step 3 is the step I'm a bit unclear on. Do I run bundle install with a particular target in mind?? Step4: Restart Tomcat Again my reasoning here is that I'd like to keep the dependencies separate from the code at deploy time. I'd also like to place all gem dependencies in the app itself so they are contained, rather than installing them in the app user's home directory (as, again, I believe is the default for Bundler)
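    For step 3, one hedged option (assuming a Bundler version that supports these flags; nothing here is project-specific) is to install the gems into a directory inside the unpacked app, which keeps them with the code rather than in the app user's home directory:

      # run on the server, inside the unpacked app, after the mvn step
      jruby -S bundle install --deployment --path vendor/bundle

    The --deployment flag expects a checked-in Gemfile.lock and refuses to modify it, which fits the fetch-on-the-server-after-copy model described above.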

  • Passenger: "Missing these required gems redgreen"

    - by Michael Stum
    Hello, total Ruby newbie here, trying to set up a Rails/MongoDB application on Mac OS X Snow Leopard. I installed Ruby 1.9.1 and RubyGems 1.3.7; which ruby and which gem point to the same directory. I'm using the Snow Leopard built-in Apache and Passenger 2.2.11, and the Rails template from the mongo site, which seems to work okay overall. The exact error that Passenger gives me is:
      /Users/User/Sites/feuerapp/vendor/rails/railties/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
      **Notice: C extension not loaded. This is required for optimum MongoDB Ruby driver performance. You can install the extension as follows: gem install bson_ext
      If you continue to receive this message after installing, make sure that the bson_ext gem is in your load path and that the bson_ext and mongo gems are of the same version.
      Missing these required gems: redgreen
      You're running: ruby 1.9.1.376 at /usr/local/bin/ruby rubygems 1.3.7 at /Users/User/.gem/ruby/1.9.1, /usr/local/lib/ruby/gems/1.9.1
      Run "rake gems:install" to install the missing gems.
    The weird thing is that redgreen is installed and looks fine to me:
      Dahlia:feuerapp User$ ls -la vendor/gems/
      total 0
      drwxr-xr-x 7 User staff 238 May 18 22:56 .
      drwxr-xr-x 5 User staff 170 May 18 23:00 ..
      drwxr-xr-x 11 User staff 374 May 18 22:56 factory_girl-1.2.4
      drwxr-xr-x 11 User staff 374 May 18 22:56 mocha-0.9.8
      drwxr-xr-x 7 User staff 238 May 18 22:56 mongo_mapper-0.7.6
      drwxr-xr-x 7 User staff 238 May 18 22:56 redgreen-1.2.2
      drwxr-xr-x 11 User staff 374 May 18 22:56 shoulda-2.10.3
    Commenting out this line in environment.rb "solves" the issue, but that's not really what I want: config.gem 'redgreen'
    I don't understand much about gems yet, but from my limited understanding redgreen should be there and should be found?
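    One possible explanation, offered purely as an assumption: Passenger boots the app in an environment where a top-level config.gem 'redgreen' in environment.rb must resolve, even though redgreen is only needed for tests. The usual arrangement is to declare test-only gems in the test environment file instead:

      # config/environments/test.rb   (hypothetical placement, not from the post)
      config.gem 'redgreen'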

  • Django Class Views and Reverse Urls

    - by kalhartt
    I have a good many class based views that use reverse(name, args) to find urls and pass this to templates. However, the problem is class based views must be instantiated before urlpatterns can be defined. This means the class is instantiated while urlpatterns is empty leading to reverse throwing errors. I've been working around this by passing lambda: reverse(name, args) to my templates but surely there is a better solution. As a simple example the following fails with exception: ImproperlyConfigured at xxxx The included urlconf mysite.urls doesn't have any patterns in it mysite.urls from mysite.views import MyClassView urlpatterns = patterns('', url(r'^$' MyClassView.as_view(), name='home') ) views.py class MyClassView(View): def get(self, request): home_url = reverse('home') return render_to_response('home.html', {'home_url':home_url}, context_instance=RequestContext(request)) home.html <p><a href={{ home_url }}>Home</a></p> I'm currently working around the problem by forcing reverse to run on template rendering by changing views.py to class MyClassView(View): def get(self, request): home_url = lambda: reverse('home') return render_to_response('home.html', {'home_url':home_url}, context_instance=RequestContext(request)) and it works, but this is really ugly and surely there is a better way. So is there a way to use reverse in class based views but avoid the cyclic dependency of urlpatterns requiring view requiring reverse requiring urlpatterns...
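    If the Django version in use ships django.core.urlresolvers.reverse_lazy (it appeared in later releases than this question may target, so treat this as a hedged alternative), it does exactly what the lambda is simulating, deferring resolution until the value is actually used:

      from django.core.urlresolvers import reverse_lazy

      class MyClassView(View):
          def get(self, request):
              home_url = reverse_lazy('home')   # resolved only when the template renders it
              return render_to_response('home.html', {'home_url': home_url},
                                        context_instance=RequestContext(request))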

  • No Commons Logging in Android?

    - by Joe Boese
    Hello all, I have a pretty big library I developed specifically for use in my Android application. However, the business logic itself has no dependency on Android. To preserve that, I used Commons Logging throughout this library and its respective JUnit tests (which I run in Eclipse). However, now that I am starting to integrate it into an Activity which I launch on Android, I am unable to get my logging to work. In Eclipse/JUnit, I had simply pulled in log4j's jar file as well as a log4j.properties file. This doesn't seem to work when deploying to a device. After struggling with attempting to get that to work for several hours, I gave up and tried replacing all my commons logging stuff with android.util.Log. Now I can log on the device.. but all JUnit tests are broken. When any JUnit test tries to log using android.util.Log, it throws a RuntimeException 'Stub!'. I would prefer to revert to my commons logging approach.. if anyone can help with that.. otherwise.. what can I do to get my JUnit test cases running using 'android.util.Log'? Many thanks in advance.. I've spent more than a few hours on this and I'd like to move on to writing real code again! Joe
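    One way to keep the library free of both commons-logging and android.util.Log (a sketch of the usual adapter approach, not something from the post) is to define a tiny logger interface in the library and inject a platform-specific implementation:

      // In the library: no Android or commons-logging dependency.
      public interface AppLog {
          void d(String tag, String msg);
      }

      // In the Android app: backed by android.util.Log.
      public class AndroidLog implements AppLog {
          public void d(String tag, String msg) { android.util.Log.d(tag, msg); }
      }

      // In the JUnit/Eclipse tests: plain console output, so no 'Stub!' exceptions.
      public class ConsoleLog implements AppLog {
          public void d(String tag, String msg) { System.out.println(tag + ": " + msg); }
      }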

  • JSON: Jackson stream parser - is it really worth it?

    - by synic
    I'm making pretty heavy use of JSON parsing in an app I'm writing. Most of what I have done is already implemented using Android's built in JSONObject library (is it json-lib?). JSONObject appears to create instances of absolutely everything in the JSON string... even if I don't end up using all of them. My app currently runs pretty well, even on a G1. My question is this: are the speed and memory benefits from using a stream parser like Jackson worth all the trouble? By trouble, I mean this: As far as I can tell, there are three downsides to using Jackson instead of the built in library: Dependency on an external library. This makes your .apk bigger in the end. Not a huge deal. Your app is more fragile. Since the parsing is not done automatically, it is more vulnerable to changes in the JSON text that it's parsing. I'm extremely worried that malformed JSON will result in infinite loops (as pull parsing requires a lot of while loops). Writing code to parse JSON via a stream parser is ugly and tedious.
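    For comparison, a minimal Jackson streaming loop looks roughly like this (field names are hypothetical); note the explicit end-of-input check, which is what guards against the infinite-loop worry mentioned above:

      // imports: org.codehaus.jackson.JsonFactory, JsonParser, JsonToken
      JsonFactory factory = new JsonFactory();
      JsonParser parser = factory.createJsonParser(jsonText);   // jsonText: the JSON string
      String name = null;
      while (parser.nextToken() != null) {                       // null means end of input
          if ("name".equals(parser.getCurrentName())
                  && parser.getCurrentToken() == JsonToken.VALUE_STRING) {
              name = parser.getText();
          }
      }
      parser.close();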

  • testing devise with shoulda and machinist

    - by mattherick
    Hello! I'd like to test my app with shoulda and machinist. I use the devise authentication gem. I get the following error:
      $ ruby unit/page_test.rb
      c:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
      c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:443:in `load_missing_constant': uninitialized constant Admins (NameError)
      from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in `const_missing'
      from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:92:in `const_missing'
      from c:/Users/Mattherick/Desktop/heimspiel/heimspiel_app/app/controllers/admins_controller.rb:1
      from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
      from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
      from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
      from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
      from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:224:in `depend_on'
      ... 12 levels...
      from ./unit/../test_helper.rb:2
      from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
      from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
      from unit/page_test.rb:1
    Does somebody have an idea what's wrong? If I don't use devise my tests are okay. And my second question: does somebody have a good tutorial for adding different roles with the devise gem? If I generate my own views and add a few attributes to my devise model, they won't be saved in the database. I read the docs on GitHub, but didn't really get it. mattherick

  • Eclipse PDE - Plug-in, Feature, and Product Versioning

    - by Michael
    I am having much confusion over the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large eclipse workspace. I have made API changes to java code residing in an existing plug-in and thus requires an increase of the Major part of the version identifier. This plug-in serves as a dependency to a given feature, where the feature is later included in a product. From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper number should be increased on the containing plug-in itself. However, how would this Major version number change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)? For example, assume we have the typical "Hello World" setup as follows: Plug-in: com.example.helloworld, version 1.0.0 Feature: com.example.helloworld.feature, version 1.0.0 Product: com.example.helloworld.product, version 1.0.0 If I were to make an API change in the plug-in, this would require a version update to be that of 2.0.0. What would then be the version of the feature, 1.1.0? The same question can be applied for the product level as well (e.g., if the feature is 1.1.0 OR 2.0.0, what is the product version number)? I'm sure this is quite the newbie question so I apologize for wasting anyone's time and effort. I have searched for this type of content but all I am finding is are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been developing feature patches and have not touched on the versioning aspect as much as I would prefer. I am having difficulty coming into (for the first time) an Eclipse RCP / PDE environment and need to learn the proper way and / or best practices for making such versioning updates and how to best reflect this throughout other dependent projects in the workspace.

  • Ruby. Mongoid. Relations

    - by Scepion1d
    I've encountered some problems with MongoID. I have three models: require 'mongoid' class Configuration include Mongoid::Document belongs_to :user field :links, :type => Array field :root, :type => String field :objects, :type => Array field :categories, :type => Array has_many :entries end class TimeDim include Mongoid::Document field :day, :type => Integer field :month, :type => Integer field :year, :type => Integer field :day_of_week, :type => Integer field :minute, :type => Integer field :hour, :type => Integer has_many :entries end class Entry include Mongoid::Document belongs_to :configuration belongs_to :time_dim field :category, :type => String # any other dynamic fields end Creating documents for Configurations and TimeDims is successful. But when i've trying to execute following code: params = Hash.new params[:configuration] = config # an instance of Configuration from DB entry.each do |key, value| params[key.to_sym] = value # String end unless Entry.exists?(conditions: params) params[:time_dim] = self.generate_time_dim # an instance of TimeDim from DB params[:category] = self.detect_category(descr) # String Entry.new(params).save end ... i saw following output: /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/bson-1.6.1/lib/bson/bson_c.rb:24:in `serialize': Cannot serialize an object of class Configuration into BSON. (BSON::InvalidDocument) from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/bson-1.6.1/lib/bson/bson_c.rb:24:in `serialize' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:604:in `construct_query_message' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:465:in `send_initial_query' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:458:in `refresh' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:128:in `next' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/db.rb:509:in `command' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongo-1.6.1/lib/mongo/cursor.rb:191:in `count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/cursor.rb:42:in `block in count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/collections/retry.rb:29:in `retry_on_connection_failure' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/cursor.rb:41:in `count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/contexts/mongo.rb:93:in `count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/criteria.rb:45:in `count' from /home/scepion1d/Workspace/RubyMine/dana-x/.bundle/ruby/1.9.1/gems/mongoid-2.4.6/lib/mongoid/finders.rb:60:in `exists?' 
    from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:110:in `block (2 levels) in push_entries_to_db' from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:103:in `each' from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:103:in `block in push_entries_to_db' from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:102:in `each' from /home/scepion1d/Workspace/RubyMine/dana-x/crawler/crawler.rb:102:in `push_entries_to_db' from main_starter.rb:15:in `<main>' Can anyone tell me what I am doing wrong?
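    A hedged reading of the error, not confirmed by the post: the conditions hash handed to Entry.exists? contains the whole Configuration document, which the driver cannot serialize into a query. Matching on the foreign key instead sidesteps that:

      # sketch: query by the foreign key, keep the relation only for the create
      params[:configuration_id] = config.id
      unless Entry.exists?(conditions: params)
        params.delete(:configuration_id)
        params[:configuration] = config
        params[:time_dim] = self.generate_time_dim
        params[:category] = self.detect_category(descr)
        Entry.new(params).save
      end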

  • Add jar to apache felix in Pom file?

    - by drozzy
    How can I add a jar to my bundle in Apache Felix? I am using maven, with maven-bundle-plugin to manage my bundles in OBR for me. But I am not sure where to declare the dependency inside my POM on the jar, so that maven correctly compiles it into the final bundle. This is how my plugin looks in pom: <plugin> <groupId>org.apache.felix</groupId> <artifactId>maven-bundle-plugin</artifactId> <version>2.1.0</version> <extensions>true</extensions> <configuration> <instructions> <Bundle-Category>sample</Bundle-Category> <Bundle-SymbolicName>${artifactId} </Bundle-SymbolicName> <Export-Package> //blahblah </Export-Package> </instructions> <!-- OBR --> <remoteOBR>repo-rel</remoteOBR> <prefixUrl>file:///C:/Users/blah/Projects/Eclipse3.6-RCP-64/Felix/obr-repo/releases</prefixUrl> <ignoreLock>true</ignoreLock> </configuration>
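    If the goal is to have the jar packed inside the bundle itself, the maven-bundle-plugin's Embed-Dependency instruction is the usual route; a hedged sketch (the artifactId below is a placeholder for the real dependency, which must also be declared as a normal <dependency> in the POM):

      <instructions>
          <!-- existing instructions (Bundle-SymbolicName, Export-Package, ...) stay as they are -->
          <Embed-Dependency>some-library;scope=compile|runtime;inline=false</Embed-Dependency>
          <Embed-Transitive>true</Embed-Transitive>
      </instructions>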

  • Will a change made in the converter notify the bound property?

    - by Kishore Kumar
    I have two properties, FirstName and LastName, bound to a TextBlock using a MultiBinding and a converter to display the full name as FirstName + LastName. FirstName="Kishore" LastName="Kumar". In the converter I change the LastName to "Changed Text": values[1] = "Changed Text"; After executing the converter my TextBlock shows "Kishore Changed Text", but the dependency property LastName still has the old value "Kumar". Why am I not getting the "Changed Text" value in the LastName property after the execution? Will the change made in the converter notify the bound property? <Window.Resources> <local:NameConverter x:Key="NameConverter"></local:NameConverter> </Window.Resources> <Grid> <TextBlock> <TextBlock.Text> <MultiBinding Converter="{StaticResource NameConverter}"> <Binding Path="FirstName"></Binding> <Binding Path="LastName"></Binding> </MultiBinding> </TextBlock.Text> </TextBlock> </Grid> Converter: public class NameConverter:IMultiValueConverter { #region IMultiValueConverter Members public object Convert(object[] values, Type targetType, object parameter, System.Globalization.CultureInfo culture) { values[1] = "Changed Text"; return values[0].ToString() + " " + values[1].ToString(); } public object[] ConvertBack(object value, Type[] targetTypes, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } #endregion }

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future and we don't want to be re-writing business logic when this happens. One data source will be our membership database which was originally implemented using the default ASP.Net Membership Provider. The membership provider is tied to the System.Web.Security namespace but we have a design guideline requiring that our domain model layer is not dependent upon System.Web (or any other implementation/environment dependency) as it will be consumed in different environments - nor do we want our websites directly communicating with databases. I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool. Are there are any glaring holes in this approach - I haven't worked closely with the MembershipProvider class so may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac

  • Is sqlite a must for merb?

    - by mayank
    Hello all, I have a question regarding merb's dependency on sqlite. I am going to install merb on my machine, and I don't have sqlite installed on it. I tried the command "gem install merb" and faced the following error. If there is any way to install merb with mysql, please tell me. Thanks, Mayank
      Building native extensions. This could take a while...
      ERROR: Error installing merb: ERROR: Failed to build gem native extension.
      /usr/bin/ruby1.8 extconf.rb checking for sqlite3.h... no
      * extconf.rb failed *
      Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.
      Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.8 --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib
      Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2 for inspection.
      Results logged to /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2/ext/do_sqlite3/gem_make.out

  • Should business objects be able to create their own DTOs?

    - by Sam
    Suppose I have the following class: class Camera { public Camera( double exposure, double brightness, double contrast, RegionOfInterest regionOfInterest) { this.exposure = exposure; this.brightness = brightness; this.contrast = contrast; this.regionOfInterest = regionOfInterest; } public void ConfigureAcquisitionFifo(IAcquisitionFifo acquisitionFifo) { // do stuff to the acquisition FIFO } readonly double exposure; readonly double brightness; readonly double contrast; readonly RegionOfInterest regionOfInterest; } ... and a DTO to transport the camera info across a service boundary (WCF), say, for viewing in a WinForms/WPF/Web app: using System.Runtime.Serialization; [DataContract] public class CameraData { [DataMember] public double Exposure { get; set; } [DataMember] public double Brightness { get; set; } [DataMember] public double Contrast { get; set; } [DataMember] public RegionOfInterestData RegionOfInterest { get; set; } } Now I can add a method to Camera to expose its data: class Camera { // blah blah public CameraData ToData() { var regionOfInterestData = regionOfInterest.ToData(); return new CameraData() { Exposure = exposure, Brightness = brightness, Contrast = contrast, RegionOfInterestData = regionOfInterestData }; } } or, I can create a method that requires a special IReporter to be passed in for the Camera to expose its data to. This removes the dependency on the Contracts layer (Camera no longer has to know about CameraData): class Camera { // beep beep I'm a jeep public void ExposeToReporter(IReporter reporter) { reporter.GetCameraInfo(exposure, brightness, contrast, regionOfInterest); } } So which should I do? I prefer the second, but it requires the IReporter to have a CameraData field (which gets changed by GetCameraInfo()), which feels weird. Also, if there is any even better solution, please share with me! I'm still an object-oriented newb.

  • LPX-00607 for ora:contains in java but not sqlplus

    - by Windle
    Hey all, I am trying to do some SQL queries against Oracle 11g and am having issues using ora:contains. I am using Spring's JDBC implementation and my code generates the sql statement: select * from view_name where column_a = ? and column_b = ? and existsNode(xmltype(clob_column), 'record/name [ora:contains(text(), "name1") 0]', 'xmlns:ora="http://xmlns.oralce.com/xdb"') = 1 I have removed the actual view / column names obviously, but when I copy that into sqlplus and substitute in random values, the select executes properly. When I try to run it in my DAO code I get this stack trace: org.springframework.jdbc.UncatergorizedSQLException: PreparedStatementCallback; uncatergorizedSQLException for SQL [the big select above]; SQL state [99999]; error code [31011]; ORA-31011: XML parsing failed. ORA-19202: Error occured in XML processing LPX-00607: Invalid reference: 'contains' ;nested exception is java.sql.SQLException: ORA-31011: XML parsing failed ORA-19202: Error occured in XML processing LPX-00607: Invalid reference: 'contains' (continues on like this for awhile....) I think it is worth mentioning that I am using maven and it is possible I am missing some dependency that is required for this. Sorry the post is so long, but I wanted to err on the side of too much info. Thanks for taking the time to read this at least =) -Windle

  • Is sqlite required for merb?

    - by mayank
    I have a question regarding merb's dependency on sqlite. I am going to install merb on my machine and I don't have sqlite installed on it. I tried the command "gem install merb" and saw the following error. If there is any way to install merb with mysql, please tell me.
      Building native extensions. This could take a while...
      ERROR: Error installing merb: ERROR: Failed to build gem native extension.
      /usr/bin/ruby1.8 extconf.rb checking for sqlite3.h... no
      * extconf.rb failed *
      Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options.
      Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/usr/bin/ruby1.8 --with-sqlite3-dir --without-sqlite3-dir --with-sqlite3-include --without-sqlite3-include=${sqlite3-dir}/include --with-sqlite3-lib --without-sqlite3-lib=${sqlite3-dir}/lib
      Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2 for inspection.
      Results logged to /usr/lib/ruby/gems/1.8/gems/do_sqlite3-0.10.2/ext/do_sqlite3/gem_make.out

  • JBoss Developer Studio exploded SAR/EAR/WAR deployment

    - by dr_hoppa
    I am new to JBoss DS and I could not figure out how to make a folder from my project deployable, an exploded SAR in my case. If I create a new server in the JBoss server view / JBoss AS perspective, select a SAR and make it deployable, then the SAR file can be deployed on the JBoss server. Another approach that I have tried is to mark the *-service.xml file as a deployable element, same procedure as with the SAR file, and to add the Eclipse project as a dependency to the classpath of the server ('Open launch configuration' / 'Classpath'); after doing this I got an error from JBoss telling me that the JBoss classes are not found. The compromise solution that I have found is to create a junction/symlink from the output of the Eclipse project's 'bin' folder into the JBoss 'server//deploy/xxx.sar'; this is a temporary solution but will not scale in the future as multiple services must be created. What I would need: the ability to mark the exploded folder as deployable, to have the classes/*-service.xml files loaded from there, and to have hot-deployability for them. Any suggestions would help.

  • Cucumber could not find table, but it's there. What is going on?

    - by JZ
    I'm working with cucumber and I'm running into difficulties. When I run "cucumber features", I am met with errors, cucumber is unable to find my requests table. What obvious mistake am I making? Thank you in advance! Bash: justin-zollarss-mac-pro:conversion justinz$ cucumber features Using the default profile... /Users/justinz/.gem/ruby/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement F-- (::) failed steps (::) Could not find table 'requests' (ActiveRecord::StatementInvalid) ./features/article_steps.rb:3 ./features/article_steps.rb:2:in `each' ./features/article_steps.rb:2:in `/^I have requests named (.+)$/' features/manage_articles.feature:7:in `Given I have requests named Foo, Bar' Failing Scenarios: cucumber features/manage_articles.feature:6 # Scenario: Conversion 1 scenario (1 failed) 3 steps (1 failed, 2 skipped) 0m0.154s justin-zollarss-mac-pro:conversion justinz$ Manage_articles.feature: Feature: Manage Articles In order to make sales As a customer I want to make conversions Scenario: Conversion Given I have requests named Foo, Bar When I go to the list of customers Then I should see a new "customer" Article_steps.rb: Given /^I have requests named (.+)$/ do |firsts| firsts.split(', ').each do |first| Request.create!(:first => first) pending # express the regexp above with the code you wish you had end end Then /^I should see a new "([^"]*)"$/ do |arg1| pending # express the regexp above with the code you wish you had end DB schema: ActiveRecord::Schema.define(:version => 20100528011731) do create_table "requests", :force => true do |t| t.string "institution" t.string "website" t.string "type" t.string "users" t.string "first" t.string "last" t.string "jobtitle" t.string "phone" t.string "email" t.datetime "created_at" t.datetime "updated_at" end end
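    A hedged first thing to check, not stated in the post: the test/cucumber database may simply have no schema loaded yet, in which case something along these lines usually clears the "Could not find table" error:

      rake db:test:prepare
      # or, if cucumber runs against its own environment:
      RAILS_ENV=cucumber rake db:schema:load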

  • jQuery eval of ajax inline script not throwing errors

    - by Josh
    http://stackoverflow.com/questions/606794/debugging-ajax-code-with-firebug This question is quite similar, though old and without real answers. I'm currently putting together an app that has scripts that get loaded in with an ajax request. An example: var main = _main.get(); main.load( someurl ); Where someurl is a page that contains an inline script element: <script type="text/javascript"> $(document).ready( function(){ var activities = new activities(); activities.init(); }); </script> jQuery will do a line by line eval of js that lives in inline script tags. The problem is, I get no errors or any information whatsoever in firebug when something goes awry. Does anyone have a good solution for this? Or a better practice for loading pages which contain javascript functionality? Edit: A little progress... so at the top of the page that is being loaded in via ajax, I have another script that was being included like this: <script type="text/javascript" src="javascript/pages/activities.js"></script> When I moved the inline $(document).ready() code in the page to the end of this included file, instead, syntax errors were now properly getting thrown. As an aside, I threw a console.log() into the inline script tag, and it was being logged just fine. I also tried removing the $(document).ready() altogether, and also switching it out for a $(window).load() event. No difference. May have something to do with the inline scripts dependency on the included activities.js, I guess. :: shakes head :: javascript can be a nightmare.
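    One low-tech way to surface errors that the ajax-eval path swallows (a sketch, nothing more) is to wrap the inline handler's body in a try/catch and log explicitly:

      <script type="text/javascript">
      $(document).ready(function () {
          try {
              var a = new activities();
              a.init();
          } catch (e) {
              console.log('inline script error:', e);   // shows up in Firebug even when the eval hides it
          }
      });
      </script>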

  • How to install PySide v0.3.1 on Mac OS X?

    - by ivo
    I'm trying to install PySide v0.3.1 on Mac OS X, for Qt development in Python. As a prerequisite, I have installed CMake and the Qt SDK. I have gone through the documentation and come up with the following installation script:
      export PYSIDE_BASE_DIR="<my_dir>"
      export APIEXTRACTOR_DIR="$PYSIDE_BASE_DIR/apiextractor-0.5.1"
      export GENERATORRUNNER_DIR="$PYSIDE_BASE_DIR/generatorrunner-0.4.2"
      export SHIBOKEN_DIR="$PYSIDE_BASE_DIR/shiboken-0.3.1"
      export PYSIDE_DIR="$PYSIDE_BASE_DIR/pyside-qt4.6+0.3.1"
      export PYSIDE_TOOLS_DIR="$PYSIDE_BASE_DIR/pyside-tools-0.1.3"
      pushd .
      cd $APIEXTRACTOR_DIR
      cmake .
      cd $GENERATORRUNNER_DIR
      cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR .
      cd $SHIBOKEN_DIR
      cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR -DGeneratorRunner_DIR=$GENERATORRUNNER_DIR .
      cd $PYSIDE_DIR
      cmake -DShiboken_DIR=$SHIBOKEN_DIR/libshiboken -DGENERATOR=$GENERATORRUNNER_DIR .
      cd $PYSIDE_TOOLS_DIR
      cmake .
      popd
    Now, I don't know if this installation script is ok, but apparently everything works fine. Each component (apiextractor, generatorrunner, shiboken, pyside-qt and pyside-tools) gets compiled into its own directory. The problem is that I don't quite understand how PySide gets into the system's python environment. In fact, when I start a python shell, I cannot import PySide:
      >>> import PySide
      Traceback (most recent call last):
        File "<stdin>", line 1, in <module>
      ImportError: No module named PySide
    Note: I am aware of the Installing PySide - OSX question, but that question is not relevant anymore, because it is about a specific dependency on the Boost libraries, and with version 0.3.0 PySide moved from a Boost-based source code to a CPython one.
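    One thing that stands out, offered as a guess since it is not stated in the post: the script only configures each component with cmake; nothing is ever built or installed into Python's site-packages. A hedged version of the missing build/install step, shown for the first two components:

      cd $APIEXTRACTOR_DIR && cmake . && make && sudo make install
      cd $GENERATORRUNNER_DIR && cmake -DApiExtractor_DIR=$APIEXTRACTOR_DIR . && make && sudo make install
      # ... and the same "make && sudo make install" for shiboken, pyside-qt and pyside-tools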

    Read the article

  • Spring 3 & Jersey

    - by smeg4brains
    Hi guys! I'm currently struggling to integrate a REST service based on Jersey and Spring. I'm using Spring 3.0.2-RELEASE and jersey-spring 1.2, but jersey-spring adds a dependency on Spring 2.5.6 to my project, which of course conflicts with the 3.0.2-RELEASE and gives me the following error:
      11:58:25,409 ERROR org.springframework.web.context.ContextLoader:215 - Context initialization failed
      org.springframework.beans.factory.BeanDefinitionStoreException: Unexpected exception parsing XML document from class path resource [cloverjazz-web-context.xml]; nested exception is java.lang.NoSuchMethodError: org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.getLocalName(Lorg/w3c/dom/Node;)Ljava/lang/String;
      at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:420)
      at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:342)
      at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:310)
      at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143)
      at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178)
      at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149)
      at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124)
      at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:92)
      at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:123)
      at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:422)
      at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:352)
      at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255)
      at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
      at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
    Is there a way to get around this issue? Does anyone know? Thanks!
