Search Results

Search found 20270 results on 811 pages for 'package management'.


  • How to express inter project dependencies in Eclipse PDE

    - by Roland Tepp
    I am looking for the best practice for handling inter-project dependencies between mixed project types, where some of the projects are Eclipse plug-in/OSGi bundle projects (an RCP application) and others are plain old Java projects (web services modules). A few of the Eclipse plug-ins depend on the Java projects.

    My problem is that, at least as far as I've looked, there is no way of cleanly expressing such a dependency in the Eclipse PDE environment. I can have plug-in projects depend on other plug-in projects (via Import-Package or Require-Bundle manifest headers), but not on the plain Java projects. I seem to be able to have a project declare a dependency on a jar from another project in the workspace, but these jar files do not get picked up by either the export or the launch configuration (although Java code editing sees the libraries just fine).

    The "Java projects" are used for building services to be deployed on a J2EE container (JBoss 4.2.2 for the moment) and in some cases produce multiple jars - one for deploying to the JBoss ear and another for use by client code (an RCP application).

    The way we've "solved" this problem for now is with two more External Tools launch configurations - one for building all the jars and another for copying these jars to the plug-in projects. This works (sort of), but the "whole build" and "copy jars" targets incur quite a large build step, bypassing Eclipse's incremental build feature, and by copying the jars instead of just referencing the projects I am decoupling the dependency information and triggering quite a massive workspace refresh that eats up development time like it was candy.

    What I would like to have is a much more "natural" workspace setup that manages dependencies between projects and requests incremental rebuilds only as they are needed, lets me use client code from the service libraries in the RCP application plug-ins, and lets me launch the RCP application with all the necessary classes where they are needed. So can I have my cake and eat it too? ;)

    NOTE: To be clear, this is not so much about dependency management or module management as it is about Eclipse PDE configuration. I am well aware of products like Maven, Ivy and Buckminster, and they solve a quite different problem (once I've solved the workspace configuration issue, those products can actually come in handy for materializing the workspace and building the product).

    Read the article

  • In need of help with setting up the open source library JFreeChart

    - by ssbellows
    I am having trouble setting up the open source library JFreeChart for creating charts with Java. This is the process I have followed so far:

    1. I downloaded the latest version from their download page http://sourceforge.net/projects/jfreechart/files/.
    2. I unpacked jfreechart-1.0.13.zip into the directory C:\JFreeChart\jfreechart-1.0.13\ on my system drive. The unpacked directory contains a folder entitled "lib" which holds the packaged .jar files specified as necessary to use JFreeChart.
    3. I added the following directory to my classpath: C:\JFreeChart\jfreechart-1.0.13\lib\
    4. I then created a simple program and added the line "import org.jfree.chart.*;" to see if it would compile with a package imported from JFreeChart.
    5. I navigated to the folder containing my sample program and compiled with the following command: "javac -classpath C:\ Program.java"
    6. I was given the following error: "package org.jfree.chart does not exist"

    Could someone please give me some input as to what I have done incorrectly in this setup process? This is the first time I've tried using an open source library, so I don't have any prior experience to go on myself. Thank you very much in advance.
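    A likely cause, noted here as a hedged aside rather than a confirmed answer: pointing -classpath at a directory only picks up loose .class files, not jars, so the JFreeChart jars never reach the compile classpath. A minimal sketch of a working setup, assuming the jar names shipped with the 1.0.13 distribution (jfreechart-1.0.13.jar and jcommon-1.0.16.jar) and Java 6+ classpath wildcards; the file name Sample.java is made up for illustration:

        // Hypothetical Sample.java; compile and run with the lib jars on the classpath, e.g.:
        //   javac -classpath "C:\JFreeChart\jfreechart-1.0.13\lib\*" Sample.java
        //   java  -classpath "C:\JFreeChart\jfreechart-1.0.13\lib\*;." Sample
        import org.jfree.chart.ChartFactory;
        import org.jfree.chart.JFreeChart;
        import org.jfree.data.general.DefaultPieDataset;

        public class Sample {
            public static void main(String[] args) {
                // Build a trivial dataset and chart just to prove the classpath works.
                DefaultPieDataset dataset = new DefaultPieDataset();
                dataset.setValue("A", 60);
                dataset.setValue("B", 40);
                JFreeChart chart = ChartFactory.createPieChart("Sample", dataset, true, true, false);
                System.out.println("Created chart: " + chart.getTitle().getText());
            }
        }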

    Read the article

  • JSF in jetty-equinox, Cannot find Bean classes in other bundles!

    - by Arnold
    Hi, I have problems running JSF in an OSGi environment. I am using the Jetty web container and Equinox to provide the OSGi functionality. The structure of my application is as follows. The first bundle has all the JSF libs, the web.xml and a config.xml. It looks like this:

        bundle1
        ----src/main/java
        -------de/package
        ----------Activator.java
        ----------JSFResolver.java
        ----src/main/resource
        ------WebContent
        ----------META-INF
        -------------face-config.xml
        -------------web.xhtml
        ----------start.xhtml
        ----------include.xhtml
        ----libs (containing all JSF required jars)

    The structure of the second bundle is as follows:

        bundle2
        ---src/main/java
        ------de/package
        ----------Bean.java
        ---src/main/resource
        ------META-INF
        ---------face-config.xml
        ------WebContent
        ---------index.xhtml

    When running the application on Equinox, bundle1 is the main bundle where all the browser requests are sent. The 'index.xhtml' file in the second bundle can be retrieved by the first bundle upon request. The 'index.xhtml' in bundle2 gets its values and properties from 'Bean.java' in bundle2. The problem comes when I request 'index.xhtml': the Bean.java class is not found. I think this is because the class loader of bundle1 cannot find it; it has no knowledge of it.

    So I would like to ask if anyone knows how to solve this problem. If so, please do assist me; I have tried all the possibilities I had. Is it in fact possible to have JSF run on multiple bundles using the same FaceletsContext? Can I have separate faces-config.xml files in each bundle, all connected to the faces-config.xml files in other bundles? Can anyone please provide a solution? Sample code would help. Thanks (attachment: workspace_current.rar) Arnold

    Read the article

  • Singleton Roles in Moose

    - by mjn12
    I am attempting to write a singleton role using Perl and Moose. I understand a MooseX::Singleton module is available, but there is always resistance to requiring another CPAN module for our project. After trying this and having a little trouble, I would like to understand WHY my method is not working. The singleton role I have written is as follows:

        package Singleton;
        use Moose::Role;

        my $_singleInstance;

        around 'new' => sub {
            my $orig  = shift;
            my $class = shift;
            if (not defined $_singleInstance){
                $_singleInstance = $class->$orig(@_);
            }
            return $_singleInstance;
        };

        sub getInstance {
            return __PACKAGE__->new();
        }

        1;

    This appears to work fine when only one class uses the Singleton role. However, when two classes (ClassA and ClassB, for example) both consume the Singleton role, it appears that they both refer to a shared $_singleInstance variable. If I call ClassA->getInstance it returns a reference to a ClassA object. If I call ClassB->getInstance some time later in the same script, it returns a reference to an object of type ClassA (even though I clearly called the getInstance method for ClassB). If I don't use a role and instead copy and paste the code from the Singleton role into ClassA and ClassB, it appears to work fine. What's going on here?

    Read the article

  • How to connect from ruby to MS Sql Server

    - by apetrov
    Hi Crowd! I'm trying to connect to a SQL Server 2005 database from a *NIX machine. I have the following configuration:

        Linux 64bit
        ruby -v: ruby 1.8.6 (2007-09-24 patchlevel 111) [x86_64-linux]
        important gems: dbd-odbc (0.2.4), dbi (0.4.1),
          active record sql server adapter (as plugin), ruby-odbc 0.9996 (installed without any options)
        unixODBC is installed
        freeTDS is installed

        cat /etc/odbcinst.ini
        [FreeTDS]
        Description = TDS driver (Sybase/MS SQL)
        Driver      = /usr/lib/libtdsodbc.so
        Setup       = /usr/lib/odbc/libtdsS.so
        CPTimeout   =
        CPReuse     =
        FileUsage   = 1

    DSN:

        DRIVER=FreeTDS;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;"

    or

        DRIVER=/usr/lib/libtdsodbc.so;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;"

    I receive the following error:

        >> ActiveRecord::Base.sqlserver_connection({"mode"=>"ODBC", "adapter"=>"sqlserver", "dsn"=>my_dns)
        DBI::DatabaseError: IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified
            from /usr/lib/ruby/1.8/DBD/ODBC/ODBC.rb:95:in `connect'
            from /usr/lib/ruby/1.8/dbi.rb:424:in `connect'
            from /usr/lib/ruby/1.8/dbi.rb:215:in `connect'
            from /opt/ublip/rails/current/vendor/plugins/activerecord-sqlserver-adapter/lib/active_record/connection_adapters/sqlserver_adapter.rb:47:in `sqlserver_connection'

    It looks like ODBC is unable to find the appropriate ODBC driver, but I have no idea why. I had a problem with /usr/lib/libtdsodbc.so, which is empty in the default Debian freetds dev package, but I solved that by removing the broken package and installing from sources. Will appreciate any thoughts! Thanks & Regards. Note: I'm able to connect using the same steps on Mac 10.5.

    Read the article

  • Overriding Only Some Default Parameters in ActionScript

    - by TheDarkIn1978
    I have a function which has all optional arguments. Is it possible to override an argument of a function without supplying the first? In the code below, I'd like to keep most of the default arguments for the DrawSprite function and only override the last argument, which is the sprite's color. But how can I construct the object? DrawSprite(0x00FF00) clearly will not work.

        //Main Class
        package {
            import DrawSprite;
            import flash.display.Sprite;

            public class Start extends Sprite {
                public function Start():void {
                    var myRect:DrawSprite = new DrawSprite(0x00FF00);
                    addChild(myRect);
                }
            }
        }

        //Draw Sprite Class
        package {
            import flash.display.Sprite;
            import flash.display.Graphics;

            public class DrawSprite extends Sprite {
                private static const DEFAULT_WIDTH:Number = 100;
                private static const DEFAULT_HEIGHT:Number = 200;
                private static const DEFAULT_COLOR:Number = 0x000000;

                public function DrawSprite(spriteWidth:Number = DEFAULT_WIDTH,
                                           spriteHeight:Number = DEFAULT_HEIGHT,
                                           spriteColor:Number = DEFAULT_COLOR):void {
                    graphics.beginFill(spriteColor, 1.0);
                    graphics.drawRect(0, 0, spriteWidth, spriteHeight);
                    graphics.endFill();
                }
            }
        }

    Read the article

  • Rails deployment strategies with Bundler and JRuby

    - by brad
    I have a JRuby Rails app and I've just started using Bundler for gem dependency management. I'm interested in hearing people's opinions on deployment strategies. The docs say that bundle package will package your gems locally so you don't have to fetch them on the server (and I believe Warbler does this by default), but I personally think (for us) this is not the way to go, as our deployed code (in our case a WAR file) becomes much larger. My preference would be to mimic our Maven setup, which fetches all dependencies directly on the server AFTER the code has been copied there. Here's what I'm thinking; all comments are appreciated:

        Step 1: Build WAR file, copy to server
        Step 2: Unpack WAR on server, fetch Java dependencies with mvn
        Step 3: Use Bundler to fetch gem deps (where should these be placed?) *
        Step 4: Restart Tomcat

    * Step 3 is the step I'm a bit unclear on. Do I run bundle install with a particular target in mind?

    Again, my reasoning here is that I'd like to keep the dependencies separate from the code at deploy time. I'd also like to place all gem dependencies in the app itself so they are contained, rather than installing them in the app user's home directory (as, again, I believe is the default for Bundler).

    Read the article

  • SQL Server 2008 Express Edition Connection Problem

    - by waqasahmed
    I have VS 2008 Professional Edition. After the installation (which included SQL Server 2008), I decided to install SQL Server 2008 Express Edition with Advanced Tools (so I could get SQL Server Management Studio with it). So I uninstalled the SQL Express that came with VS 2008 and installed the standalone SQL Server Express 2008 version with advanced tools. However, when I try to log on to SQL Server Management Studio using .\SQLEXPRESS as the server name and Windows Authentication as the authentication, I get the following message:

        TITLE: Connect to Server
        ------------------------------
        Cannot connect to .\SQLEXPRESS.
        ------------------------------
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server.
        The server was not found or was not accessible. Verify that the instance name is correct and that
        SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 -
        Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
        ------------------------------
        BUTTONS: OK
        ------------------------------

    Any suggestions on how to get it to work? I have tried disabling Windows Firewall as well and still no luck. I am using Windows Vista, and the SQL Server 2008 Express SP1 patch has also been applied recently.

    Read the article

  • Python API for VirtualBox

    - by jessica
    I have made a command-line interface for VirtualBox so that VirtualBox can be controlled from a remote machine. Now I am trying to implement the command-line interface using the Python VirtualBox API. For that I have downloaded the pyvb package (the Python API documentation shows the functions for this under the pyvb package). But when I call pyvb.vb.VB.startVM(instance of VB class, pyvb.vm.vbVM), it fails. The server-side code is:

        from pyvb.constants import *
        from pyvb.vm import *
        from pyvb.vb import *
        import xpcom
        import pyvb
        import os
        import socket
        import threading

        class ClientThread ( threading.Thread ):
           # Override Thread's __init__ method to accept the parameters needed:
           def __init__ ( self, channel, details ):
              self.channel = channel
              self.details = details
              threading.Thread.__init__ ( self )

           def run ( self ):
              print 'Received connection:', self.details [ 0 ]
              while 1:
                 s = self.channel.recv ( 1024 )
                 if (s != 'end'):
                    if (s == 'start'):
                       v = VB()
                       pyvb.vb.VB.startVM(v, pyvb.vm.vbVM)
                 else:
                    self.channel.close()
                    break
              print 'Closed connection:', self.details [ 0 ]

        server = socket.socket ( socket.AF_INET, socket.SOCK_STREAM )
        server.bind ( ( '127.0.0.1', 2897 ) )
        server.listen ( 5 )
        while True:
           channel, details = server.accept()
           ClientThread ( channel, details ).start()

    It shows an error:

        Exception in thread Thread-1:
        Traceback (most recent call last):
          File "/usr/lib/python2.5/threading.py", line 486, in __bootstrap_inner
            self.run()
          File "news.py", line 27, in run
            pyvb.vb.VB.startVM(v,pyvb.vm.vbVM.getUUID(m))
          File "/usr/lib/python2.5/site-packages/pyvb-0.0.2-py2.5.egg/pyvb/vb.py", line 65, in startVM
            cmd='%s %s'%(VB_COMMAND_STARTVM, vm.getUUID())
        AttributeError: 'str' object has no attribute 'getUUID'

    Read the article

  • Add jar to apache felix in Pom file?

    - by drozzy
    How can I add a jar to my bundle in Apache Felix? I am using Maven, with maven-bundle-plugin, to manage my bundles in an OBR for me. But I am not sure where in the POM to declare the dependency on the jar, so that Maven correctly compiles it into the final bundle. This is how the plugin looks in my POM:

        <plugin>
          <groupId>org.apache.felix</groupId>
          <artifactId>maven-bundle-plugin</artifactId>
          <version>2.1.0</version>
          <extensions>true</extensions>
          <configuration>
            <instructions>
              <Bundle-Category>sample</Bundle-Category>
              <Bundle-SymbolicName>${artifactId} </Bundle-SymbolicName>
              <Export-Package>
                //blahblah
              </Export-Package>
            </instructions>
            <!-- OBR -->
            <remoteOBR>repo-rel</remoteOBR>
            <prefixUrl>file:///C:/Users/blah/Projects/Eclipse3.6-RCP-64/Felix/obr-repo/releases</prefixUrl>
            <ignoreLock>true</ignoreLock>
          </configuration>

    Read the article

  • SSIS Lookup with Lookup Component Vs Script Component.

    - by Nev_Rahd
    Hello, I need to load dimensions from EDW tables (which maintain historical records) of Key-Value-Parameter type. My scenario is fine if I get records in the EDW like these:

        Key1  Key2  Code  Value  EffectiveDate        EndDate     CurrentFlag
        100   555   01    AAA    2010-01-01 11.00.00  9999-12-31  Y
        100   555   02    BBB    2010-01-01 11.00.00  9999-12-31  Y

    These need to be loaded into the DM by pivoting, as the Key1 and Key2 combination makes the natural key for the DM:

        SK  NK       01   02   EffectiveDate        EndDate     CurrentFlag
        1   100-555  AAA  BBB  2010-01-01 11.00.00  9999-12-31  Y

    My SSIS package does all this pivoting well: it looks up the incoming NK in the DIM; if it is new it inserts; otherwise, with a further lookup on effective date, it determines whether the incoming row for the same natural key has a change in any attribute. If so, it updates the current record by setting its end date and inserts the new one with the new attribute value, pulling the most recent record's values for the other attributes.

    My problem is that if the same natural key comes twice with the same attribute in a single extract, my first lookup (on the natural key) lets both records pass and tries to insert both, which is where it fails. If I take distinct records on the NK, the second one is not picked up and I need to run the package again.

    So my question is: how can I configure the lookup, or what alternative approach is there, to handle the scenario where the same NK comes twice in a single extract? I should be able to insert the first record if it does not exist in the Dim table, and the second one should then be treated as an update with reference to the one inserted above. Not sure this makes sense; it is what I am trying to explain. I will attach a screenshot once I'm back at my desk (on Monday). Thanks

    Read the article

  • How can I coordinate code review tool and RCS (specifically git)

    - by Chris Nelson
    We're committed to git for code management. We're trying to find a tool that will help us systematize code reviews. We're considering Gerrit and Code Collaborator but would welcome other suggestions. We're having a problem answering the question, "How do we know every commit was reviewed?" (Or, "What commits have yet to be reviewed?")

    One answer would be to submit every commit or every push for review and track incomplete reviews in the review tool. I'm not entirely happy with relying on another tool - especially if it's not open source - to tell us this.

    What seems to be a better answer is to rely on sign-offs in git (e.g., "Signed-off-by: Chris Nelson") and use a hook in the review tool to sign off commits on behalf of the reviewer. An advantage of this is that if we use some other review mechanism for some commits, we have just one place to look for results. One problem with this is that we can't require review before push, because the review tool is unlikely to have access to the developer's private repository clone to add the sign-off.

    Any ideas on integrating code review with code management to achieve ease of use and high visibility of unreviewed changes?

    Read the article

  • Call Init from XUL after Page Loads (Firefox add-on)

    - by mattyboy123
    Hi all, I've been working on some code in js/html and it works great. I'm now trying to package it into an add-on for Firefox, and having some issues getting the XUL document correct.

    PLAIN OLD HTML/JS

    In my html test file, between <head></head>, I have:

        <script type="text/javascript" src="js/MyCode.js"></script>

    At the end of the test file, before </body>, I have:

        <script type="text/javascript">MyCode.Static.Init();</script>

    FIREFOX ADD-ON: OVERLAY.XUL

    In an overlay.xul file in the extension package I have:

        <?xml version="1.0"?>
        <overlay id="mycode" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
          <script type="application/x-javascript" src="chrome://mycode/content/MyCode.js"></script>
          <script>
            window.addEventListener("load", function () {
              gBrowser.addEventListener("load", MyCode.Static.Init, true);
            }, false);
          </script>
        </overlay>

    This does not seem to enter the method, but then again I'm not even sure if I've got the listeners firing properly. Would this be the correct way to duplicate what I was doing in plain old html/js?

    Read the article

  • Qmake does not specify a valid qt

    - by Comptrol
    After installing the Qt SDK for Open Source C++ development on Mac OS, I followed the respective steps ("Note for the binary package: If you have the binary package, simply double-click on the Qt.mpkg and follow the instructions to install Qt."). Yes, that is all I have done to install Qt on Mac OS X. Everything was going fine until I ran a sample application, whose compile output was:

        No valid Qt version set. Set one in Preferences
        Error while building project qtilk
        When executing build step 'QMake'
        Canceled build.

    Then I tried to change the respective Qt version in Preferences, and when I hovered over the Path I realized my mkspec isn't set. So I tried querying qmake with qmake -query:

        QT_INSTALL_PREFIX:/
        QT_INSTALL_DATA:/usr/local/Qt4.6
        QT_INSTALL_DOCS:/Developer/Documentation/Qt
        QT_INSTALL_HEADERS:/usr/include
        QT_INSTALL_LIBS:/Library/Frameworks
        QT_INSTALL_BINS:/Developer/Tools/Qt
        QT_INSTALL_PLUGINS:/Developer/Applications/Qt/plugins
        QT_INSTALL_TRANSLATIONS:/Developer/Applications/Qt/translations
        QT_INSTALL_CONFIGURATION:/Library/Preferences/Qt
        QT_INSTALL_EXAMPLES:/Developer/Examples/Qt/
        QT_INSTALL_DEMOS:/Developer/Examples/Qt/Demos
        QMAKE_MKSPECS:/usr/local/Qt4.6/mkspecs
        QMAKE_VERSION:2.01a
        QT_VERSION:4.6.2

    QMAKE_MKSPECS seems to be set here?? Will setting my mkspec solve my building problem? I tried setting it by typing export mkspec=macx-g++. Still, mkspec seems not to be set to anything. I am all ears waiting for your answers. Thanks in advance.

    Read the article

  • Google App Engine: Unit testing concurrent access to memcache

    - by Phuong Nguyen de ManCity fan
    Would you guys show me a way to simulate concurrent access to memcache on Google App Engine? I'm trying with LocalServiceTestHelper and threads but don't have any luck. Every time I try to access memcache within a thread, I get this error:

        ApiProxy$CallNotFoundException: The API package 'memcache' or call 'Increment()' was not found

    I guess that the testing library of the GAE SDK tries to mimic the real environment and thus sets up the environment only for one thread (the thread running the test), so it cannot be seen by other threads. Here is a piece of code that reproduces the problem:

        package org.seamoo.cache.memcacheImpl;

        import org.testng.Assert;
        import org.testng.annotations.AfterMethod;
        import org.testng.annotations.BeforeMethod;
        import org.testng.annotations.Test;

        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;
        import com.google.appengine.tools.development.testing.LocalMemcacheServiceTestConfig;
        import com.google.appengine.tools.development.testing.LocalServiceTestHelper;

        public class MemcacheTest {
            LocalServiceTestHelper helper;

            public MemcacheTest() {
                LocalMemcacheServiceTestConfig memcacheConfig = new LocalMemcacheServiceTestConfig();
                helper = new LocalServiceTestHelper(memcacheConfig);
            }

            @BeforeMethod
            public void setUp() {
                helper.setUp();
            }

            /**
             * @see LocalServiceTest#tearDown()
             */
            @AfterMethod
            public void tearDown() {
                helper.tearDown();
            }

            @Test
            public void memcacheConcurrentAccess() throws InterruptedException {
                final MemcacheService service = MemcacheServiceFactory.getMemcacheService();
                Runnable runner = new Runnable() {
                    @Override
                    public void run() {
                        service.increment("test-key", 1L, 1L);
                        try {
                            Thread.sleep(200L);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                        service.increment("test-key", 1L, 1L);
                    }
                };
                Thread t1 = new Thread(runner);
                Thread t2 = new Thread(runner);
                t1.start();
                t2.start();
                while (t1.isAlive()) {
                    Thread.sleep(100L);
                }
                Assert.assertEquals((Long) (service.get("test-key")), new Long(4L));
            }
        }
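    A hedged aside on one possible workaround, not part of the original question: the local test helper binds the API environment to the thread that called setUp(), so one option is to capture that environment via the SDK's ApiProxy and install it on each worker thread before it touches memcache. The sketch below assumes the captured Environment can be shared across threads in the local test stack; the class and method names other than the ApiProxy calls are made up for illustration.

        // Sketch: propagate the test thread's ApiProxy environment to worker threads.
        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;
        import com.google.apphosting.api.ApiProxy;

        public class ThreadedMemcacheSketch {
            static Thread incrementInThread(final MemcacheService service) {
                // Capture the environment bound to the current (test) thread.
                final ApiProxy.Environment env = ApiProxy.getCurrentEnvironment();
                return new Thread(new Runnable() {
                    @Override
                    public void run() {
                        // Install the captured environment before any App Engine API call.
                        ApiProxy.setEnvironmentForCurrentThread(env);
                        service.increment("test-key", 1L, 1L);
                    }
                });
            }

            public static void main(String[] args) throws InterruptedException {
                // Assumes LocalServiceTestHelper.setUp() has already run on this thread.
                MemcacheService service = MemcacheServiceFactory.getMemcacheService();
                Thread t = incrementInThread(service);
                t.start();
                t.join();
            }
        }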

    Read the article

  • How to rename Java packages without breaking Subversion history?

    - by M. Joanis
    Hello everyone, the company I'm working for is starting up, and they changed their name in the process. So we still use the package name com.oldname, because we are afraid of breaking the file change history, or the ancestry links between versions, or whatever else we could break (I don't think I'm using the right terms, but you get the concept). We use Eclipse, TortoiseSVN and Subversion.

    I found somewhere that I should do it in several steps to prevent incoherence between the contents of the .svn folders and the package names in the Java files:

        1. First use TortoiseSVN to rename the directory, updating the .svn directories.
        2. Then manually rename the directory back to the original name.
        3. Finally use Eclipse to rename the packages (refactor) back to the new name, updating the Java files.

    That seems good to me, but I need to know if the ancestry and history and everything else will still be coherent and working well. I don't have the keys to that server; that's why I don't hastily back things up and try one or two things. I would like to come up with a good reason not to do it, or a way of doing it that works. Thank you for your help, M. Joanis

    Read the article

  • Perl: Exporting variables in a subclass

    - by Jonathan
    I have a base class like this:

        package MyClass;
        use vars qw/$ME list of vars/;
        use Exporter;
        @ISA = qw/Exporter/;
        @EXPORT_OK = qw/ many variables & functions/;
        %EXPORT_TAGS = (all => \@EXPORT_OK );

        sub my_method { }
        sub other_methods etc { }
        --- more code---

    I want to subclass MyClass, but only for one method:

        package MySubclass;
        use MyClass;
        use vars qw/@ISA/;
        @ISA = 'MyClass';

        sub my_method {
            --- new method
        }

    And I want to call this MySubclass like I would the original MyClass, and still have access to all of the variables and functions from Exporter. However, I am having problems getting the Exporter variables from the original class, MyClass, to export correctly. Do I need to run Exporter again inside the subclass? That seems redundant and unclear.

    Example file:

        #!/usr/bin/perl
        use MySubclass /$ME/;
        -- rest of code

    But I get compile errors when I try to import the $ME variable. Any suggestions?

    Read the article

  • Problems with Activity lifecycle during VideoView playback.

    - by Alex Volovoy
    Hi all, I've run into another problem with VideoView. When video is playing and I put the device to sleep using the hardware button, onPause is called. But it is followed by:

        03-17 11:26:33.779: WARN/ActivityManager(884): Activity pause timeout for HistoryRecord{4359f620 com.package/com.package.VideoViewActivity}

    And then I get onStart/onResume again and the video starts playing. I've tried moving code around to onStart/onStop; it doesn't seem to make a difference. Sample code:

        public class VideoViewActivity extends Activity {
            private String path = "";
            private VideoView mVideoView;
            private static final String MEDIA_URL = "media_url";

            @Override
            public void onCreate(Bundle icicle) {
                super.onCreate(icicle);
                setContentView(R.layout.videoview);
                mVideoView = (VideoView) findViewById(R.id.surface_view);
                path = getIntent().getStringExtra(MEDIA_URL);
            }

            @Override
            public void onResume() {
                super.onResume();
                mVideoView.setVideoPath(path);
                mVideoView.setMediaController(new MediaController(this));
                mVideoView.requestFocus();
                mVideoView.start();
            }

            @Override
            public void onPause() {
                super.onPause();
                mVideoView.stopPlayback();
                mVideoView.setMediaController(null);
            }
        }
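    A hedged aside, not part of the original question: one common workaround for playback restarting after a pause/resume cycle is to pause and remember the position in onPause instead of stopping playback, then seek back in onResume. The fragment below is a hypothetical variant of the two lifecycle methods in the activity above, reusing its mVideoView field; it is a sketch under that assumption, not a fix for the pause-timeout warning itself.

        // Remember where playback was instead of tearing it down completely.
        private int savedPosition = 0;

        @Override
        public void onPause() {
            super.onPause();
            savedPosition = mVideoView.getCurrentPosition();
            mVideoView.pause();
        }

        @Override
        public void onResume() {
            super.onResume();
            if (savedPosition > 0) {
                mVideoView.seekTo(savedPosition);  // resume from the saved position
            }
            mVideoView.start();
        }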

    Read the article

  • LaTex: why partially showing up references?

    - by HH
    The bibliography style part may be the problem. If I do not cite a reference, does it still show up? I have listed all the errors below; the file compiles, so I don't know whether they are related to the partially-showing-up references. For example, a work with many authors gets only one author listed. I want to see references fully, not partially.

    Headers:

        $ grep bib header.tex
        \usepackage{natbib}
        \bibliographystyle{abbrvnat}

    Errors:

        $ grep -n -A 7 -B 7 Error *.log
        combined.log-505-! Illegal unit of measure (pt inserted).
        combined.log-506-<to be read again>
        combined.log-507- \futurelet
        combined.log-508-l.353 \hline
        combined.log-509-
        combined.log-510-?
        combined.log-511-
        combined.log:512:! Package caption Error: cite undefined.
        combined.log-513-
        combined.log-514-See the caption package documentation for explanation.
        combined.log-515-Type H <return> for immediate help.
        combined.log-516- ...
        combined.log-517-
        combined.log-518-l.374 ...n={CPU O(mlog(n))}, cite={topcoder:node}]
        combined.log-519-
        --
        combined.log-559- []
        combined.log-560-
        combined.log-561-) [10]
        combined.log-562-\openout2 = `references.aux'.
        combined.log-563-
        combined.log-564- (./references.tex
        combined.log-565-
        combined.log:566:! LaTeX Error: \include cannot be nested.
        combined.log-567-
        combined.log-568-See the LaTeX manual or LaTeX Companion for explanation.
        combined.log-569-Type H <return> for immediate help.
        combined.log-570- ...
        combined.log-571-
        combined.log-572-l.1 \include{timeUse.tex}

    Bibs.bib:

        @misc{ Gundersen,
            author = "G. Gundersen",
            title = "Data Structures in Java for Matrix Computations",
            year = "2002"
        }

        @book{ Lennart,
            author = "R. Lennart",
            title = "Mathematics Handbook for Science and Engineering BETA",
            year = "2004"
        }

    Read the article

  • Restricting dynamically loaded classes and jars based on a security policy

    - by Max
    Hi, I would like to dynamically load a set of jars or classes (i.e. plugins loaded at runtime). At the same time, I would like to restrict what these plugins are able to do in the JVM. For a test case, I would like to restrict them from pretty much everything (right now I'm just allowing one System.getProperty value to be read). I am currently using a security policy file, but I'm having difficulty specifying a policy for one folder or package in my codebase but not another. Here is how my policy looks now:

        grant codeBase "file:/home/max/programming/java/plugin/plugins/" {
            permission java.util.PropertyPermission "java.version", "read";
        };

        grant codeBase "file:/home/max/programming/java/plugin/api/" {
            permission java.security.AllPermission;
        };

    With this (for testing purposes), all files in the plugins package and folder are restricted, but the classes in the api folder are not. Is this possible? Do I have to create a custom class loader? Is there a better way to go about doing this? Thanks.
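    A hedged aside, not from the original question: an alternative to folder-based policy grants is to run the plugin code under an explicitly restricted AccessControlContext, which limits it regardless of where its classes were loaded from. The sketch below assumes a security manager is active (for example the JVM was started with -Djava.security.manager, with the host code itself granted AllPermission as in the api grant above); the class and method names are made up for illustration, while the java.security calls are the standard API.

        // Sketch: run untrusted plugin logic with only "read java.version" granted.
        import java.security.AccessControlContext;
        import java.security.AccessController;
        import java.security.CodeSource;
        import java.security.Permissions;
        import java.security.PrivilegedAction;
        import java.security.ProtectionDomain;
        import java.util.PropertyPermission;

        public class PluginSandboxSketch {
            static void runRestricted(final Runnable pluginCode) {
                // Everything not added here is denied while pluginCode runs.
                Permissions perms = new Permissions();
                perms.add(new PropertyPermission("java.version", "read"));
                ProtectionDomain domain = new ProtectionDomain(
                        new CodeSource(null, (java.security.cert.Certificate[]) null), perms);
                AccessControlContext restricted =
                        new AccessControlContext(new ProtectionDomain[] { domain });

                // The restricted context is intersected with the caller's permissions.
                AccessController.doPrivileged(new PrivilegedAction<Void>() {
                    @Override
                    public Void run() {
                        pluginCode.run();
                        return null;
                    }
                }, restricted);
            }

            public static void main(String[] args) {
                runRestricted(new Runnable() {
                    @Override
                    public void run() {
                        // Permitted by the PropertyPermission above.
                        System.out.println(System.getProperty("java.version"));
                        // Reading other properties or files here would throw AccessControlException.
                    }
                });
            }
        }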

    Read the article

  • Google App Engine ClassNotPersistenceCapableException

    - by Frank
    I have the following class:

        import javax.jdo.annotations.IdGeneratorStrategy;
        import javax.jdo.annotations.IdentityType;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;
        import javax.jdo.annotations.PrimaryKey;
        import com.google.appengine.api.datastore.*;

        @PersistenceCapable(identityType=IdentityType.APPLICATION)
        public class PayPal_Message {
            @PrimaryKey
            @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY)
            private Long id;
            @Persistent
            private Text content;
            @Persistent
            private String time;

            public PayPal_Message(Text content, String time) {
                this.content = content;
                this.time = time;
            }

            public Long getId() { return id; }
            public Text getContent() { return content; }
            public String getTime() { return time; }
            public void setContent(Text content) { this.content = content; }
            public void setTime(String time) { this.time = time; }
        }

    It used to be in a package and worked fine. Now I have put all classes in the default package, which caused this error:

        org.datanucleus.jdo.exceptions.ClassNotPersistenceCapableException: The class "The class "PayPal_Message" is not
        persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the
        CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data/annotations for the class are not found." is not
        persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the
        CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data for the class is not found.
        NestedThrowables:
        org.datanucleus.exceptions.ClassNotPersistableException: The class "PayPal_Message" is not persistable. This means
        that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden
        by an unenhanced version), or the Meta-Data/annotations for the class are not found.

    What should I do to fix it?

    Read the article

  • Anything speaking against the bitnami.org Ruby/Rails/Redmine Stack?

    - by Pekka
    I am looking to set up a Redmine server on a Windows virtual machine on my local workstation. (Background in this related question.) I have zero knowledge of Ruby or Rails, and while Redmine may be an opportunity to dip into those platforms somewhat, my first goal is to get it running as quickly and easily as possible. For that, I am eyeing the Bitnami Redmine package. It promises a point-and-click install and a self-contained environment with everything you need.

    Apart from the learning factor, are there any serious limitations this method implies? Any serious restrictions on customizability? I will be wanting to customize the template right away, for example, and install plugins. The package looks OK to me, but before I install it I was curious to know whether anybody would advise against it and why.

    Edit: The first impression is great. From 0 to a working Redmine installation in twelve minutes! Wow.

    Read the article

  • Wrapping FUSE from Go

    - by Matt Joiner
    I'm playing around with wrapping FUSE with Go. However, I'm stuck on how to deal with struct fuse_operations. I can't seem to expose the operations struct by declaring type Operations C.struct_fuse_operations, as the members are lower case, and my pure-Go sources would have to use C hackery to set the members anyway. My first error in this case is "can't set getattr" in what looks to be the Go equivalent of a default copy constructor.

    My next attempt is to expose an interface that expects GetAttr, ReadLink etc., and then generate C.struct_fuse_operations and bind the function pointers to closures that call the given interface. This is what I've got (explanation continues after the code):

        package fuse

        // #include <fuse.h>
        // #include <stdlib.h>
        import "C"

        import (
            //"fmt"
            "os"
            "unsafe"
        )

        type Operations interface {
            GetAttr(string, *os.FileInfo) int
        }

        func Main(args []string, ops Operations) int {
            argv := make([]*C.char, len(args) + 1)
            for i, s := range args {
                p := C.CString(s)
                defer C.free(unsafe.Pointer(p))
                argv[i] = p
            }
            cop := new(C.struct_fuse_operations)
            cop.getattr = func(*C.char, *C.struct_stat) int {}
            argc := C.int(len(args))
            return int(C.fuse_main_real(argc, &argv[0], cop, C.size_t(unsafe.Sizeof(cop)), nil))
        }

        package main

        import (
            "fmt"
            "fuse"
            "os"
        )

        type CpfsOps struct {
            a int
        }

        func (me *CpfsOps) GetAttr(string, *os.FileInfo) int {
            return -1;
        }

        func main() {
            fmt.Println(os.Args)
            ops := &CpfsOps{}
            fmt.Println("fuse main returned", fuse.Main(os.Args, ops))
        }

    This gives the following error:

        fuse.go:21[fuse.cgo1.go:23]: cannot use func literal (type func(*_Ctype_char, *_Ctype_struct_stat) int) as type *[0]uint8 in assignment

    I'm not sure what to pass to these members of C.struct_fuse_operations, and I've seen mention in a few places that it's not possible to call from C back into Go code. If it is possible, what should I do? How can I provide the "default" values for interface functions that act as though the corresponding C.struct_fuse_operations member is set to NULL?

    Read the article

  • application packaging

    - by user285825
    Application packaging (software deliverables) is a vague concept to me. I have worked on web applications before, so all I know is how to prepare the deliverable, i.e. a WAR, and then put it into the servlet container, with the rest being taken care of. But when considering other Java or C++-based applications (.exe or .class), how do you prepare the package? What should be considered when preparing the package and deploying the application? As I understand it, there may be entries to be made in the Windows registry, or libraries to be put in the /usr/bin directory, and so on. During the development phase, execution and testing are done in a more informal manner, like writing shell scripts and so on, but the deployment scenario is a more formal thing. As I said, I only have the idea of deploying web applications; I am not very knowledgeable in the other areas.

    Are there any conventions for this, the way there are conventions for documentation (javadoc, doxygen, man pages)? Application packaging is not the sort of thing that gets much discussion in books or online materials, which is very disappointing.

    Read the article
