Search Results

Search found 1345 results on 54 pages for 'ant contrib'.

Page 47 of 54

  • Lucene stop words not removed during searching

    - by iamrohitbanga
    I have created a Lucene index with the following analyzer. public class DocSpecAnalyzer extends Analyzer { private static CharArraySet stopSet;// = new HashSet<String>(Arrays.asList());//STOP_WORDS_SET; static { stopSet = new CharArraySet(FDConstants.stopwords, true); // uncommenting this displays all the stop words // for (String s: FDConstants.stopwords) { // System.out.println(s); // } } /** * Specifies whether deprecated acronyms should be replaced with HOST type. * See {@linkplain https://issues.apache.org/jira/browse/LUCENE-1068} */ private final boolean enableStopPositionIncrements; private final Version matchVersion; public DocSpecAnalyzer(Version matchVersion) { this.matchVersion = matchVersion; enableStopPositionIncrements = StopFilter.getEnablePositionIncrementsVersionDefault(matchVersion); } public TokenStream tokenStream(String fieldName, Reader reader) { StandardTokenizer tokenStream = new StandardTokenizer(matchVersion, reader); tokenStream.setMaxTokenLength(DEFAULT_MAX_TOKEN_LENGTH); TokenStream result = new StandardFilter(tokenStream); result = new LowerCaseFilter(result); result = new StopFilter(enableStopPositionIncrements, result, stopSet); result = new PorterStemFilter(result); return result; } /** Default maximum allowed token length */ public static final int DEFAULT_MAX_TOKEN_LENGTH = 255; } Now when I search for documents with a query containing stop words, I get hits for the stop words as well. While posting this problem, I found the cause: http://lucene.apache.org/java/2_9_2/api/contrib-misc/org/apache/lucene/queryParser/analyzing/AnalyzingQueryParser.html does not handle stop words. Is there a substitute? Update: I forgot to mention that I need to do a fuzzy search; that is why I am using an AnalyzingQueryParser.

    Read the article

  • Sysdeo Tomcat DevLoader - Hot deploy of java class causes whole application to restart

    - by Gala101
    Hi, I am using the sysdeo tomcat plugin 3.2.1 with eclipse 3.5.1 (Galileo) and tomcat 5.5.23 on windows XP. I can get the tomcat plugin working in eclipse, and have extracted devloader.zip into [tomcat]\server\classes. I have also updated the context and now it has this entry: <Context path="/myapp1" reloadable="true" docBase="F:\Work\eclipse_workspace\myapp1" workDir="F:\Work\eclipse_workspace\myapp1\work" > <Logger className="org.apache.catalina.logger.SystemOutLogger" verbosity="4" timestamp="true"/> <Loader className="org.apache.catalina.loader.DevLoader" reloadable="true" debug="1" useSystemClassLoaderAsParent="false" /> </Context> I have activated the devloader (in Project Properties > Tomcat > DevLoader Classpath) and have 'checked' all my classes and jars; I haven't 'checked' commons-logging.jar, jsp-api.jar, or servlet-api.jar. So on launching tomcat via the plugin, I can get it running with the devloader as shown in the eclipse console view: [DevLoader] Starting DevLoader [DevLoader] projectdir=F:\Work\eclipse_workspace\myapp1 [DevLoader] added file:/F:/Work/eclipse_workspace/myapp1/WEB-INF/classes/ [DevLoader] added file:/F:/Work/eclipse_workspace/myapp1/WEB-INF/lib/activation.jar However, if I even add a single System.out.println to any java file and save it, the whole application gets reloaded (takes ~80 sec), which is as good as stopping/starting tomcat itself. I've tried adding -Xdebug to JAVA_OPTS in catalina.bat but no luck :( Can you please guide me on where I may be going wrong? Please note that I can 'redeploy' the whole application on tomcat, but that's not what I need; I am looking to be able to make small changes in java classes 'on-the-fly' while coding/debugging, without having to wait for a complete app restart. Another annoyance is that restarting tomcat/the application causes the session/debug progress to be lost. Can you please guide me on how to go about it? PS: I am not using any ant/maven scripts explicitly, just relying on eclipse to do the build for me (which it does).

    Read the article

  • Limit foreign key choices in select in an inline form in admin

    - by mightyhal
    Edited :-) Hopefully a bit clearer now. The logic of the model is: A Building has many Rooms; A Room may be inside another Room (a closet, for instance--ForeignKey on 'self'); A Room can only be inside another Room in the same building (this is the tricky part). Here's the code I have: #spaces/models.py from django.db import models class Building(models.Model): name=models.CharField(max_length=32) def __unicode__(self): return self.name class Room(models.Model): number=models.CharField(max_length=8) building=models.ForeignKey(Building) inside_room=models.ForeignKey('self',blank=True,null=True) def __unicode__(self): return self.number and: #spaces/admin.py from ex.spaces.models import Building, Room from django.contrib import admin class RoomAdmin(admin.ModelAdmin): pass class RoomInline(admin.TabularInline): model = Room extra = 2 class BuildingAdmin(admin.ModelAdmin): inlines=[RoomInline] admin.site.register(Building, BuildingAdmin) admin.site.register(Room) The inline will display only rooms in the current building (which is what I want). The problem, though, is that for the inside_room drop-down, it displays all of the rooms in the Rooms table (including those in other buildings). In the inline of rooms, I need to limit the inside_room choices to only rooms which are in the current building being displayed by the main form. I can't figure out a way to do it with a limit_choices_to in the model, nor can I figure out how exactly to override the admin's inline formset properly (I feel like I should somehow create a custom inline form, pass the building_id of the main form to the custom inline, and then limit the queryset for the field's choices based on that--but I just can't wrap my head around how to do it). Maybe this is too complex for the admin site, but it seems like something that would be generally useful... Thanks again for your help!
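
    A sketch of one commonly suggested way to do this (untested against the poster's exact Django version, model names taken from the question): capture the parent Building in the inline's get_formset() hook and filter the inside_room queryset in formfield_for_foreignkey(). Both hooks are standard admin APIs; stashing the parent object on the inline instance is a simplification and is not thread-safe.

```python
# Hypothetical sketch: limit the inside_room drop-down in the Room inline
# to rooms belonging to the Building currently being edited.
from django.contrib import admin
from ex.spaces.models import Building, Room

class RoomInline(admin.TabularInline):
    model = Room
    extra = 2
    _parent_building = None  # set in get_formset(); simplification, not thread-safe

    def get_formset(self, request, obj=None, **kwargs):
        # obj is the Building being edited (None when adding a new Building).
        self._parent_building = obj
        return super(RoomInline, self).get_formset(request, obj, **kwargs)

    def formfield_for_foreignkey(self, db_field, request, **kwargs):
        if db_field.name == "inside_room" and self._parent_building is not None:
            kwargs["queryset"] = Room.objects.filter(building=self._parent_building)
        return super(RoomInline, self).formfield_for_foreignkey(db_field, request, **kwargs)

class BuildingAdmin(admin.ModelAdmin):
    inlines = [RoomInline]

admin.site.register(Building, BuildingAdmin)
```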

    Read the article

  • Selectively intercepting methods using autofac and dynamicproxy2

    - by Mark Simpson
    I'm currently doing a bit of experimenting using Autofac-1.4.5.676, Autofac contrib and Castle DynamicProxy2. The goal is to create a coarse-grained profiler that can intercept calls to specific methods of a particular interface. The problem: I have everything working perfectly apart from the selective part. I gather that I need to marry up my interceptor with an IProxyGenerationHook implementation, but I can't figure out how to do this. My code looks something like this: The interface that is to be intercepted & profiled (note that I only care about profiling the Update() method) public interface ISomeSystemToMonitor { void Update(); // this is the one I want to profile void SomeOtherMethodWeDontCareAboutProfiling(); } Now, when I register my systems with the container, I do the following: // Register interceptor gubbins builder.RegisterModule(new FlexibleInterceptionModule()); builder.Register<PerformanceInterceptor>(); // Register systems (just one in this example) builder.Register<AudioSystem>() .As<ISomeSystemToMonitor>() .InterceptedBy(typeof(PerformanceInterceptor)); All ISomeSystemToMonitor instances pulled out of the container are intercepted and profiled as desired, other than the fact that it will intercept all of its methods, not just the Update method. Now, how can I extend this to exclude all methods other than Update()? As I said, I don't understand how I'm meant to say "for the PerformanceInterceptor, use this implementation of IProxyGenerationHook". All help appreciated, cheers! Also, please note that I can't upgrade to Autofac 2.x right now; I'm stuck with 1.x.

    Read the article

  • PInvokeStackImbalance was detected when manually running Xna Content pipeline

    - by Miau
    So I am running this code static readonly string[] PipelineAssemblies = { "Microsoft.Xna.Framework.Content.Pipeline.FBXImporter" + XnaVersion, "Microsoft.Xna.Framework.Content.Pipeline.XImporter" + XnaVersion, "Microsoft.Xna.Framework.Content.Pipeline.TextureImporter" + XnaVersion, "Microsoft.Xna.Framework.Content.Pipeline.EffectImporter" + XnaVersion, "Microsoft.Xna.Framework.Content.Pipeline.XImporter" + XnaVersion, "Microsoft.Xna.Framework.Content.Pipeline.AudioImporters" + XnaVersion, "Microsoft.Xna.Framework.Content.Pipeline.VideoImporters" + XnaVersion, }; more code in between ..... // Register any custom importers or processors. foreach (string pipelineAssembly in PipelineAssemblies) { _buildProject.AddItem("Reference", pipelineAssembly); } more code in between ..... var execute = Task.Factory.StartNew(() => submission.ExecuteAsync(null, null), cancellationTokenSource.Token); var endBuild = execute.ContinueWith(ant => BuildManager.DefaultBuildManager.EndBuild()); endBuild.Wait(); Basically I am trying to build the content project with code... this works well if you remove XImporter, AudioImporters and VideoImporters, but with those I get the following error: PInvokeStackImbalance was detected Message: A call to PInvoke function 'Microsoft.Xna.Framework.Content.Pipeline!Microsoft.Xna.Framework.Content.Pipeline.UnsafeNativeMethods+AudioHelper::GetFormatSize' has unbalanced the stack. This is likely because the managed PInvoke signature does not match the unmanaged target signature. Check that the calling convention and parameters of the PInvoke signature match the target unmanaged signature. Things I've tried: turning unsafe code on; adding a lot of logging (the dll is found, in case you are wondering); different wav and mp3 files (just in case). Will update...

    Read the article

  • Checkstyle: cannot summarise issues per author?

    - by David Michel
    Hi all, I'm trying to use checkstyle for a java project but I can't seem to get it working properly: while it apparently runs smoothly, the html report doesn't give any info per author as it should, i.e. the authors table is empty. The thing is, I don't know how checkstyle identifies an author. Does it look at the javadoc tag @author? At the class level or at the method level? The ant task I used is below: <taskdef resource="checkstyletask.properties" classpath="${libs.dir}/checkstyle-all-5.0.jar"/> <target name="checkstyle" description="Generates a report of code convention violations."> <mkdir dir="${checkstyle.dir}"/> <checkstyle config="${util.dir}/checkstyle/sun_checks.xml" failureProperty="checkstyle.failure" failOnViolation="false"> <formatter type="xml" tofile="${checkstyle.dir}/checkstyle_report.xml"/> <fileset dir="${src.dir}" includes="**/*.java"/> </checkstyle> <xslt in="${checkstyle.dir}/checkstyle_report.xml" out="${checkstyle.dir}/checkstyle_report.html" style="${util.dir}/checkstyle/checkstyle-author.xsl"/> </target> Many thanks for your help, David

    Read the article

  • How to figure out what error my Java Eclipse project has?

    - by Greg Mattes
    I've created a Java project from existing source with an Ant build script in Eclipse. I cannot run my project because Eclipse tells me that there is at least one error in it. Now, I know that the project runs fine on the command line, so I suspect an Eclipse configuration error. As far as I can tell, the only feedback that I have from Eclipse is a little red X on my project in the Package Explorer window, and a dialog window when I try to run the project says there are errors in the project. This is all wonderful, but what is the error? Is there a "show me the next error" button somewhere? In the past, on other Eclipse projects, I've noticed other little red X's on folders containing source files with errors; the little red X's appear on the source files as well. I scanned (manually) through all of the source files and I haven't found any other red X's (again, where is the "next error" button?). If I select the "Proceed" button I am greeted with a java.lang.NoClassDefFoundError for my main class, which makes me suspect a classpath issue. I've checked the classpath, and I'm fairly certain that it's correct. Is there a way to see the exact jvm command line that Eclipse is invoking? I realize that it might be invoking the JVM programmatically, and not on a "real" command line. In any case, is there a way, other than the run configuration dialog, to see what is actually happening when I hit the "Proceed" button?

    Read the article

  • Steps in creating a web service using Axis2 - The client code

    - by zengr
    I am trying to create a web service; my tools of the trade are: ** Axis2, Eclipse, Tomcat, Ant ** I need to create a web service from code, i.e. write a basic java class which will have the methods to be declared in the WSDL, then use java2WSDL.sh to create my WSDL. So, is this approach correct: Write my Java class with actual business logic package packageNamel; public class Hello{ public void World(String name) { SOP("Hello" + name); } } Now, when I pass this Hello.java to java2WSDL.sh, this will give me the WSDL. Finally, I will write the services.xml file, and create the Hello.aar with the following dir structure: Hello.aar packageName Hello.class META-INF services.xml MANIFEST.MF Hello.WSDL Now, I assume my service will be deployed when I put the aar in tomcat1/webapps/axis2/WEB-INF/services. But here comes my problem: HOW DO I ACCESS THE METHOD World(String name)?! I.e., I am clueless about the client code! Please enlighten me on making a very basic web service and calling the method. The above described 3 steps might be wrong. It's a community wiki, feel free to edit. Thanks

    Read the article

  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta-testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS. According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in the install.rdf. There is the McCoy tool to do all of this, but it is an interactive GUI tool and I'd like to automate the extension packaging using an Ant script (as this is part of a much bigger process). I can't find a more precise description of what's happening to sign the update.rdf manifest than below, and the McCoy source is an awful lot of javascript. The doc says: The add-on author creates a public/private RSA cryptographic key pair. The public part of the key is DER encoded and then base 64 encoded and added to the add-on's install.rdf as an updateKey entry. (...) Roughly speaking the update information is converted to a string, then hashed using a sha512 hashing algorithm and this hash is signed using the private key. The resultant data is DER encoded then base 64 encoded for inclusion in the update.rdf as a signature entry. I don't know much about DER encoding, but it seems like it needs some parameters. So would anyone know either: the full algorithm to sign the update.rdf and install.rdf using a predefined keypair; a scriptable alternative to McCoy, or whether a command-line tool like asn1coding will suffice; or a good/simple developer tutorial on DER encoding.

    Read the article

  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have established a basic hadoop master/slave cluster setup and am able to run mapreduce programs (including python) on the cluster. Now I am trying to run a python code which accesses a C binary, and so I am using the subprocess module. I am able to use hadoop streaming for a normal python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised and used for the packaging, but the code still does not run. . . packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null JarBuilder.addNamedStream hello . . 12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1 12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local] 12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057 12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run: 12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057 12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057 12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0% 12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100% 12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run: 12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057 12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057 12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful! 12/03/07 22:32:05 INFO streaming.StreamJob: killJob... Streaming Job Failed! The command I am trying is: hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose where hello is the C executable. It is a simple helloworld program which I am using to check the basic functioning. My Python code is: #!/usr/bin/env python import subprocess subprocess.call(["./hello"]) Any help with how to get the executable to run with Python in hadoop streaming, or help with debugging this, will get me forward in this. Thanks, Ganesh
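
    A defensive mapper sketch that may help narrow this down. Assumptions (not confirmed by the logs above): the binary shipped with -file lands in the task's working directory under the name hello, and a missing file or a lost executable bit is the cause of the failure.

```python
#!/usr/bin/env python
# Hypothetical mapper sketch: check that the shipped binary is present,
# restore its executable bit, and surface its exit code in the task logs
# instead of letting the streaming job die silently.
import os
import stat
import subprocess
import sys

BINARY = "./hello"  # name under which -file /tmp/hello/hello is shipped

def main():
    if not os.path.exists(BINARY):
        sys.stderr.write("hello binary not found in %s\n" % os.getcwd())
        sys.exit(1)
    # The executable bit can be lost when the file is distributed to the nodes.
    os.chmod(BINARY, os.stat(BINARY).st_mode | stat.S_IEXEC)
    rc = subprocess.call([BINARY])
    if rc != 0:
        sys.stderr.write("hello exited with code %d\n" % rc)
        sys.exit(rc)
    # Normal streaming mapper behaviour: pass input lines through.
    for line in sys.stdin:
        sys.stdout.write(line)

if __name__ == "__main__":
    main()
```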

    Read the article

  • Asynchronous background processes in Python?

    - by Geuis
    I have been using this as a reference, but am not able to accomplish exactly what I need: http://stackoverflow.com/questions/89228/how-to-call-external-command-in-python/92395#92395 I also was reading this: http://www.python.org/dev/peps/pep-3145/ For our project, we have 5 svn checkouts that need to update before we can deploy our application. In my dev environment, where speedy deployments are a bit more important for productivity than in a production deployment, I have been working on speeding up the process. I have a bash script that has been working decently but has some limitations. I fire up multiple 'svn updates' with the following bash command: (svn update /repo1) & (svn update /repo2) & (svn update /repo3) & These all run in parallel and it works pretty well. I also use this pattern in the rest of the build script for firing off each ant build, then moving the wars to Tomcat. However, I have no control over stopping deployment if one of the updates or a build fails. I'm re-writing my bash script with Python so I have more control over branches and the deployment process. I am using subprocess.call() to fire off the 'svn update /repo' commands, but each one is acting sequentially. I try '(svn update /repo) &' and those all fire off, but the result code returns immediately, so I have no way to determine if a particular command fails or not in the asynchronous mode. import subprocess subprocess.call( 'svn update /repo1', shell=True ) subprocess.call( 'svn update /repo2', shell=True ) subprocess.call( 'svn update /repo3', shell=True ) I'd love to find a way to have Python fire off each Unix command, and if any of the calls fails at any time the entire script stops.
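
    A minimal sketch of the parallel-but-checked pattern (repository paths are placeholders, not the poster's layout): start every update with subprocess.Popen so they run concurrently, then wait on each one and abort if any exit status is non-zero.

```python
# Sketch: fire off the svn updates in parallel, then gate the rest of the
# deployment on all of them succeeding.
import subprocess
import sys

repos = ["/repo1", "/repo2", "/repo3"]  # placeholder paths

# Popen returns immediately, so all updates run concurrently.
procs = [(repo, subprocess.Popen(["svn", "update", repo])) for repo in repos]

# wait() blocks until each process finishes and gives us its return code.
failed = False
for repo, proc in procs:
    if proc.wait() != 0:
        sys.stderr.write("svn update failed for %s\n" % repo)
        failed = True

if failed:
    sys.exit(1)  # stop here instead of continuing with the ant builds / deploy
```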

    Read the article

  • How to migrate project from RCS to git? (SOLVED)

    - by Norman Ramsey
    I have a 20-year-old project that I would like to migrate from RCS to git, without losing the history. All web pages suggest that the One True Path is through CVS. But after an hour of Googling and trying different scripts, I have yet to find anything that successfully converts my RCS project tree to CVS. I'm hoping the good people at Stackoverflow will know what actually works, as opposed to what is claimed to work and doesn't. (I searched Stackoverflow using both the native SO search and a Google search, but if there's a helpful answer in the database, I missed it.) UPDATE: The rcs-fast-export tool at http://git.oblomov.eu/rcs-fast-export was repaired on 14 April 2009, and this version seems to work for me. This tool converts straight to git with no intermediate CVS. Thanks Giuseppe and Jakub!!! Things that did not work that I still remember: the rcs-to-cvs script that ships in the contrib directory of the CVS sources; the rcs-fast-export tool at http://git.oblomov.eu/rcs-fast-export in versions before 13 April 2010; and the rcs2cvs script found in a document called "CVS-RCS-HOW-TO Document for Linux".

    Read the article

  • asp.net MVC HandleError

    - by boris callens
    How do I go about the [HandleError] filter in asp.net MVC Preview 5? I set the customErrors in my web.config file <customErrors mode="On" defaultRedirect="Error.aspx"> <error statusCode="403" redirect="NoAccess.htm"/> <error statusCode="404" redirect="FileNotFound.htm"/> </customErrors> and put [HandleError] above my Controller class like this: [HandleError] public class DSWebsiteController: Controller{ [snip] public ActionResult CrashTest() { throw new Exception("Oh Noes!"); } } Then I let my controllers inherit from this class and call CrashTest() on them. Visual Studio halts at the error and after pressing f5 to continue, I get rerouted to Error.aspx?aspxerrorpath=/sxi.mvc/CrashTest (where sxi is the name of the used controller). Of course the path cannot be found and I get a "Server Error in '/' Application." 404. This site was ported from preview 3 to 5. Everything runs (wasn't that much work to port) except the error handling. When I create a completely new project the error handling seems to work. Ideas? --Note-- Since this question has over 3K views now, I thought it would be beneficial to put in what I'm currently (asp.net mvc 1.0) using. In the mvc contrib project there is a brilliant attribute called "RescueAttribute". You should probably check it out too ;)

    Read the article

  • Should Development / Testing / QA / Staging environments be similar?

    - by Walter White
    Hi all, After much time and effort, we're finally using maven to manage our application lifecycle for development. We still unfortunately use ANT to build an EAR before deploying to Test / QA / Staging. My question is, while we made that leap forward, developers are still free to do as they please for testing their code. One issue that we have is half our team is using Tomcat to test on and the other half is using Jetty. I prefer Jetty slightly over Tomcat, but regardless we are using WAS for all the other environments. My question is, should we develop on the same application server we're deploying to? We've had numerous bugs come up from these differences in environments. Tomcat, Jetty, and WAS are different under the hood. My opinion is that we all should develop on what we're deploying to production with, so we don't have the problem of "well, it worked fine on my machine." While I prefer Jetty, I would just as soon we all work on the same environment, even if it means deploying to WAS, which is slow and cumbersome. What are your team dynamics like? Our lead developers stepped down from the team and development has been a free-for-all since then. Walter

    Read the article

  • UnicodeDecodeError on attempt to save file through django default filebased backend

    - by Ivan Kuznetsov
    When I attempt to add a file with Russian symbols in its name to the model instance through the default instance.file_field.save method, I get a UnicodeDecodeError (ascii decoding error, not in range(128)) from the storage backend (the stack trace ends at os.path.exists). If I write this file through the default python file open/write, all goes fine. All filenames are in utf-8. I get this error only on the testing Gentoo box; on my Ubuntu workstation all works fine. class Article(models.Model): file = models.FileField(null=True, blank=True, max_length = 300, upload_to='articles_files/%Y/%m/%d/') Traceback: File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response 100. response = callback(request, *callback_args, **callback_kwargs) File "/usr/lib/python2.6/site-packages/django/contrib/auth/decorators.py" in _wrapped_view 24. return view_func(request, *args, **kwargs) File "/var/www/localhost/help/wiki/views.py" in edit_article 338. new_article.file.save(fp, fi, save=True) File "/usr/lib/python2.6/site-packages/django/db/models/fields/files.py" in save 92. self.name = self.storage.save(name, content) File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in save 47. name = self.get_available_name(name) File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in get_available_name 73. while self.exists(name): File "/usr/lib/python2.6/site-packages/django/core/files/storage.py" in exists 196. return os.path.exists(self.path(name)) File "/usr/lib/python2.6/genericpath.py" in exists 18. st = os.stat(path) Exception Type: UnicodeEncodeError at /edit/ Exception Value: ('ascii', u'/var/www/localhost/help/i/articles_files/2010/03/17/\u041f\u0440\u0438\u0432\u0435\u0442', 52, 58, 'ordinal not in range(128)')
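
    The traceback suggests os.stat() is encoding a unicode path with the ascii codec, which typically happens when the process running Django has no UTF-8 locale, so Python's filesystem encoding falls back to ASCII; the Gentoo/Ubuntu difference fits that. A small diagnostic sketch (the locale value mentioned is only an example):

```python
# Diagnostic sketch: if this prints 'ascii' on the Gentoo box, the usual fix
# is to export a UTF-8 locale (e.g. LANG=en_US.UTF-8) in the environment of
# the process that runs Django (mod_wsgi/mod_python, runserver, etc.) before
# it starts, rather than changing the Django code.
import sys
import locale

print("filesystem encoding: %s" % sys.getfilesystemencoding())
print("preferred encoding:  %s" % locale.getpreferredencoding())
```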

    Read the article

  • Problem calling web service from within JBOSS EJB Service

    - by Rob Goodwin
    I have a simple web service sitting on our internal network. I used SOAPUI to do a bit of testing, generated the service classes from the WSDL, and wrote some java code to access the service. All went as expected: I was able to create the service proxy classes and make calls. Pretty simple stuff. The only speed bump was getting java to trust the certificate from the machine providing the web service. That was not a technical problem, but rather my lack of experience with SSL based web services. Now onto my problem. I coded up a simple EJB service and deployed it into JBoss Application Server 4.3, and now get the following error in the code that previously worked. 12:21:50,235 WARN [ServiceDelegateImpl] Cannot access wsdlURL: https://WS-Test/TestService/v2/TestService?wsdl I can access the wsdl file from a web browser running on the same machine as the application server using the URL in the error message. I am at a loss as to where to go from here. I turned on the debug logs in JBoss and got nothing more than what I showed above. I have done some searching on the net and found the same error in some questions, but those questions had no answers. The web service classes were generated with JAX-WS 2.2 using the wsimport ant task and placed in a jar that is included in the ejb package. JBoss is deployed on RHEL 5.4, whereas the standalone testing was done on Windows XP.

    Read the article

  • unable to place breakpoints in eclipse

    - by anjanb
    I am using eclipse europa (3.5) on windows vista home premium 64-bit using JDK 1.6.0_18 (32 BIT). Normally, I am able to put breakpoints just fine; however, for a particular class which is NOT part of the project (this class is inside a .JAR file, and the .JAR file is part of the project), although I have attached a source directory to this .JAR file, I am unable to place a breakpoint in this class. If I double-click on the breakpoint pane (left border), I notice that a class breakpoint is placed. I was wondering if there was NO debug info; however, I found that this particular class was compiled using an ant/javac task with debug="true" and debuglevel="lines,vars,source". I even ran jad on this class to confirm that it indeed contained the debug info. So, why is eclipse preventing me from placing a breakpoint? EDIT: Just so everyone understands the context, this is a webapp running under tomcat 6.0. I am remote debugging the application from eclipse after having started tomcat outside. The application is working just fine. I am trying to understand the behavior of the above class, which I'm unable to do since eclipse is not letting me set a BP. P.S: I saw a few threads here talking about BPs not being hit, but in my case I am unable to place the BP! P.P.S: I tried JDK 1.6.0_16 before trying out 1.6.0_18. Thanks for any pointers.

    Read the article

  • filterSecurityInterceptor and metadatasource implementation spring-security

    - by Mike
    Hi! I created a class that implements the FilterInvocationSecurityMetadataSource interface. I implemented it like this: public List<ConfigAttribute> getAttributes(Object object) { FilterInvocation fi = (FilterInvocation) object; Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal(); Long companyId = ((ExtenededUser) principal).getCompany().getId(); String url = fi.getRequestUrl(); // String httpMethod = fi.getRequest().getMethod(); List<ConfigAttribute> attributes = new ArrayList<ConfigAttribute>(); FilterSecurityService service = (FilterSecurityService) SpringBeanFinder.findBean("filterSecurityService"); Collection<Role> roles = service.getRoles(companyId); for (Role role : roles) { for (View view : role.getViews()) { if (view.getUrl().equalsIgnoreCase(url)) attributes.add(new SecurityConfig(role.getName() + "_" + role.getCompany().getName())); } } return attributes; } When I debug my application I see that it reaches this class, but it only reaches the getAllConfigAttributes method, which is empty as I said, and returns null. After that it prints this warning: Could not validate configuration attributes as the SecurityMetadataSource did not return any attributes from getAllConfigAttributes(). My applicationContext-security is like this: <beans:bean id="filterChainProxy" class="org.springframework.security.web.FilterChainProxy"> <filter-chain-map path-type="ant"> <filter-chain filters="sif,filterSecurityInterceptor" pattern="/**" /> </filter-chain-map> </beans:bean> <beans:bean id="filterSecurityInterceptor" class="org.springframework.security.web.access.intercept.FilterSecurityInterceptor"> <beans:property name="authenticationManager" ref="authenticationManager" /> <beans:property name="accessDecisionManager" ref="accessDecisionManager" /> <beans:property name="securityMetadataSource" ref="filterSecurityMetadataSource" /> </beans:bean> <beans:bean id="filterSecurityMetadataSource" class="com.mycompany.filter.FilterSecurityMetadataSource"> </beans:bean> What could be the problem?

    Read the article

  • Usage of static analysis tools - with Clear Case/Quest

    - by boyd4715
    We are in the process of defining our software development process and wanted to get some feedback from the group about this topic. Our team is spread out - US, Canada and India - and I would like to put into place some simple standard rules that all teams will apply to their code. We make use of Clear Case/Quest and RAD. I have been looking at PMD, CPP, checkstyle and FindBugs as a start. My thought is to just put these into ANT and have the developers run them manually. I realize that doing this requires some trust that each developer will actually run them. The other thought is to add some builders into the IDE which would run a subset of the rules (keeping the build process light) and then add another set (heavier) when they check in the code. Another idea is to make use of something like CruiseControl and have it set up to run these static analysis tools along with the unit tests whenever Clear Case/Quest is idle. I'm wondering if others have done this, whether it was successful, and whether you can provide lessons learned.

    Read the article

  • Is it appropriate to use django signals within the same app

    - by Alex Lebedev
    Trying to add email notification to my app in the cleanest way possible. When certain fields of a model change, the app should send a notification to a user. Here's my old solution: from django.contrib.auth.models import User class MyModel(models.Model): user = models.ForeignKey(User) field_a = models.CharField() field_b = models.CharField() def save(self, *args, **kwargs): old = self.__class__.objects.get(pk=self.pk) if self.pk else None super(MyModel, self).save(*args, **kwargs) if old and old.field_b != self.field_b: self.notify("b-changed") # Several more events here # ... def notify(self, event): subj, text = self._prepare_notification(event) send_mail(subj, text, settings.DEFAULT_FROM_EMAIL, [self.user.email], fail_silently=True) This worked fine while I had one or two notification types, but after that it just felt wrong to have so much code in my save() method. So, I changed the code to be signal-based: from django.db.models import signals def remember_old(sender, instance, **kwargs): """pre_save handler to save a clean copy of the original record into the `old` attribute """ instance.old = None if instance.pk: try: instance.old = sender.objects.get(pk=instance.pk) except ObjectDoesNotExist: pass def on_mymodel_save(sender, instance, created, **kwargs): old = instance.old if old and old.field_b != instance.field_b: instance.notify("b-changed") # Several more events here # ... signals.pre_save.connect(remember_old, sender=MyModel, dispatch_uid="mymodel-remember-old") signals.post_save.connect(on_mymodel_save, sender=MyModel, dispatch_uid="mymodel-on-save") The benefit is that I can separate event handlers into a different module, reducing the size of models.py, and I can enable/disable them individually. The downside is that this solution is more code, and the signal handlers are separated from the model itself, so an unknowing reader can miss them altogether. So, colleagues, do you think it's worth it?
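
    For the "several more events" case, one variant worth sketching is a single pair of handlers driven by a mapping of watched fields to event names, so adding a notification is one dictionary entry instead of another if-block. Field names, event names, and the MyModel import path are illustrative only; notify(event) is assumed to be the method from the question.

```python
# Sketch: generic field-change notifications via signals.
from django.db.models import signals
from myapp.models import MyModel  # hypothetical import path

WATCHED_FIELDS = {"field_a": "a-changed", "field_b": "b-changed"}

def remember_watched(sender, instance, **kwargs):
    # pre_save: remember the old values of every watched field.
    instance._old_values = {}
    if instance.pk:
        try:
            old = sender.objects.get(pk=instance.pk)
            instance._old_values = dict(
                (name, getattr(old, name)) for name in WATCHED_FIELDS)
        except sender.DoesNotExist:
            pass

def notify_changes(sender, instance, created, **kwargs):
    # post_save: emit one notification per watched field that changed.
    if created:
        return
    for name, event in WATCHED_FIELDS.items():
        if instance._old_values.get(name) != getattr(instance, name):
            instance.notify(event)

signals.pre_save.connect(remember_watched, sender=MyModel,
                         dispatch_uid="mymodel-remember-watched")
signals.post_save.connect(notify_changes, sender=MyModel,
                          dispatch_uid="mymodel-notify-changes")
```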

    Read the article

  • How to rename an existing Grails application

    - by Johan Pelgrim
    Hi there, Does anybody know how to (easily) "rename" an existing grails application? I'm running into this because my PaaS provider does not allow me to delete a subscription... So I want to deploy my application under a different name. Of course, I can do this manually, but I think it might be a useful 'top-level' script (i.e. "grails rename-app newappname") Manual hints: When I do a "grails create-app myappname" I can see the myappname exists in the following files (and filenames)... Of course this is done by the create-app script, which replaces @...@ tokens in the template. I guess once they are replaced, it's not trivial to do a rename. ./.project: <name>myappname</name> ./application.properties:app.name=myappname ./build.xml:<project xmlns:ivy="antlib:org.apache.ivy.ant" name="myappname" default="test"> ./ivy.xml: <info organisation="org.example" module="myappname"/> ./myappname-test.launch:<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="myappname"/> ./myappname.launch:<listEntry value="/myappname"/> ./myappname.launch:<listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry containerPath="org.eclipse.jdt.launching.JRE_CONTAINER" javaProject="myappname" path="1" type="4"/> "/> ./myappname.launch:<stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="myappname"/> ./myappname.launch:<stringAttribute key="org.eclipse.jdt.launching.VM_ARGUMENTS" value="-Dbase.dir="${project_loc:myappname}" -Dserver.port=8080 -Dgrails.env=development"/> ./myappname.tmproj: <string>myappname.launch</string> And of course... the top-level directory name is "myappname" Any hints, or information about ongoing initiatives in this area are welcome Greetz, Johan

    Read the article

  • how to handle multiple profiles per user?

    - by Scott Willman
    I'm doing something that doesn't feel very efficient. From my code below, you can probably see that I'm trying to allow for multiple profiles of different types attached to my custom user object (Person). One of those profiles will be considered a default and should have an accessor from the Person class. Can this be done better? from django.db import models from django.contrib.auth.models import User, UserManager class Person(User): public_name = models.CharField(max_length=24, default="Mr. T") objects = UserManager() def save(self): self.set_password(self.password) super(Person, self).save() def _getDefaultProfile(self): def_teacher = self.teacher_set.filter(is_default=True) if def_teacher: return def_teacher[0] def_student = self.student_set.filter(is_default=True) if def_student: return def_student[0] def_parent = self.parent_set.filter(is_default=True) if def_parent: return def_parent[0] return False profile = property(_getDefaultProfile) def _getProfiles(self): # Inefficient use of QuerySet here. Tolerated because the QuerySets should be very small. profiles = [] if self.teacher_set.count(): profiles.append(list(self.teacher_set.all())) if self.student_set.count(): profiles.append(list(self.student_set.all())) if self.parent_set.count(): profiles.append(list(self.parent_set.all())) return profiles profiles = property(_getProfiles) class BaseProfile(models.Model): person = models.ForeignKey(Person) is_default = models.BooleanField(default=False) class Meta: abstract = True class Teacher(BaseProfile): user_type = models.CharField(max_length=7, default="teacher") class Student(BaseProfile): user_type = models.CharField(max_length=7, default="student") class Parent(BaseProfile): user_type = models.CharField(max_length=7, default="parent")
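
    One possible tightening, sketched against the models above (untested): build the profiles list with itertools.chain instead of the per-queryset count() checks, and pick the default as the first related object flagged is_default=True. Only the two accessors are shown; the rest of Person is assumed to stay as in the question.

```python
# Sketch: slimmer profile accessors for the Person model from the question.
from itertools import chain

from django.contrib.auth.models import User

class Person(User):
    # ... fields, objects = UserManager() and save() exactly as in the question ...

    @property
    def profiles(self):
        # One flat list of all related profile objects, no count() round-trips.
        return list(chain(self.teacher_set.all(),
                          self.student_set.all(),
                          self.parent_set.all()))

    @property
    def profile(self):
        # First profile flagged as default, or None if there isn't one.
        for candidate in chain(self.teacher_set.filter(is_default=True),
                               self.student_set.filter(is_default=True),
                               self.parent_set.filter(is_default=True)):
            return candidate
        return None
```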

    Read the article

  • _Painless_ integration of Eclipse with Vim?

    - by Adnan
    Hi, Has anyone managed to get Vim integrated into Eclipse painlessly? I just want to use Vim for the editor while retaining the general Eclipse interface. I have tried using the Eclim plugin but the editor seemed to crash more often than work (the site said that the editor replacement functionality is still beta). On the flip side, is there any IDE which matches Eclipse's functionality -- mainly the integration with SVN, ant, etc. -- and is also able to use Vim? I mostly use eclipse for SAS SCL, Java and Javascript programming and find the eclipse editor too "mouse-y". I'd also like, in a perfect world, to use vimdiff as a diff viewer for SVN (we use TortoiseSVN) while checking for diffs or conflicts during merges etc. I admit I haven't spent a lot of time trying to get these things to work. I feel guilty about spending too much time on potential wild-goose-chases while my other team members are working away at their code, perfectly content with all that eclipse has to offer. Edit: Just found this while desperately browsing around: Vim plugin.. Any experience using this? It's Saturday afternoon in Kiwi land, so I'll try it later and update if no one has an opinion. From the claims on the site, it sounds perfect.

    Read the article

  • Java and .net interoperability

    - by dineshrekula
    I have a C# program through which I am opening a cmd window as a process. In this command window I am running a batch file. I am redirecting the output of the batch file's commands to a text file. When I run my application everything seems to be OK, but a few times the application gives an error like "Can't access the file. It's being used by another application", and at the same time the cmd window does not get closed. If we close the cmd process through the Task Manager, then it writes the content to the file and closes. Even though I closed the cmd process, the file handle is still not released, so I am not able to run the application again; it always says it can't access the file. Only after restarting the system does it work. Here is my code: Process objProcess = new Process(); ProcessStartInfo objProInfo = new ProcessStartInfo(); objProInfo.WindowStyle = ProcessWindowStyle.Maximized; objProInfo.UseShellExecute = true; objProInfo.FileName = "Batch file path"; objProInfo.Arguments = "Some Arguments"; if (Directory.Exists(strOutputPath) == false) { Directory.CreateDirectory(strOutputPath); } objProInfo.CreateNoWindow = false; objProcess.StartInfo = objProInfo; objProcess.Start(); objProcess.WaitForExit(); test.bat: java classname argument > output.txt Here is my question: I am not able to trace where the problem is. How can we see which process is holding a handle on any given file? Are there any suggestions for Java and .net interoperability?

    Read the article

  • Caching sitemaps in django

    - by michuk
    I implemented a simple sitemap class using django's default sitemap app. As it was taking a long time to execute, I added manual caching: class ShortReviewsSitemap(Sitemap): changefreq = "hourly" priority = 0.7 def items(self): # try to retrieve from cache result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews") if result!=None: return result result = ShortReview.objects.all().order_by("-created_at") # store in cache set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result) return result def lastmod(self, obj): return obj.updated_at The problem is that memcached allows objects of at most 1MB. This one was bigger than 1MB, so storing it in the cache failed: >7 SERVER_ERROR object too large for cache The problem is that django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that. What do you think would be the best way to enable caching of sitemaps? - Hacking into the django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google advice? A random number? - Or maybe there is a way to allow memcached to store bigger objects? - Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job). All of those seem very low level and I'm wondering if an obvious solution exists...
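
    One workaround that keeps memcached items small, sketched below (URL wiring, key format and timeout are illustrative; the view and the 'p' page parameter are the ones django.contrib.sitemaps uses): cache the rendered XML of each sitemap page rather than the queryset, since one rendered page is far smaller than the pickled list of objects. Pairing this with a smaller per-sitemap limit, if your Django version exposes Sitemap.limit, keeps every cached item well under 1MB.

```python
# Sketch: wrap the stock sitemap view and cache the rendered XML per page.
from django.contrib.sitemaps.views import sitemap as sitemap_view
from django.core.cache import cache
from django.http import HttpResponse

def cached_sitemap(request, sitemaps, section=None):
    page = request.GET.get("p", 1)  # page parameter used by the sitemap view
    key = "sitemap:%s:%s" % (section or "all", page)
    xml = cache.get(key)
    if xml is None:
        # Render once, then cache only the XML string (small per page).
        xml = sitemap_view(request, sitemaps, section=section).content
        cache.set(key, xml, 60 * 60)
    return HttpResponse(xml, mimetype="application/xml")
```

    The URLconf would then point at cached_sitemap instead of django.contrib.sitemaps.views.sitemap, passing the same sitemaps dict.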

    Read the article
