Search Results

Search found 12588 results on 504 pages for 'memory allocation'.


  • Overwrite archetypes in Maven

    - by Random
    Hello again! I'm having some trouble using Maven for my archetypes and I will need to overwrite some. I launch an instruction that does an archetype:generate in an archetype already existing directory. Is there a parameter that let's me overwrite existing archetypes? I have search the maven definitve guide but it states that the only parameters accepted are: -DgroupId -DartifactId -Dversion -DpackageName -DarchetypeGroupId -DarchetypeArtifactId -DarchetypeVersion -DinteractiveMode I could just search the directory and delete the files, but this proccess is going to be done automatically (so no human involved, no brains involved) and I wouldn't like he machine deleting things around. Thanks for all! Edit: I almost forgot, here is some maven trace: [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'archetype'. [INFO] ------------------------------------------------------------------------ [INFO] Building Maven Default Project [INFO] task-segment: [archetype:generate] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] Preparing archetype:generate [INFO] No goals needed for project - skipping [INFO] Setting property: classpath.resource.loader.class => 'org.codehaus.plexus.velocity.ContextClassLoaderResourceLoader'. [INFO] Setting property: velocimacro.messages.on => 'false'. [INFO] Setting property: resource.loader => 'classpath'. [INFO] Setting property: resource.manager.logwhenfound => 'false'. [INFO] [archetype:generate {execution: default-cli}] [INFO] Generating project in Batch mode [INFO] Archetype defined by properties [INFO] ---------------------------------------------------------------------------- [INFO] Using following parameters for creating OldArchetype: archetype-foo-lib:1.0 [INFO] ---------------------------------------------------------------------------- [INFO] Parameter: groupId, Value: foo.tecnologia [INFO] Parameter: packageName, Value: foo.tecnologia [INFO] Parameter: basedir, Value: C:\temp\Desarrollo [INFO] Parameter: package, Value: foo.tecnologia [INFO] Parameter: version, Value: 1.0 [INFO] Parameter: artifactId, Value: Foo-Lib-Test [ERROR] Directory Foo-Lib-Test already exists - please run from a clean directory org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory at org.apache.maven.archetype.old.DefaultOldArchetype.createArchetype(DefaultOldArchetype.java:242) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.processOldArchetype(DefaultArchetypeGenerator.java:253) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:143) at org.apache.maven.archetype.generator.DefaultArchetypeGenerator.generateArchetype(DefaultArchetypeGenerator.java:286) at org.apache.maven.archetype.DefaultArchetype.generateProjectFromArchetype(DefaultArchetype.java:69) at org.apache.maven.archetype.mojos.CreateProjectFromArchetypeMojo.execute(CreateProjectFromArchetypeMojo.java:184) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at 
org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at com.foo.model.CSMavenCli.main(CSMavenCli.java:391) at com.foo.model.MavenAdmin.generateArchetype(MavenAdmin.java:399) at com.foo.model.ValidarPom.validarPom(ValidarPom.java:167) at com.foo.prueba.GenerarPOM.execute(GenerarPOM.java:93) at org.apache.struts.chain.commands.servlet.ExecuteAction.execute(ExecuteAction.java:58) at org.apache.struts.chain.commands.AbstractExecuteAction.execute(AbstractExecuteAction.java:67) at org.apache.struts.chain.commands.ActionCommandBase.execute(ActionCommandBase.java:51) at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191) at org.apache.commons.chain.generic.LookupCommand.execute(LookupCommand.java:305) at org.apache.commons.chain.impl.ChainBase.execute(ChainBase.java:191) at org.apache.struts.chain.ComposableRequestProcessor.process(ComposableRequestProcessor.java:283) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913) at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:462) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:269) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:873) at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665) at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528) at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689) at java.lang.Thread.run(Unknown Source) [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] : org.apache.maven.archetype.old.ArchetypeTemplateProcessingException: Directory Foo-Lib-Test already exists - please run from a clean directory Directory Foo-Lib-Test already exists - please run from a clean directory [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1 second [INFO] Finished at: Fri Apr 09 10:01:33 CEST 2010 [INFO] Final Memory: 15M/28M [INFO] 
------------------------------------------------------------------------
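
    The accepted parameters listed above do not include an overwrite switch, and the old archetype engine (DefaultOldArchetype in the trace) simply refuses to run when the target directory exists. Since the trace shows the generation being driven from Java code (MavenAdmin.generateArchetype), one hedged option is to have that automation remove only the single directory it is about to regenerate, so nothing else on disk can be touched. The class and method names below are illustrative assumptions, not part of the existing code.

        // Sketch: delete only <basedir>/<artifactId> before re-running archetype:generate.
        // Nothing above that directory is ever touched.
        import java.io.File;
        import java.io.IOException;

        public class ArchetypeWorkspace {

            public static void cleanProjectDir(File basedir, String artifactId) throws IOException {
                File projectDir = new File(basedir, artifactId);
                if (projectDir.exists()) {
                    deleteRecursively(projectDir);
                }
            }

            private static void deleteRecursively(File file) throws IOException {
                File[] children = file.listFiles();
                if (children != null) {
                    for (File child : children) {
                        deleteRecursively(child);
                    }
                }
                if (!file.delete()) {
                    throw new IOException("Could not delete " + file.getAbsolutePath());
                }
            }
        }

    Called with the same basedir (C:\temp\Desarrollo) and artifactId (Foo-Lib-Test) that are passed to the archetype, this keeps the "no human involved" flow from deleting anything it did not create.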

  • How to use pthread_atfork() and pthread_once() to reinitialize mutexes in child processes

    - by Blair Zajac
    We have a C++ shared library that uses ZeroC's Ice library for RPC and unless we shut down Ice's runtime, we've observed child processes hanging on random mutexes. The Ice runtime starts threads, has many internal mutexes and keeps open file descriptors to servers. Additionally, we have a few of mutexes of our own to protect our internal state. Our shared library is used by hundreds of internal applications so we don't have control over when the process calls fork(), so we need a way to safely shutdown Ice and lock our mutexes while the process forks. Reading the POSIX standard on pthread_atfork() on handling mutexes and internal state: Alternatively, some libraries might have been able to supply just a child routine that reinitializes the mutexes in the library and all associated states to some known value (for example, what it was when the image was originally executed). This approach is not possible, though, because implementations are allowed to fail *_init() and *_destroy() calls for mutexes and locks if the mutex or lock is still locked. In this case, the child routine is not able to reinitialize the mutexes and locks. On Linux, the this test C program returns EPERM from pthread_mutex_unlock() in the child pthread_atfork() handler. Linux requires adding _NP to the PTHREAD_MUTEX_ERRORCHECK macro for it to compile. This program is linked from this good thread. Given that it's technically not safe or legal to unlock or destroy a mutex in the child, I'm thinking it's better to have pointers to mutexes and then have the child make new pthread_mutex_t on the heap and leave the parent's mutexes alone, thereby having a small memory leak. The only issue is how to reinitialize the state of the library and I'm thinking of reseting a pthread_once_t. Maybe because POSIX has an initializer for pthread_once_t that it can be reset to its initial state. #include <pthread.h> #include <stdlib.h> #include <string.h> static pthread_once_t once_control = PTHREAD_ONCE_INIT; static pthread_mutex_t *mutex_ptr = 0; static void setup_new_mutex() { mutex_ptr = malloc(sizeof(*mutex_ptr)); pthread_mutex_init(mutex_ptr, 0); } static void prepare() { pthread_mutex_lock(mutex_ptr); } static void parent() { pthread_mutex_unlock(mutex_ptr); } static void child() { // Reset the once control. pthread_once_t once = PTHREAD_ONCE_INIT; memcpy(&once_control, &once, sizeof(once_control)); setup_new_mutex(); } static void init() { setup_new_mutex(); pthread_atfork(&prepare, &parent, &child); } int my_library_call(int arg) { pthread_once(&once_control, &init); pthread_mutex_lock(mutex_ptr); // Do something here that requires the lock. int result = 2*arg; pthread_mutex_unlock(mutex_ptr); return result; } In the above sample in the child() I only reset the pthread_once_t by making a copy of a fresh pthread_once_t initialized with PTHREAD_ONCE_INIT. A new pthread_mutex_t is only created when the library function is invoked in the child process. This is hacky but maybe the best way of dealing with this skirting the standards. If the pthread_once_t contains a mutex then the system must have a way of initializing it from its PTHREAD_ONCE_INIT state. If it contains a pointer to a mutex allocated on the heap than it'll be forced to allocate a new one and set the address in the pthread_once_t. I'm hoping it doesn't use the address of the pthread_once_t for anything special which would defeat this. 
Searching comp.programming.threads group for pthread_atfork() shows a lot of good discussion and how little the POSIX standards really provides to solve this problem. There's also the issue that one should only call async-signal-safe functions from pthread_atfork() handlers, and it appears the most important one is the child handler, where only a memcpy() is done. Does this work? Is there a better way of dealing with the requirements of our shared library?
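
    As a sanity check rather than a proof of correctness, a minimal harness along the following lines (assuming it is linked against the library code above) exercises all three handlers: the first call registers them via pthread_once(), the fork() runs prepare/parent/child, and the calls on both sides of the fork confirm that neither process deadlocks on the library mutex.

        /* Minimal fork test for the handlers above; link it with the library code.
           It only checks that parent and child can still call the library. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int my_library_call(int arg);   /* provided by the library code above */

        int main(void)
        {
            /* first call runs init() and registers the atfork handlers */
            printf("parent before fork: %d\n", my_library_call(1));

            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                return EXIT_FAILURE;
            }
            if (pid == 0) {
                /* child: exercise the library after the child handler ran */
                printf("child after fork: %d\n", my_library_call(2));
                _exit(EXIT_SUCCESS);
            }
            /* parent: the parent handler unlocked the original mutex */
            printf("parent after fork: %d\n", my_library_call(3));
            waitpid(pid, NULL, 0);
            return EXIT_SUCCESS;
        }

    It does not answer the standards question, but it makes regressions in the handler logic visible immediately.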

  • JAXB + JAK java.lang.ClassCastException

    - by Ivansek
    Hi, I read page about JAK implementation where is a piece of code from pom.xml file (Listing 1). This piece of code is actually commented in pom.xml file so i uncommented it in order to add my own schema to compile. After that i run command mvn -e clean install, and i get this error: ivansek ~/Sites/Xlab/KMLTest/javaapiforkml-read-only $ mvn -e clean install + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] ------------------------------------------------------------------------ [INFO] Building a Java API for Kml [INFO] task-segment: [clean, install] [INFO] ------------------------------------------------------------------------ [INFO] [clean:clean {execution: default-clean}] [INFO] [antrun:run {execution: xjc-invocation}] [INFO] Executing tasks [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] An Ant BuildException has occured: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: An Ant BuildException has occured: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalWithLifecycle(DefaultLifecycleExecutor.java:556) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:535) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant BuildException has occured: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException at org.apache.maven.plugin.antrun.AbstractAntMojo.executeTasks(AbstractAntMojo.java:131) at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:98) at 
org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:116) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:357) at org.apache.maven.plugin.antrun.AbstractAntMojo.executeTasks(AbstractAntMojo.java:118) ... 20 more Caused by: java.util.ServiceConfigurationError: com.sun.tools.xjc.Plugin: Provider org.jvnet.jaxb2_commons.javaforkmlapi.XJCJavaForKmlApiPlugin could not be instantiated: java.lang.ClassCastException at java.util.ServiceLoader.fail(ServiceLoader.java:207) at java.util.ServiceLoader.access$100(ServiceLoader.java:164) at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:353) at java.util.ServiceLoader$1.next(ServiceLoader.java:421) at com.sun.tools.xjc.Options.findServices(Options.java:910) at com.sun.tools.xjc.Options.getAllPlugins(Options.java:351) at com.sun.tools.xjc.Options.parseArgument(Options.java:650) at com.sun.tools.xjc.Options.parseArguments(Options.java:760) at com.sun.tools.xjc.XJC2Task._doXJC(XJC2Task.java:453) at com.sun.tools.xjc.XJC2Task.doXJC(XJC2Task.java:443) at com.sun.tools.xjc.XJC2Task.execute(XJC2Task.java:369) at com.sun.istack.tools.ProtectedTask.execute(ProtectedTask.java:55) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106) ... 23 more Caused by: java.lang.ClassCastException at java.lang.Class.cast(Class.java:2990) at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:345) ... 38 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Thu May 13 09:53:19 CEST 2010 [INFO] Final Memory: 16M/79M [INFO] ------------------------------------------------------------------------ Any suggestions?
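
    A ServiceConfigurationError wrapping a bare ClassCastException from ServiceLoader usually means the provider (XJCJavaForKmlApiPlugin) was compiled against a different com.sun.tools.xjc.Plugin class than the one the Ant task loads at runtime, i.e. two XJC/JAXB versions end up on the classpath. As a diagnostic only, a small sketch like the following, run with the same jars the build uses, shows which jar each copy comes from; the class name ListXjcPlugins is made up.

        // Diagnostic sketch: print where the Plugin interface and each provider are loaded from.
        import java.security.CodeSource;
        import java.util.ServiceConfigurationError;
        import java.util.ServiceLoader;

        import com.sun.tools.xjc.Plugin;

        public class ListXjcPlugins {
            public static void main(String[] args) {
                System.out.println("Plugin interface loaded from " + locationOf(Plugin.class));
                try {
                    for (Plugin p : ServiceLoader.load(Plugin.class)) {
                        System.out.println(p.getClass().getName()
                                + " loaded from " + locationOf(p.getClass()));
                    }
                } catch (ServiceConfigurationError e) {
                    // the same mismatch that breaks the build surfaces here and
                    // names the provider that could not be instantiated
                    e.printStackTrace();
                }
            }

            private static Object locationOf(Class<?> c) {
                CodeSource source = c.getProtectionDomain().getCodeSource();
                return source == null ? "<bootstrap>" : source.getLocation();
            }
        }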

  • Objective-C woes: cellForRowAtIndexPath crashes.

    - by Mr. McPepperNuts
    I want to the user to be able to search for a record in a DB. The fetch and the results returned work perfectly. I am having a hard time setting the UItableview to display the result tho. The application continually crashes at cellForRowAtIndexPath. Please, someone help before I have a heart attack over here. Thank you. @implementation SearchViewController @synthesize mySearchBar; @synthesize textToSearchFor; @synthesize myGlobalSearchObject; @synthesize results; @synthesize tableView; @synthesize tempString; #pragma mark - #pragma mark View lifecycle - (void)viewDidLoad { [super viewDidLoad]; } #pragma mark - #pragma mark Table View - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { //handle selection; push view } - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView{ /* if(nullResulSearch == TRUE){ return 1; }else { return[results count]; } */ return[results count]; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return 1; // Test hack to display multiple rows. } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Search Cell Identifier"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if(cell == nil){ cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue2 reuseIdentifier:CellIdentifier] autorelease]; } NSLog(@"TEMPSTRING %@", tempString); cell.textLabel.text = tempString; return cell; } #pragma mark - #pragma mark Memory management - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; } - (void)viewDidUnload { self.tableView = nil; } - (void)dealloc { [results release]; [mySearchBar release]; [textToSearchFor release]; [myGlobalSearchObject release]; [super dealloc]; } #pragma mark - #pragma mark Search Function & Fetch Controller - (NSManagedObject *)SearchDatabaseForText:(NSString *)passdTextToSearchFor{ NSManagedObject *searchObj; UndergroundBaseballAppDelegate *appDelegate = [[UIApplication sharedApplication] delegate]; NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext; NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name == [c]%@", passdTextToSearchFor]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Entry" inManagedObjectContext:managedObjectContext]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [request setSortDescriptors:sortDescriptors]; [request setEntity: entity]; [request setPredicate: predicate]; NSError *error; results = [managedObjectContext executeFetchRequest:request error:&error]; if([results count] == 0){ NSLog(@"No results found"); searchObj = nil; nullResulSearch == TRUE; }else{ if ([[[results objectAtIndex:0] name] caseInsensitiveCompare:passdTextToSearchFor] == 0) { NSLog(@"results %@", [[results objectAtIndex:0] name]); searchObj = [results objectAtIndex:0]; nullResulSearch == FALSE; }else{ NSLog(@"No results found"); searchObj = nil; nullResulSearch == TRUE; } } [tableView reloadData]; [request release]; [sortDescriptors release]; return searchObj; } - (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar{ textToSearchFor = mySearchBar.text; NSLog(@"textToSearchFor: %@", textToSearchFor); 
myGlobalSearchObject = [self SearchDatabaseForText:textToSearchFor]; NSLog(@"myGlobalSearchObject: %@", myGlobalSearchObject); tempString = [myGlobalSearchObject valueForKey:@"name"]; NSLog(@"tempString: %@", tempString); } @end *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UILongPressGestureRecognizer isEqualToString:]: unrecognized selector sent to instance 0x3d46c20'
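
    The "unrecognized selector sent to instance" message, with a string selector landing on an unrelated object, is the classic sign of messaging a deallocated object: tempString and textToSearchFor are assigned directly to the ivars, so the autoreleased values are gone by the time cellForRowAtIndexPath: reads them. A sketch of the click handler going through the property setters instead (pre-ARC, and assuming the properties are declared retain or copy in the header):

        // Sketch, pre-ARC: assign through the properties so the values are
        // retained/copied and survive until the table view asks for cells.
        - (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar {
            self.textToSearchFor = mySearchBar.text;
            self.myGlobalSearchObject = [self SearchDatabaseForText:self.textToSearchFor];
            self.tempString = [self.myGlobalSearchObject valueForKey:@"name"];
            [self.tableView reloadData];   // reload after tempString is set
        }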

  • How to solve a deallocated connection in iPhone SDK 3.1.3? - Streams - CFSockets

    - by Christian
    Hi everyone, Debugging my implementation I found a memory leak issue. I know where is the issue, I tried to solve it but sadly without success. I will try to explain you, maybe someone of you can help with this. First I have two classes involved in the issue, the publish class (where publishing the service and socket configuration is done) and the connection (where the socket binding and the streams configuration is done). The main issue is in the connection via native socket. In the 'publish' class the "server" accepts a connection with a callback. The callback has the native-socket information. Then, a connection with native-socket information is created. Next, the socket binding and the streams configuration is done. When those actions are successful the instance of the connection is saved in a mutable array. Thus, the connection is established. static void AcceptCallback(CFSocketRef socket, CFSocketCallBackType type, CFDataRef address, const void *data, void *info) { Publish *rePoint = (Publish *)info; if ( type != kCFSocketAcceptCallBack) { return; } CFSocketNativeHandle nativeSocketHandle = *((CFSocketNativeHandle *)data); NSLog(@"The AcceptCallback was called, a connection request arrived to the server"); [rePoint handleNewNativeSocket:nativeSocketHandle]; } - (void)handleNewNativeSocket:(CFSocketNativeHandle)nativeSocketHandle{ Connection *connection = [[[Connection alloc] initWithNativeSocketHandle:nativeSocketHandle] autorelease]; // Create the connection if (connection == nil) { close(nativeSocketHandle); return; } NSLog(@"The connection from the server was created now try to connect"); if ( ! [connection connect]) { [connection close]; return; } [clients addObject:connection]; //save the connection trying to avoid the deallocation } The next step is receive the information from the client, thus a read-stream callback is triggered with the information of the established connection. But when the callback-handler tries to use this connection the error occurs, it says that such connection is deallocated. The issue here is that I don't know where/when the connection is deallocated and how to know it. I am using the debugger, but after some trials, I don't see more info. void myReadStreamCallBack (CFReadStreamRef stream, CFStreamEventType eventType, void *info) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; Connection *handlerEv = [[(Connection *)info retain] autorelease]; // The error -[Connection retain]: message sent to deallocated instance 0x1f5ef0 (Where 0x1f5ef0 is the reference to the established connection) [handlerEv readStreamHandleEvent:stream andEvent:eventType]; [pool drain]; } void myWriteStreamCallBack (CFWriteStreamRef stream, CFStreamEventType eventType, void *info){ NSAutoreleasePool *p = [[NSAutoreleasePool alloc] init]; Connection *handlerEv = [[(Connection *)info retain] autorelease]; //Sometimes the error also happens here, I tried without the pool, but it doesn't help neither. [handlerEv writeStreamHandleEvent:eventType]; [p drain]; } Something strange is that when I run the debugger(with breakpoints) everything goes well, the connection is not deallocated and the callbacks work fine and the server is able to receive the message. I will appreciate any hint!
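
    One hedged explanation for "message sent to deallocated instance" in a CFStream callback is that the client context only stores a raw info pointer: nothing retains the Connection on behalf of the stream, so if the object is released elsewhere (removed from the clients array, or over-released) the callbacks fire on a dead object. CFStreamClientContext has retain/release slots for exactly this. A sketch of registering the read stream with them, pre-ARC, assuming a readStream ivar inside Connection:

        // Sketch, pre-ARC, inside Connection. The retain/release callbacks let
        // the stream itself keep the Connection alive while callbacks can fire.
        static void *ConnectionRetain(void *info) {
            return [(id)info retain];
        }

        static void ConnectionRelease(void *info) {
            [(id)info release];
        }

        - (void)registerReadStreamClient {
            CFStreamClientContext context = { 0, self, ConnectionRetain, ConnectionRelease, NULL };
            CFOptionFlags events = kCFStreamEventHasBytesAvailable |
                                   kCFStreamEventErrorOccurred |
                                   kCFStreamEventEndEncountered;
            CFReadStreamSetClient(readStream, events, myReadStreamCallBack, &context);
            CFReadStreamScheduleWithRunLoop(readStream, CFRunLoopGetCurrent(), kCFRunLoopCommonModes);
        }

    The write stream can be registered the same way with CFWriteStreamSetClient. This would also explain why stepping through in the debugger behaves differently: the changed timing hides the race.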

  • maven sonar problem

    - by senzacionale
    I want to use sonar for analysis but i can't get any data in localhost:9000 <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <artifactId>KIS</artifactId> <groupId>KIS</groupId> <version>1.0</version> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-antrun-plugin</artifactId> <version>1.4</version> <executions> <execution> <id>compile</id> <phase>compile</phase> <configuration> <tasks> <property name="compile_classpath" refid="maven.compile.classpath"/> <property name="runtime_classpath" refid="maven.runtime.classpath"/> <property name="test_classpath" refid="maven.test.classpath"/> <property name="plugin_classpath" refid="maven.plugin.classpath"/> <ant antfile="${basedir}/build.xml"> <target name="maven-compile"/> </ant> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> output when running sonar: jar file is empty [INFO] Executed tasks [INFO] [resources:testResources {execution: default-testResources}] [WARNING] Using platform encoding (Cp1250 actually) to copy filtered resources, i.e. build is platform dependent! [INFO] skip non existing resourceDirectory J:\ostalo_6i\KIS deploy\ANT\src\test\resources [INFO] [compiler:testCompile {execution: default-testCompile}] [INFO] No sources to compile [INFO] [surefire:test {execution: default-test}] [INFO] No tests to run. [INFO] [jar:jar {execution: default-jar}] [WARNING] JAR will be empty - no content was marked for inclusion! [INFO] Building jar: J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar [INFO] [install:install {execution: default-install}] [INFO] Installing J:\ostalo_6i\KIS deploy\ANT\target\KIS-1.0.jar to C:\Documents and Settings\MitjaG\.m2\repository\KIS\KIS\1.0\KIS-1.0.jar [INFO] ------------------------------------------------------------------------ [INFO] Building Unnamed - KIS:KIS:jar:1.0 [INFO] task-segment: [sonar:sonar] (aggregator-style) [INFO] ------------------------------------------------------------------------ [INFO] [sonar:sonar {execution: default-cli}] [INFO] Sonar host: http://localhost:9000 [INFO] Sonar version: 2.1.2 [INFO] [sonar-core:internal {execution: default-internal}] [INFO] Database dialect class org.sonar.api.database.dialect.Oracle [INFO] ------------- Analyzing Unnamed - KIS:KIS:jar:1.0 [INFO] Selected quality profile : KIS, language=java [INFO] Configure maven plugins... [INFO] Sensor SquidSensor... [INFO] Sensor SquidSensor done: 16 ms [INFO] Sensor JavaSourceImporter... [INFO] Sensor JavaSourceImporter done: 0 ms [INFO] Sensor AsynchronousMeasuresSensor... [INFO] Sensor AsynchronousMeasuresSensor done: 15 ms [INFO] Sensor SurefireSensor... [INFO] parsing J:\ostalo_6i\KIS deploy\ANT\target\surefire-reports [INFO] Sensor SurefireSensor done: 47 ms [INFO] Sensor ProfileSensor... [INFO] Sensor ProfileSensor done: 16 ms [INFO] Sensor ProjectLinksSensor... [INFO] Sensor ProjectLinksSensor done: 0 ms [INFO] Sensor VersionEventsSensor... [INFO] Sensor VersionEventsSensor done: 31 ms [INFO] Sensor CpdSensor... [INFO] Sensor CpdSensor done: 0 ms [INFO] Sensor Maven dependencies... [INFO] Sensor Maven dependencies done: 16 ms [INFO] Execute decorators... [INFO] ANALYSIS SUCCESSFUL, you can browse http://localhost:9000 [INFO] Database optimization... 
[INFO] Database optimization done: 172 ms [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESSFUL [INFO] ------------------------------------------------------------------------ [INFO] Total time: 6 minutes 16 seconds [INFO] Finished at: Fri Jun 11 08:28:26 CEST 2010 [INFO] Final Memory: 24M/43M [INFO] ------------------------------------------------------------------------ any idea why, i successfully compile with maven ant plugin java project.
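
    Note what the build log says before Sonar runs: "No sources to compile" and "JAR will be empty". The POM delegates compilation to Ant, so Maven (and therefore Sonar) never sees any Java sources, and there is nothing to analyse. A hedged sketch of the missing piece; the path is an assumption and has to point at the real source root used by build.xml:

        <!-- sketch: declare the source root so Maven/Sonar can see the code;
             src/main/java is an assumption, adjust to the actual layout -->
        <build>
            <sourceDirectory>src/main/java</sourceDirectory>
            <plugins>
                <!-- keep the existing maven-antrun-plugin execution here -->
            </plugins>
        </build>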

  • How do I go about overloading C++ operators to allow for chaining?

    - by fneep
    I, like so many programmers before me, am tearing my hair out writing the right-of-passage-matrix-class-in-C++. I have never done very serious operator overloading and this is causing issues. Essentially, by stepping through This is what I call to cause the problems. cMatrix Kev = CT::cMatrix::GetUnitMatrix(4, true); Kev *= 4.0f; cMatrix Baz = Kev; Kev = Kev+Baz; //HERE! What seems to be happening according to the debugger is that Kev and Baz are added but then the value is lost and when it comes to reassigning to Kev, the memory is just its default dodgy values. How do I overload my operators to allow for this statement? My (stripped down) code is below. //header class cMatrix { private: float* _internal; UInt32 _r; UInt32 _c; bool _zeroindexed; //fast, assumes zero index, no safety checks float cMatrix::_getelement(UInt32 r, UInt32 c) { return _internal[(r*this->_c)+c]; } void cMatrix::_setelement(UInt32 r, UInt32 c, float Value) { _internal[(r*this->_c)+c] = Value; } public: cMatrix(UInt32 r, UInt32 c, bool IsZeroIndexed); cMatrix( cMatrix& m); ~cMatrix(void); //operators cMatrix& operator + (cMatrix m); cMatrix& operator += (cMatrix m); cMatrix& operator = (const cMatrix &m); }; //stripped source file cMatrix::cMatrix(cMatrix& m) { _r = m._r; _c = m._c; _zeroindexed = m._zeroindexed; _internal = new float[_r*_c]; UInt32 size = GetElementCount(); for (UInt32 i = 0; i < size; i++) { _internal[i] = m._internal[i]; } } cMatrix::~cMatrix(void) { delete[] _internal; } cMatrix& cMatrix::operator+(cMatrix m) { return cMatrix(*this) += m; } cMatrix& cMatrix::operator*(float f) { return cMatrix(*this) *= f; } cMatrix& cMatrix::operator*=(float f) { UInt32 size = GetElementCount(); for (UInt32 i = 0; i < size; i++) { _internal[i] *= f; } return *this; } cMatrix& cMatrix::operator+=(cMatrix m) { if (_c != m._c || _r != m._r) { throw new cCTException("Cannot add two matrix classes of different sizes."); } if (!(_zeroindexed && m._zeroindexed)) { throw new cCTException("Zero-Indexed mismatch."); } for (UInt32 row = 0; row < _r; row++) { for (UInt32 column = 0; column < _c; column++) { float Current = _getelement(row, column) + m._getelement(row, column); _setelement(row, column, Current); } } return *this; } cMatrix& cMatrix::operator=(const cMatrix &m) { if (this != &m) { _r = m._r; _c = m._c; _zeroindexed = m._zeroindexed; delete[] _internal; _internal = new float[_r*_c]; UInt32 size = GetElementCount(); for (UInt32 i = 0; i < size; i++) { _internal[i] = m._internal[i]; } } return *this; }
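
    The symptom matches the operator signatures: operator+ builds a temporary with cMatrix(*this) += m and then returns it by reference, so by the time Kev = Kev + Baz runs the assignment, the referenced object has already been destroyed and its _internal buffer freed. Arithmetic operators should return by value and leave reference returns to the compound assignments. A sketch of the conventional shape, assuming the copy constructor and operator+= are changed to take const cMatrix& (worth doing anyway):

        // Sketch: header declarations
        //     cMatrix(const cMatrix &m);
        //     cMatrix &operator+=(const cMatrix &m);
        //     cMatrix operator+(const cMatrix &m) const;
        cMatrix cMatrix::operator+(const cMatrix &m) const
        {
            cMatrix result(*this);   // copy the left operand
            result += m;             // reuse the compound assignment
            return result;           // a value, not a reference to a dead temporary
        }

    operator* for a float scalar follows the same pattern on top of operator*=.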

  • processing an audio wav file with C

    - by sa125
    Hi - I'm working on processing the amplitude of a wav file and scaling it by some decimal factor. I'm trying to wrap my head around how to read and re-write the file in a memory-efficient way while also trying to tackle the nuances of the language (I'm new to C). The file can be in either an 8- or 16-bit format. The way I thought of doing this is by first reading the header data into some pre-defined struct, and then processing the actual data in a loop where I'll read a chunk of data into a buffer, do whatever is needed to it, and then write it to the output. #include <stdio.h> #include <stdlib.h> typedef struct header { char chunk_id[4]; int chunk_size; char format[4]; char subchunk1_id[4]; int subchunk1_size; short int audio_format; short int num_channels; int sample_rate; int byte_rate; short int block_align; short int bits_per_sample; short int extra_param_size; char subchunk2_id[4]; int subchunk2_size; } header; typedef struct header* header_p; void scale_wav_file(char * input, float factor, int is_8bit) { FILE * infile = fopen(input, "rb"); FILE * outfile = fopen("outfile.wav", "wb"); int BUFSIZE = 4000, i, MAX_8BIT_AMP = 255, MAX_16BIT_AMP = 32678; // used for processing 8-bit file unsigned char inbuff8[BUFSIZE], outbuff8[BUFSIZE]; // used for processing 16-bit file short int inbuff16[BUFSIZE], outbuff16[BUFSIZE]; // header_p points to a header struct that contains the file's metadata fields header_p meta = (header_p)malloc(sizeof(header)); if (infile) { // read and write header data fread(meta, 1, sizeof(header), infile); fwrite(meta, 1, sizeof(meta), outfile); while (!feof(infile)) { if (is_8bit) { fread(inbuff8, 1, BUFSIZE, infile); } else { fread(inbuff16, 1, BUFSIZE, infile); } // scale amplitude for 8/16 bits for (i=0; i < BUFSIZE; ++i) { if (is_8bit) { outbuff8[i] = factor * inbuff8[i]; if ((int)outbuff8[i] > MAX_8BIT_AMP) { outbuff8[i] = MAX_8BIT_AMP; } } else { outbuff16[i] = factor * inbuff16[i]; if ((int)outbuff16[i] > MAX_16BIT_AMP) { outbuff16[i] = MAX_16BIT_AMP; } else if ((int)outbuff16[i] < -MAX_16BIT_AMP) { outbuff16[i] = -MAX_16BIT_AMP; } } } // write to output file for 8/16 bit if (is_8bit) { fwrite(outbuff8, 1, BUFSIZE, outfile); } else { fwrite(outbuff16, 1, BUFSIZE, outfile); } } } // cleanup if (infile) { fclose(infile); } if (outfile) { fclose(outfile); } if (meta) { free(meta); } } int main (int argc, char const *argv[]) { char infile[] = "file.wav"; float factor = 0.5; scale_wav_file(infile, factor, 0); return 0; } I'm getting differing file sizes at the end (by 1k or so, for a 40Mb file), and I suspect this is due to the fact that I'm writing an entire buffer to the output, even though the file may have terminated before filling the entire buffer size. Also, the output file is messed up - won't play or open - so I'm probably doing the whole thing wrong. Any tips on where I'm messing up will be great. Thanks!
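
    Two details account for both the size difference and the unplayable output. First, fwrite(meta, 1, sizeof(meta), outfile) writes sizeof(header*) — the size of a pointer, 4 or 8 bytes — instead of sizeof(header), so the output header is truncated. Second, the loop writes BUFSIZE bytes even when the final fread returns less, which pads the file. A sketch of the copy loop driven by fread's return value (8-bit branch shown, dropping into the existing function):

        /* Sketch: write the header by its real size, and write exactly what was read. */
        fread(meta, 1, sizeof(header), infile);
        fwrite(meta, 1, sizeof(header), outfile);   /* sizeof(header), not sizeof(meta) */

        size_t n;
        while ((n = fread(inbuff8, 1, BUFSIZE, infile)) > 0) {
            for (size_t i = 0; i < n; ++i) {
                float scaled = factor * inbuff8[i];
                outbuff8[i] = scaled > MAX_8BIT_AMP ? MAX_8BIT_AMP : (unsigned char)scaled;
            }
            fwrite(outbuff8, 1, n, outfile);        /* n bytes, not BUFSIZE */
        }

    The 16-bit branch needs the same bookkeeping, with sizeof(short) as the element size in fread/fwrite so the count is in samples rather than bytes.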

  • Maven changelog plugin with Mercurial problem

    - by doom2.wad
    I have configured my Maven2 project to generate a changelog report from a Mercurial repository (accessible via file:// protocol) but the goal execution fails with the following message: + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'changelog'. [INFO] ------------------------------------------------------------------------ [INFO] Building Phobos3 Prototype [INFO] task-segment: [changelog:changelog] [INFO] ------------------------------------------------------------------------ [INFO] [changelog:changelog {execution: default-cli}] [INFO] Generating changed sets xml to: D:\Documents and Settings\501845922\Workspace\phobos3.prototype\target\changelog.xml [INFO] EXECUTING: hg log --verbose [WARNING] Could not figure out: abort: Invalid argument [ERROR] EXECUTION FAILED Execution of cmd : log failed with exit code: -1. Working directory was: D:\Documents and Settings\501845922\Workspace\phobos3.prototype Your Hg installation seems to be valid and complete. Hg version: 1.4.3+20100201 (OK) [ERROR] Provider message: [ERROR] EXECUTION FAILED Execution of cmd : log failed with exit code: -1. Working directory was: D:\Documents and Settings\501845922\Workspace\phobos3.prototype Your Hg installation seems to be valid and complete. Hg version: 1.4.3+20100201 (OK) [ERROR] Command output: [ERROR] [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] An error has occurred in Change Log report generation. Embedded error: An error has occurred during changelog command : Command failed. [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: An error has occurred in Change Log report generation. at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: An error has occurred in Change Log report generation. 
at org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:79) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: org.apache.maven.reporting.MavenReportException: An error has occurred during changelog command : at org.apache.maven.plugin.changelog.ChangeLogReport.generateChangeSetsFromSCM(ChangeLogReport.java:555) at org.apache.maven.plugin.changelog.ChangeLogReport.getChangedSets(ChangeLogReport.java:393) at org.apache.maven.plugin.changelog.ChangeLogReport.executeReport(ChangeLogReport.java:340) at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:98) at org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:73) ... 19 more Caused by: org.apache.maven.plugin.MojoExecutionException: Command failed. at org.apache.maven.plugin.changelog.ChangeLogReport.checkResult(ChangeLogReport.java:705) at org.apache.maven.plugin.changelog.ChangeLogReport.generateChangeSetsFromSCM(ChangeLogReport.java:467) ... 23 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Thu Apr 29 17:10:06 CEST 2010 [INFO] Final Memory: 5M/10M [INFO] ------------------------------------------------------------------------ What did I miss in a configuration? (I hope it is a configuration problem not a Maven plugins related bug!:) My repository URL seems to be ok (the plugin has been complaining before, I fixed that up), I also set a date format for parsing (also been complaining, also fixed). target/changelog.xml being promised was not generated at all. Maven 2.2.1 Mercurial 1.4.3 Windows XP SP3 mvn scm:changelog command provides an expected output. Thanks for any suggestions, I haven't googled up anything (nor binged up;).
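
    For reference, since the question asks what might be missing from the configuration: the changelog report reads the repository named in the POM's <scm> block and is registered under <reporting>, roughly as sketched below. The connection URL and date format are placeholders to adapt, not known-good values — and the "abort: Invalid argument" line comes from hg itself, so it is also worth running the exact "hg log --verbose" command by hand in that working directory.

        <!-- sketch: values are placeholders, adjust to the real repository -->
        <scm>
            <connection>scm:hg:file:///D:/path/to/phobos3.prototype</connection>
        </scm>

        <reporting>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-changelog-plugin</artifactId>
                    <configuration>
                        <dateFormat>yyyy-MM-dd HH:mm:ss</dateFormat>
                    </configuration>
                </plugin>
            </plugins>
        </reporting>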

  • How to save a large nhibernate collection without causing OutOfMemoryException

    - by Michael Hedgpeth
    How do I save a large collection with NHibernate which has elements that surpass the amount of memory allowed for the process? I am trying to save a Video object with nhibernate which has a large number of Screenshots (see below for code). Each Screenshot contains a byte[], so after nhibernate tries to save 10,000 or so records at once, an OutOfMemoryException is thrown. Normally I would try to break up the save and flush the session after every 500 or so records, but in this case, I need to save the collection because it automatically saves the SortOrder and VideoId for me (without the Screenshot having to know that it was a part of a Video). What is the best approach given my situation? Is there a way to break up this save without forcing the Screenshot to have knowledge of its parent Video? For your reference, here is the code from the simple sample I created: public class Video { public long Id { get; set; } public string Name { get; set; } public Video() { Screenshots = new ArrayList(); } public IList Screenshots { get; set; } } public class Screenshot { public long Id { get; set; } public byte[] Data { get; set; } } And mappings: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="SavingScreenshotsTrial" namespace="SavingScreenshotsTrial" default-access="property"> <class name="Screenshot" lazy="false"> <id name="Id" type="Int64"> <generator class="hilo"/> </id> <property name="Data" column="Data" type="BinaryBlob" length="2147483647" not-null="true" /> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="SavingScreenshotsTrial" namespace="SavingScreenshotsTrial" > <class name="Video" lazy="false" table="Video" discriminator-value="0" abstract="true"> <id name="Id" type="Int64" access="property"> <generator class="hilo"/> </id> <property name="Name" /> <list name="Screenshots" cascade="all-delete-orphan" lazy="false"> <key column="VideoId" /> <index column="SortOrder" /> <one-to-many class="Screenshot" /> </list> </class> </hibernate-mapping> When I try to save a Video with 10000 screenshots, it throws an OutOfMemoryException. Here is the code I'm using: using (var session = CreateSession()) { Video video = new Video(); for (int i = 0; i < 10000; i++) { video.Screenshots.Add(new Screenshot() {Data = camera.TakeScreenshot(resolution)}); } session.SaveOrUpdate(video); }

  • multiple valgrind errors: Conditional jump or move depends on uninitialised value(s)

    - by Hristo
    I'm running valgrind and I'm getting the following error (this is not the only one): ==21743== Conditional jump or move depends on uninitialised value(s) ==21743== at 0x4A06509: index (mc_replace_strmem.c:164) ==21743== by 0x33B7CBB3CD: gaih_inet (in /lib64/libc-2.5.so) ==21743== by 0x33B7CBD629: getaddrinfo (in /lib64/libc-2.5.so) ==21743== by 0x401A5F: tunnelURL (proxy.c:336) ==21743== by 0x40142A: client_thread (proxy.c:194) ==21743== by 0x33B8806616: start_thread (in /lib64/libpthread-2.5.so) ==21743== by 0x33B7CD3C2C: clone (in /lib64/libc-2.5.so) My tunnelURL() function looks like this: char * tunnelURL(char *url) { char * a = strstr(url, "//"); a += 2; char * path = strstr(a, "/"); char host[256]; strncpy (host, a, strlen(a)-strlen(path)); /* * The following is courtesy of Beej's Guide */ int status; int proxySocketFD; struct addrinfo hints; struct addrinfo *servinfo; // will point to the results memset(&hints, 0, sizeof(hints)); // make sure the struct is empty hints.ai_family = AF_INET; // don't care IPv4 or IPv6 hints.ai_socktype = SOCK_STREAM; // TCP stream sockets hints.ai_flags = AI_PASSIVE; // fill in my IP for me if ((status = getaddrinfo(host, "80", &hints, &servinfo)) != 0) { perror("getaddrinfo() fail"); exit(1); } // create socket if ((proxySocketFD = socket(servinfo->ai_family, servinfo->ai_socktype, servinfo->ai_protocol)) == -1) { perror("proxy socket() fail"); exit(1); } // connect if (connect(proxySocketFD, servinfo->ai_addr, servinfo->ai_addrlen) != 0) { printf("connect() fail"); exit(1); } // construct request char request[strlen(path) + strlen(host) + 26]; sprintf(request, "GET %s HTTP/1.1\r\nHost: %s\r\n\r\n", path, host); printf("%s", request); // send request send(proxySocketFD, request, strlen(request), 0); // receive response int i = 0; int amntRecvd = 0; char *pageContentBuffer = (char*) malloc(4096 * sizeof(char)); while ((amntRecvd = recv(proxySocketFD, pageContentBuffer + i, 4096, 0)) > 0) { i += amntRecvd; realloc(pageContentBuffer, i * 4096 * sizeof(char)); } // close proxy socket close(proxySocketFD); // deallocate memory freeaddrinfo(servinfo); return pageContentBuffer; } Line 336 corresponds to the if statement with the getaddrinfo() function call. I'm not really sure what I haven't initialized. The string I'm passing in "should" be already set... I'm printing it out just fine. I also get another error corresponding to the same line of code: ==21743== Use of uninitialised value of size 8 ==21743== at 0x33B7D05816: __nscd_cache_search (in /lib64/libc-2.5.so) ==21743== by 0x33B7D0438B: nscd_gethst_r (in /lib64/libc-2.5.so) ==21743== by 0x33B7D04B26: __nscd_gethostbyname2_r (in /lib64/libc-2.5.so) ==21743== by 0x33B7CE9F5E: gethostbyname2_r@@GLIBC_2.2.5 (in /lib64/libc-2.5.so) ==21743== by 0x33B7CBC522: gaih_inet (in /lib64/libc-2.5.so) ==21743== by 0x33B7CBD629: getaddrinfo (in /lib64/libc-2.5.so) ==21743== by 0x401A5F: tunnelURL (proxy.c:336) ==21743== by 0x40142A: client_thread (proxy.c:194) ==21743== by 0x33B8806616: start_thread (in /lib64/libpthread-2.5.so) ==21743== by 0x33B7CD3C2C: clone (in /lib64/libc-2.5.so) Any ideas as to what might becausing this? This is written in C btw... Thanks, Hristo
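
    Both valgrind reports point back through getaddrinfo() to host: strncpy() is given strlen(a)-strlen(path) bytes and never writes a terminating '\0', so the resolver walks past the copied characters into uninitialised stack memory. A sketch of the hostname extraction with explicit termination (it replaces the host declaration and the strncpy call inside tunnelURL(), and guards against a URL with no path):

        /* Sketch: copy the host part and always NUL-terminate it. */
        char host[256];
        size_t host_len = path ? (size_t)(path - a) : strlen(a);
        if (host_len >= sizeof(host))
            host_len = sizeof(host) - 1;    /* don't overflow the buffer */
        memcpy(host, a, host_len);
        host[host_len] = '\0';

    Separately, the realloc() further down discards its return value and asks for i * 4096 bytes on every pass, but that is a different problem from the one valgrind is flagging here.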

  • Changing html <-> ajax <-> php/mysql to threaded approach

    - by Saif Bechan
    I have an application that needs to be updated in real time. Various counters and other pieces of information come from the database and must stay current for the user. My approach now is a plain AJAX request every second to fetch the new values: a piece of JavaScript loops every second and gets the values through AJAX. It works, but I think it is very inefficient.

    The problem
    - An AJAX script loops every second requesting data from PHP, so the server has to load the PHP interpreter each time.
    - The PHP file has to fetch the data and format it: open a connection to the MySQL database, work with the database (reads only, never writes), format the data, send it back to the browser, then close the database connection and shut down the interpreter.
    - Finally the browser has to read the values and update the various HTML parts.
    With this approach the interpreter is loaded and a DB connection is made every single second. I was thinking of a way to make this more efficient, maybe a threaded approach.

    Threaded approach
    - Do a POST to the PHP script when the user enters the page and keep the connection alive.
    - In PHP, load the interpreter only once and make one connection to the DB.
    - Every second, send a response to the JavaScript listener.
    - The JavaScript listener just updates values as each response from PHP arrives.
    I think this would be a big optimization for server load and overall performance, but I can see some weak points and need help with them. A sketch of this idea follows below.

    Problems with the approach
    - PHP execution time limit: I don't think PHP is designed for such a setup. I know there is a time limit on script execution, and I don't know whether an everlasting loop in PHP causes serious CPU or memory problems.
    - Sending an AJAX request without breaking: I don't know if it is possible to have just one AJAX POST stay open and keep accepting data.
    - User exits the page: what happens when the user leaves the page while the PHP script is still running? Will it go on forever?
    - Security issues: so far I can't think of any, but almost every setup has some, so maybe there are issues with this solution I'm not aware of.

    Open to other solutions
    I really want to change the current setup and move to a threaded approach or better. If someone has a better way to tackle this I definitely want to hear it. Maybe another language is better suited to a long-running process. I only know PHP and Java, so any suggestions are welcome and I am willing to dig through them. I know Perl, Python and the like are used for this kind of thing, but I don't know which is best suited.

    If the best way is to go with another language such as Perl or Python, I do have some criteria:
    - It has to be reachable via an AJAX POST.
    - JSON encoding/decoding would be nice.
    - It has to be able to access the session file; this is essential because I need to know whether the user is logged in.
    - It has to be able to talk to MySQL easily.
    All comments are welcome, and I hope this question is helpful to others too. Cheers!
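
    Since the heart of the question is keeping one request open instead of polling every second, here is a minimal long-polling sketch on the PHP side. The hypothetical check_for_updates() helper and the 30-second cap are assumptions, and the approach still ties up one PHP process per open request — which is the real limitation of doing this in plain PHP rather than in a dedicated long-running daemon.

        <?php
        // Long-poll sketch. check_for_updates() is a hypothetical function that
        // queries MySQL and returns an array of changed counters for this user.
        set_time_limit(40);          // a bit more than the loop cap below
        session_start();             // read the session to know who is logged in...
        $userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;
        session_write_close();       // ...then release the lock so other requests aren't blocked

        $started = time();
        while (time() - $started < 30) {
            $changes = check_for_updates($userId);
            if (!empty($changes)) {
                header('Content-Type: application/json');
                echo json_encode($changes);
                exit;                // the browser reads this and immediately reconnects
            }
            usleep(500000);          // half a second between database checks
        }
        header('Content-Type: application/json');
        echo json_encode(array());   // nothing changed; the client reconnects anyway

    On the JavaScript side the success handler simply issues the next request, so each client holds one mostly idle connection instead of firing a request every second.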

  • STL operator= behavior change with Visual Studio 2010?

    - by augnob
    Hi, I am attempting to compile QtScriptGenerator (gitorious) with Visual Studio 2010 (C++) and have run into a compile error. In searching for a solution, I have seen occasional references to compile breakages introduced since VS2008 due to changes in VS2010's implementation of STL and/or c++0x conformance changes. Any ideas what is happening below, or how I could go about fixing it? If the offending code appeared to be QtScriptGenerator's, I think I would have an easier time fixing it.. but it appears to me that the offending code may be in VS2010's STL implementation and I may be required to create a workaround? PS. I am pretty unfamiliar with templates and STL. I have a background in embedded and console projects where such things have until recently often been avoided to reduce memory consumption and cross-compiler risks. C:\Program Files\Microsoft Visual Studio 10.0\VC\INCLUDE\xutility(275) : error C2679: binary '=' : no operator found which takes a right-hand operand of type 'rpp::pp_output_iterator<_Container>' (or there is no acceptable conversion) with [ _Container=std::string ] c:\qt\qtscriptgenerator\generator\parser\rpp\pp-iterator.h(75): could be 'rpp::pp_output_iterator<_Container> &rpp::pp_output_iterator<_Container>::operator =(const char &)' with [ _Container=std::string ] while trying to match the argument list '(rpp::pp_output_iterator<_Container>, rpp::pp_output_iterator<_Container>)' with [ _Container=std::string ] C:\Program Files\Microsoft Visual Studio 10.0\VC\INCLUDE\xutility(2176) : see reference to function template instantiation '_Iter &std::_Rechecked<_OutIt,_OutIt>(_Iter &,_UIter)' being compiled with [ _Iter=rpp::pp_output_iterator<std::string>, _OutIt=rpp::pp_output_iterator<std::string>, _UIter=rpp::pp_output_iterator<std::string> ] c:\qt\qtscriptgenerator\generator\parser\rpp\pp-internal.h(83) : see reference to function template instantiation '_OutIt std::copy<std::_String_iterator<_Elem,_Traits,_Alloc>,_OutputIterator>(_InIt,_InIt,_OutIt)' being compiled with [ _OutIt=rpp::pp_output_iterator<std::string>, _Elem=char, _Traits=std::char_traits<char>, _Alloc=std::allocator<char>, _OutputIterator=rpp::pp_output_iterator<std::string>, _InIt=std::_String_iterator<char,std::char_traits<char>,std::allocator<char>> ] c:\qt\qtscriptgenerator\generator\parser\rpp\pp-engine-bits.h(500) : see reference to function template instantiation 'void rpp::_PP_internal::output_line<_OutputIterator>(const std::string &,int,_OutputIterator)' being compiled with [ _OutputIterator=rpp::pp_output_iterator<std::string> ] C:\Program Files\Microsoft Visual Studio 10.0\VC\INCLUDE\xutility(275) : error C2582: 'operator =' function is unavailable in 'rpp::pp_output_iterator<_Container>' with [ _Container=std::string ] Here's some context.. pp-internal.h-- #ifndef PP_INTERNAL_H #define PP_INTERNAL_H #include <algorithm> #include <stdio.h> namespace rpp { namespace _PP_internal { .. 68 template <typename _OutputIterator> 69 void output_line(const std::string &__filename, int __line, _OutputIterator __result) 70 { 71 std::string __msg; 72 73 __msg += "# "; 74 75 char __line_descr[16]; 76 pp_snprintf (__line_descr, 16, "%d", __line); 77 __msg += __line_descr; 78 79 __msg += " \""; 80 81 if (__filename.empty ()) 82 __msg += "<internal>"; 83 else 84 __msg += __filename; 85 86 __msg += "\"\n"; 87 std::copy (__msg.begin (), __msg.end (), __result); 88 }
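
    The second error is the telling one: C2582, "operator = function is unavailable in rpp::pp_output_iterator". That usually means the iterator holds a reference member, so the compiler cannot generate a copy-assignment operator (the user-declared operator=(const char&) does not count), and VS2010's checked-iterator machinery (_Rechecked inside std::copy) now assigns one iterator object to another, which earlier library versions never did. Below is a cut-down illustration of an output iterator made copy-assignable by holding a pointer instead of a reference — it is not a drop-in replacement for pp-iterator.h, whose member names and extra logic are not reproduced here.

        // Sketch: a pointer member keeps the implicit copy-assignment operator available.
        #include <iterator>

        template <typename _Container>
        class pp_output_iterator
            : public std::iterator<std::output_iterator_tag, void, void, void, void>
        {
            _Container *_M_container;   // pointer, not _Container&

        public:
            explicit pp_output_iterator(_Container &container)
                : _M_container(&container) {}

            pp_output_iterator &operator=(typename _Container::const_reference value)
            {
                _M_container->push_back(value);
                return *this;
            }

            pp_output_iterator &operator*()     { return *this; }
            pp_output_iterator &operator++()    { return *this; }
            pp_output_iterator &operator++(int) { return *this; }
        };

    If the real iterator must keep a reference member, a user-defined operator=(const pp_output_iterator&) will not help much, because references cannot be reseated — which is why switching to a pointer is the usual fix.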

  • Java thread dump where main thread has no call stack? (jsvc)

    - by dwhsix
    We have a java process running as a daemon (under jsvc). Every several days it just stops doing any work; output to the logfile stops (it is pretty verbose, on 5-minute intervals) and it consumes no CPU or IO. There are no exceptions logged in the logfile nor in syserr or sysout. The last log statement is just prior to a db commit being done, but there is no open connection on the db server (MySQL) and reviewing the code, there should always be additional log output after that, even if it had encountered an exception that was going to bubble up. The most curious thing I find is that in the thread dump (included below), there's no thread in our code at all, and the main thread seems to have no context whatsoever: "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE As noted earlier, this is a daemon process running using jsvc, but I don't know if that has anything to do with it (I can restructure the code to also allow running it directly, to test). Any suggestions on what might be happening here? Thanks... dwh Full thread dump: Full thread dump Java HotSpot(TM) 64-Bit Server VM (14.2-b01 mixed mode): "MySQL Statement Cancellation Timer" daemon prio=10 tid=0x00002aaaf81b8800 nid=0x447b in Object.wait() [0x00002aaaf6a22000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab5556d50> (a java.util.TaskQueue) at java.lang.Object.wait(Object.java:485) at java.util.TimerThread.mainLoop(Timer.java:483) - locked <0x00002aaab5556d50> (a java.util.TaskQueue) at java.util.TimerThread.run(Timer.java:462) "Low Memory Detector" daemon prio=10 tid=0x00000000006a4000 nid=0x4479 runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "CompilerThread1" daemon prio=10 tid=0x00000000006a1000 nid=0x4477 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "CompilerThread0" daemon prio=10 tid=0x000000000069d000 nid=0x4476 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Signal Dispatcher" daemon prio=10 tid=0x000000000069b000 nid=0x4465 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE "Finalizer" daemon prio=10 tid=0x0000000000678800 nid=0x4464 in Object.wait() [0x00002aaaf61d6000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118) - locked <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159) "Reference Handler" daemon prio=10 tid=0x0000000000676800 nid=0x4463 in Object.wait() [0x00002aaaf60d5000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:485) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116) - locked <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock) "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE "VM Thread" prio=10 tid=0x0000000000670000 nid=0x4462 runnable "GC task thread#0 (ParallelGC)" prio=10 tid=0x000000000061e000 nid=0x445e runnable "GC task thread#1 (ParallelGC)" prio=10 tid=0x0000000000620000 nid=0x445f runnable "GC task thread#2 (ParallelGC)" prio=10 
tid=0x0000000000622000 nid=0x4460 runnable "GC task thread#3 (ParallelGC)" prio=10 tid=0x0000000000623800 nid=0x4461 runnable "VM Periodic Task Thread" prio=10 tid=0x00000000006a6800 nid=0x447a waiting on condition JNI global references: 797 Heap PSYoungGen total 162944K, used 48388K [0x00002aaadff40000, 0x00002aaaf2ab0000, 0x00002aaaf5490000) eden space 102784K, 47% used [0x00002aaadff40000,0x00002aaae2e81170,0x00002aaae63a0000) from space 60160K, 0% used [0x00002aaaeb850000,0x00002aaaeb850000,0x00002aaaef310000) to space 86720K, 0% used [0x00002aaae63a0000,0x00002aaae63a0000,0x00002aaaeb850000) PSOldGen total 699072K, used 699072K [0x00002aaab5490000, 0x00002aaadff40000, 0x00002aaadff40000) object space 699072K, 100% used [0x00002aaab5490000,0x00002aaadff40000,0x00002aaadff40000) PSPermGen total 21248K, used 9252K [0x00002aaab0090000, 0x00002aaab1550000, 0x00002aaab5490000) object space 21248K, 43% used [0x00002aaab0090000,0x00002aaab09993e8,0x00002aaab1550000)
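
    One thing worth noting in the dump itself: PSOldGen is at 100% of its 699072K, so before assuming a jsvc quirk it is worth ruling out a heap that has simply filled up (adding -XX:+HeapDumpOnOutOfMemoryError to the JVM options would confirm or exclude that on the next occurrence). As a cheap way to leave evidence behind, a hedged sketch of a watchdog thread that logs heap usage and deadlocks via the standard JMX beans — the interval and the use of System.err are arbitrary choices:

        // Sketch: a daemon thread that periodically logs heap usage and any
        // deadlocked threads, so the next hang leaves a trail in the log.
        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.MemoryUsage;
        import java.lang.management.ThreadMXBean;
        import java.util.Arrays;

        public final class HeapWatchdog implements Runnable {

            public static void start() {
                Thread t = new Thread(new HeapWatchdog(), "heap-watchdog");
                t.setDaemon(true);   // never keeps the daemon process alive on its own
                t.start();
            }

            public void run() {
                MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
                ThreadMXBean threads = ManagementFactory.getThreadMXBean();
                while (true) {
                    MemoryUsage heap = memory.getHeapMemoryUsage();
                    System.err.println("heap used=" + heap.getUsed()
                            + " committed=" + heap.getCommitted()
                            + " max=" + heap.getMax());
                    long[] deadlocked = threads.findDeadlockedThreads();
                    if (deadlocked != null) {
                        System.err.println("deadlocked thread ids: " + Arrays.toString(deadlocked));
                    }
                    try {
                        Thread.sleep(60000L);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }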

    Read the article

  • New project created with Flex Mojo's archetype throws Cannot Find Parent Project-Maven Exception

    - by ignorant
    This is probably a silly question but I just cant seem to figure out. I'm completely new to flex and maven. Maven 2.2.1: Maven 2.2.1 unzipped,M2_HOME set and repository altered to point to different drive location in settings.xml Flex 4.0: Installed Created a multi-modular webapp project using flexmojo: mvn archetype:generate -DarchetypeRepository=http://repository.sonatype.org/content/groups/flexgroup -DarchetypeGroupId=org.sonatype.flexmojos -DarchetypeArtifactId=flexmojos-archetypes-modular-webapp -DarchetypeVersion=RELEASE with following options groupId=com.test artifactId=test version=1.0-snapshot package=com.tests * Creates * test |-- pom.xml |--swc -pom.xml |--swf -pom.xml `--war -pom.xml Parent pom has swc, swf, war as modules. Dependency is war-swf-swc. With parent artifactId of swf, swc, war set to swf, swc, test respectively. On executing mvn on test folder(for that matter clean or anything) I get this following error. G:\Projects\testmvn -e + Error stacktraces are turned on. [INFO] Scanning for projects... Downloading: http://repo1.maven.org/maven2/com/test/swc/1.0-snapshot/swc-1.0-snapshot.pom [INFO] Unable to find resource 'com.test:swc:pom:1.0-snapshot' in repository central (http://repo1.maven.org/maven2) [INFO] ------------------------------------------------------------------------ [ERROR] FATAL ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to resolve artifact. GroupId: com.test ArtifactId: swc Version: 1.0-snapshot Reason: Unable to download the artifact from any repository com.test:swc:pom:1.0-snapshot from the specified remote repositories: central (http://repo1.maven.org/maven2) [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.reactor.MavenExecutionException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:404) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:272) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.project.ProjectBuildingException: Cannot find parent: com.test:swc for project: com.test:swc-swc:swc:1.0-snapshot for project com.test:swc-swc:swc:1.0-snapshot at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1396) at org.apache.maven.project.DefaultMavenProjectBuilder.buildInternal(DefaultMavenProjectBuilder.java:823) at org.apache.maven.project.DefaultMavenProjectBuilder.buildFromSourceFileInternal(DefaultMavenProjectBuilder.java:508) at org.apache.maven.project.DefaultMavenProjectBuilder.build(DefaultMavenProjectBuilder.java:200) at org.apache.maven.DefaultMaven.getProject(DefaultMaven.java:604) at 
org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:487) at org.apache.maven.DefaultMaven.collectProjects(DefaultMaven.java:560) at org.apache.maven.DefaultMaven.getProjects(DefaultMaven.java:391) ... 12 more Caused by: org.apache.maven.project.ProjectBuildingException: POM 'com.test:swc' not found in repository: Unable to download the artifact from any repository com.test:swc:pom:1.0-snapshot from the specified remote repositories: central (http://repo1.maven.org/maven2) for project com.test:swc at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:605) at org.apache.maven.project.DefaultMavenProjectBuilder.assembleLineage(DefaultMavenProjectBuilder.java:1392) ... 19 more Caused by: org.apache.maven.artifact.resolver.ArtifactNotFoundException: Unable to download the artifact from any repository com.test:swc:pom:1.0-snapshot from the specified remote repositories: central (http://repo1.maven.org/maven2) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90) at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558) ... 20 more Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404) at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216) ... 22 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1 second [INFO] Finished at: Tue Jun 15 19:22:15 GMT+02:00 2010 [INFO] Final Memory: 1M/2M [INFO] ------------------------------------------------------------------------ Looks like its trying to download the project from maven's central repository instead of building it. What am I missing?

    Read the article

  • Analyzing bitmaps produced by NSAffineTransform and CILineOverlay filters

    - by Adam
    I am trying to manipulate an image using a chain of CIFilters, and then examine each byte of the resulting image (bitmap). Long term, I do not need to display the resulting image (bitmap) -- I just need to "analyze" it in memory. But near-term I am displaying it on screen, to help with debugging. I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, then outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create the outputBitmap, none of the bytes in the bitmap have any data in them. I believe both of these filters are working as expected, because when I display their results on screen (via outputImageView), they look "correct." Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransfer contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. Seems very strange, to me??? My understanding (from reading the CI programming guide) is that the CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until the image is "drawn." Given that, it would make sense that the CIimage bitmap doesn't have data -- but I don't understand why it has data after I run the NSAffineTransform but doesn't have data after running the CILineOverlay transform? Basically, I am trying to determine if creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words if that should force the bitmap to be populated? If someone knows the answer to this please let me know -- it will save me a few hours of trial and error experimenting! Finally, if the answer is "you must draw to a graphics context" ... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, listing 2-7 and 2-8: drawing to a bitmap graphics context? That is the path down which I am about to head ... but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way please let me know. I just want to take the data (that should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, then I would prefer to "convert" myResult into an NSBitmapImageRep. CIImage * myResult = [transform valueForKey:@"outputImage"]; NSImage *outputImage; NSCIImageRep *ir = [NSCIImageRep alloc]; ir = [NSCIImageRep imageRepWithCIImage:myResult]; outputImage = [[[NSImage alloc] initWithSize: NSMakeSize(inputImage.size.width, inputImage.size.height)] autorelease]; [outputImage addRepresentation:ir]; [outputImageView setImage: outputImage]; NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage: myResult]; Thanks, Adam

    Read the article

  • Slow queries in Rails - not sure if my indexes are being used.

    - by Max Williams
    I'm doing a quite complicated find with lots of includes, which rails is splitting into a sequence of discrete queries rather than do a single big join. The queries are really slow - my dataset isn't massive, with none of the tables having more than a few thousand records. I have indexed all of the fields which are examined in the queries but i'm worried that the indexes aren't helping for some reason: i installed a plugin called "query_reviewer" which looks at the queries used to build a page, and lists problems with them. This states that indexes AREN'T being used, and it features the results of calling 'explain' on the query, which lists various problems. Here's an example find call: Question.paginate(:all, {:page=>1, :include=>[:answers, :quizzes, :subject, {:taggings=>:tag}, {:gradings=>[:age_group, :difficulty]}], :conditions=>["((questions.subject_id = ?) or (questions.subject_id = ? and tags.name = ?))", "1", 19, "English"], :order=>"subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id", :per_page=>30}) And here are the generated sql queries: SELECT DISTINCT `questions`.id FROM `questions` LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id LIMIT 0, 30 SELECT `questions`.`id` AS t0_r0 <..etc...> FROM `questions` LEFT OUTER JOIN `answers` ON answers.question_id = questions.id LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`) LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`) LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) AND `questions`.id IN (602, 634, 666, 698, 730, 762, 613, 645, 677, 709, 741, 592, 624, 656, 688, 720, 752, 603, 635, 667, 699, 731, 763, 614, 646, 678, 710, 742, 593, 625) ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id SELECT count(DISTINCT `questions`.id) AS count_all FROM `questions` LEFT OUTER JOIN `answers` ON answers.question_id = questions.id LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`) LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`) LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question' LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id WHERE 
(((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English'))) Actually, looking at these all nicely formatted here, there's a crazy amount of joining going on here. This can't be optimal surely. Anyway, it looks like i have two questions. 1) I have an index on each of the ids and foreign key fields referred to here. The second of the above queries is the slowest, and calling explain on it (doing it directly in mysql) gives me the following: +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | questions | range | PRIMARY,index_questions_on_subject_id | PRIMARY | 4 | NULL | 30 | Using where; Using temporary; Using filesort | | 1 | SIMPLE | answers | ref | index_answers_on_question_id | index_answers_on_question_id | 5 | millionaire_development.questions.id | 2 | | | 1 | SIMPLE | quiz_questions | ref | index_quiz_questions_on_question_id | index_quiz_questions_on_question_id | 5 | millionaire_development.questions.id | 1 | | | 1 | SIMPLE | quizzes | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.quiz_questions.quiz_id | 1 | | | 1 | SIMPLE | subjects | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.questions.subject_id | 1 | | | 1 | SIMPLE | taggings | ref | index_taggings_on_taggable_id_and_taggable_type,index_taggings_on_taggable_type | index_taggings_on_taggable_id_and_taggable_type | 263 | millionaire_development.questions.id,const | 1 | | | 1 | SIMPLE | tags | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.taggings.tag_id | 1 | Using where | | 1 | SIMPLE | gradings | ref | index_gradings_on_question_id | index_gradings_on_question_id | 5 | millionaire_development.questions.id | 2 | | | 1 | SIMPLE | age_groups | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.age_group_id | 1 | | | 1 | SIMPLE | difficulties | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.difficulty_id | 1 | | +----+-------------+----------------+--------+---------------------------------------------------------------------------------+-------------------------------------------------+---------+------------------------------------------------+------+----------------------------------------------+ The query_reviewer plugin has this to say about it - it lists several problems: Table questions: Using temporary table, Long key length (263), Using filesort MySQL must do an extra pass to find out how to retrieve the rows in sorted order. To resolve the query, MySQL needs to create a temporary table to hold the result. The key used for the index was rather long, potentially affecting indices in memory 2) It looks like rails isn't splitting this find up in a very optimal way. Is it, do you think? Am i better off doing several find queries manually rather than one big combined one? Grateful for any advice, max

    Read the article

  • VMRMixerControl9 & GDI problem!

    - by freefallr
    I'm attempting to overlay a bitmap on some video. I create a bitmap in memory, then I call SelectObject to select it into a memoryDC, so I can perform a DrawText operation on it. I'm not getting any results on screen at all - can anyone suggest why? thanks HRESULT MyGraph::MixTextOverVMR9(LPCTSTR szText) { // create a bitmap object using GDI, render the text to it accordingly // then Sets the bitmap as an alpha bitmap to the VMR9, so that it can be overlayed. HRESULT hr = S_OK; CBitmap bmpMem; CFont font; LOGFONT logicfont; CRect rcText; CRect rcVideo; VMR9AlphaBitmap alphaBmp; HWND hWnd = this->GetFirstRendererWindow(); COLORREF clrText = RGB(255, 255, 0); COLORREF clrBlack = RGB(0,0,0); HDC hdcHwnd = NULL; CDC dcMem; LONG lWidth; LONG lHeight; if( ! m_spVideoRenderer.p ) return E_NOINTERFACE; if( !m_spWindowlessCtrl.p ) return E_NOINTERFACE; if( ! m_spIMixerBmp9.p ) { m_spIMixerBmp9 = m_spVideoRenderer; if( ! m_spIMixerBmp9.p ) return E_NOINTERFACE; } // create the font.. LPCTSTR sFont = _T("Times New Roman"); memset(&logicfont, 0, sizeof(LOGFONT)); logicfont.lfHeight = 42; logicfont.lfWidth = 20; logicfont.lfStrikeOut = 0; logicfont.lfUnderline = 0; logicfont.lfItalic = FALSE; logicfont.lfWeight = FW_NORMAL; logicfont.lfEscapement = 0; logicfont.lfCharSet = ANSI_CHARSET; logicfont.lfQuality = ANTIALIASED_QUALITY; logicfont.lfPitchAndFamily = DEFAULT_PITCH | FF_DONTCARE; wcscpy_s( &logicfont.lfFaceName[0], wcslen(sFont)*2, sFont ); font.CreateFontIndirectW(&logicfont); // create a compatible memDC from the video window's HDC if( hWnd == NULL ) return E_FAIL; hdcHwnd = GetDC(hWnd); dcMem = CreateCompatibleDC(hdcHwnd); // get the required bitmap metrics from the MediaBuffer if( ! SUCCEEDED(m_spWindowlessCtrl->GetNativeVideoSize(&lWidth, &lHeight, NULL, NULL)) ) return E_FAIL; // create a bitmap for the text bmpMem.CreateCompatibleBitmap(dcMem.m_hDC, lWidth, lHeight); SelectBitmap (dcMem.m_hDC, bmpMem); SetBkMode (dcMem.m_hDC, TRANSPARENT); SetTextColor (dcMem.m_hDC, clrText); SelectFont (dcMem.m_hDC, font.m_hFont); // draw the text DrawTextW(dcMem.m_hDC, szText, wcslen(szText), rcText, DT_CALCRECT | DT_NOPREFIX ); DrawTextW(dcMem.m_hDC, szText, wcslen(szText), rcText, DT_NOPREFIX ); // Set the alpha bitmap on the VMR9 renderer memset(&alphaBmp, 0, sizeof(VMR9AlphaBitmap)); alphaBmp.rDest.left = 0; alphaBmp.rDest.top = 0.5; alphaBmp.rDest.right = 0.5; alphaBmp.rDest.bottom = 1; alphaBmp.dwFlags = VMR9AlphaBitmap_hDC; alphaBmp.hdc = dcMem.m_hDC; alphaBmp.pDDS = NULL; alphaBmp.rSrc = rcText; // rect to copy from the source image alphaBmp.fAlpha = 0.5f; // transparency value (1.0 is opaque, 0.0 is transparent) alphaBmp.clrSrcKey = clrText; // alphaBmp.dwFilterMode = MixerPref9_AnisotropicFiltering; hr = m_spIMixerBmp9->SetAlphaBitmap(&alphaBmp); DeleteDC(hdcHwnd); dcMem.DeleteDC(); bmpMem.DeleteObject(); font.DeleteObject(); return hr; }

    Read the article

  • PHP script dies when it calls a Bash script - maybe a server configuration problem

    - by user347501
    Hi!!! I have some problems with a PHP script that calls a Bash script.... in the PHP script is uploaded a XML file, then the PHP script calls a Bash script that cut the file in portions (for example, is uploaded a XML file of 30,000 lines, so the Bash script cut the file in portions of 10,000 lines, so it will be 3 files of 10,000 each one) The file is uploaded, the Bash script cut the lines, but when the Bash script returns to the PHP script, the PHP script dies, & I dont know why... I tested the script in another server and it works fine... I dont believe that is a memory problem, maybe it is a processor problem, I dont know, I dont know what to do, what can I do??? (Im using the function shell_exec in PHP to call the Bash script) The error only happens if the XML file has more than 8,000 lines, but if the file has less then 8,000 everything is ok (this is relative, it depends of the amount of data, of strings, of letters that contains each line) what can you suggest me??? (sorry for my bad english, I have to practice a lot xD) I leave the code here PHP script (at the end, after the ?, there is html & javascript code, but it doesnt appear, only the javascript code... basically the html is only to upload the file) " . date('c') . ": $str"; $file = fopen("uploadxmltest.debug.txt","a"); fwrite($file,date('c') . ": $str\n"); fclose($file); } try{ if(is_uploaded_file($_FILES['tfile']['tmp_name'])){ debug("step 1: the file was uploaded"); $norg=date('y-m-d')."_".md5(microtime()); $nfle="testfiles/$norg.xml"; $ndir="testfiles/$norg"; $ndir2="testfiles/$norg"; if(move_uploaded_file($_FILES['tfile']['tmp_name'],"$nfle")){ debug("step 2: the file was moved to the directory"); debug("memory_get_usage(): " . memory_get_usage()); debug("memory_get_usage(true): " . memory_get_usage(true)); debug("memory_get_peak_usage(): " . memory_get_peak_usage()); debug("memory_get_peak_usage(true): " . memory_get_peak_usage(true)); $shll=shell_exec("./crm_cutfile_v2.sh \"$nfle\" \"$ndir\" \"$norg\" "); debug("result: $shll"); debug("memory_get_usage(): " . memory_get_usage()); debug("memory_get_usage(true): " . memory_get_usage(true)); debug("memory_get_peak_usage(): " . memory_get_peak_usage()); debug("memory_get_peak_usage(true): " . memory_get_peak_usage(true)); debug("step 3: the file was cutted. END"); } else{ debug("ERROR: I didnt move the file"); exit(); } } else{ debug("ERROR: I didnt upload the file"); //exit(); } } catch(Exception $e){ debug("Exception: " . $e-getMessage()); exit(); } ? 
Test function uploadFile(){ alert("start"); if(document.test.tfile.value==""){ alert("First you have to upload a file"); } else{ document.test.submit(); } } Bash script with AWK #!/bin/bash #For single messages (one message per contact) function cutfile(){ lines=$( cat "$1" | awk 'END {print NR}' ) fline="$4"; if [ -d "$2" ]; then exsts=1 else mkdir "$2" fi cp "$1" "$2/datasource.xml" cd "$2" i=1 contfile=1 while [ $i -le $lines ] do currentline=$( cat "datasource.xml" | awk -v fl=$i 'NR==fl {print $0}' ) #creates first file if [ $i -eq 1 ]; then echo "$fline" "$3_1.txt" else #creates the rest of files when there are more than 10,000 contacts rsd=$(( ( $i - 2 ) % 10000 )) if [ $rsd -eq 0 ]; then echo "" "$3_$contfile.txt" contfile=$(( $contfile + 1 )) echo "$fline" "$3_$contfile.txt" fi fi echo "$currentline" "$3_$contfile.txt" i=$(( $i + 1 )) done echo "" "$3_$contfile.txt" return 1 } #For multiple messages (one message for all contacts) function cutfile_multi(){ return 1 } cutfile "$1" "$2" "$3" "$4" echo 1 thanks!!!!! =D

    Read the article

  • Spring, Hibernate, Blob lazy loading

    - by Alexey Khudyakov
    I need help with lazy blob loading in Hibernate. My web application uses these servers and frameworks: MySQL, Tomcat, Spring and Hibernate. The relevant part of the database configuration:

    <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
      <property name="user" value="${jdbc.username}"/>
      <property name="password" value="${jdbc.password}"/>
      <property name="driverClass" value="${jdbc.driverClassName}"/>
      <property name="jdbcUrl" value="${jdbc.url}"/>
      <property name="initialPoolSize"> <value>${jdbc.initialPoolSize}</value> </property>
      <property name="minPoolSize"> <value>${jdbc.minPoolSize}</value> </property>
      <property name="maxPoolSize"> <value>${jdbc.maxPoolSize}</value> </property>
      <property name="acquireRetryAttempts"> <value>${jdbc.acquireRetryAttempts}</value> </property>
      <property name="acquireIncrement"> <value>${jdbc.acquireIncrement}</value> </property>
      <property name="idleConnectionTestPeriod"> <value>${jdbc.idleConnectionTestPeriod}</value> </property>
      <property name="maxIdleTime"> <value>${jdbc.maxIdleTime}</value> </property>
      <property name="maxConnectionAge"> <value>${jdbc.maxConnectionAge}</value> </property>
      <property name="preferredTestQuery"> <value>${jdbc.preferredTestQuery}</value> </property>
      <property name="testConnectionOnCheckin"> <value>${jdbc.testConnectionOnCheckin}</value> </property>
    </bean>

    <bean id="lobHandler" class="org.springframework.jdbc.support.lob.DefaultLobHandler" />

    <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
      <property name="dataSource" ref="dataSource" />
      <property name="configLocation" value="/WEB-INF/hibernate.cfg.xml" />
      <property name="configurationClass" value="org.hibernate.cfg.AnnotationConfiguration" />
      <property name="hibernateProperties">
        <props>
          <prop key="hibernate.dialect">${hibernate.dialect}</prop>
        </props>
      </property>
      <property name="lobHandler" ref="lobHandler" />
    </bean>

    <tx:annotation-driven transaction-manager="txManager" />

    <bean id="txManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
      <property name="sessionFactory" ref="sessionFactory" />
    </bean>

    The relevant part of the entity class:

    @Lob
    @Basic(fetch=FetchType.LAZY)
    @Column(name = "BlobField", columnDefinition = "LONGBLOB")
    @Type(type = "org.springframework.orm.hibernate3.support.BlobByteArrayType")
    private byte[] blobField;

    The problem: I'm trying to display, on a web page, database records related to files that were saved in the MySQL database. Everything works fine while the volume of data is small, but once the volume of data is large I receive the error "java.lang.OutOfMemoryError: Java heap space". I've tried writing null values into blobField on every row of the table; in that case the application works fine and memory doesn't run out. My conclusion is that the blob field which is marked as lazy (@Basic(fetch=FetchType.LAZY)) isn't actually lazy! How can I solve this issue? Many thanks in advance.
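    For what it's worth, Hibernate generally only honours @Basic(fetch = FetchType.LAZY) on a byte[] property when bytecode instrumentation is enabled, so the bytes are usually materialised with the rest of the row. A common workaround is to move the blob into its own entity and reach it through a lazy to-one association, which can be proxied. A minimal sketch, assuming field access and entirely hypothetical names (FileRecord and FileContent are not from the original mapping):

    import javax.persistence.CascadeType;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.FetchType;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.Lob;
    import javax.persistence.OneToOne;

    // Lightweight row: everything the list page needs, but no blob.
    @Entity
    public class FileRecord {
        @Id @GeneratedValue
        private Integer id;

        private String fileName;

        // The blob lives in its own entity; a lazy to-one on the owning side
        // can be proxied, so the bytes are only read when getContent() is used.
        @OneToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
        @JoinColumn(name = "content_id")
        private FileContent content;

        public FileContent getContent() { return content; }
    }

    @Entity
    class FileContent {
        @Id @GeneratedValue
        private Integer id;

        @Lob
        @Column(name = "BlobField", columnDefinition = "LONGBLOB")
        private byte[] data;

        public byte[] getData() { return data; }
    }

    The list page then queries FileRecord only; the blob is selected separately, and only when getContent().getData() is called inside an open session or transaction.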

    Read the article

  • Lock-Free, Wait-Free and Wait-freedom algorithms for non-blocking multi-thread synchronization.

    - by GJ
    In multi-threaded programming we find different terms for synchronizing data transfer between two or more threads/tasks. When exactly can we say that an algorithm is: 1) Lock-Free, 2) Wait-Free, 3) Wait-Freedom? I understand what Lock-free means, but when can we say that a synchronization algorithm is Wait-Free, or has Wait-Freedom? I have written some code (a ring buffer) for multi-thread synchronization, and it uses Lock-Free methods, but: 1) The algorithm predicts a maximum execution time for this routine. 2) The thread which calls this routine sets a unique reference at the beginning, meaning it is inside the routine. 3) Other threads calling the same routine check this reference and, if it is set, measure the elapsed CPU tick count of the first involved thread. If that time is too long, they interrupt the involved thread's current work and take over its job. 4) A thread which did not finish its job because it was interrupted by the task scheduler (was suspended) checks the reference at the end; if the reference no longer belongs to it, it repeats the job again. So this algorithm is not really Lock-free, but there is no memory lock in use, and other involved threads can wait (or not) a certain time before overriding the job of the suspended thread. Here is the RingBuffer.InsertLeft function:

    function TgjRingBuffer.InsertLeft(const link: pointer): integer;
    var
      AtStartReference: cardinal;
      CPUTimeStamp    : int64;
      CurrentLeft     : pointer;
      CurrentReference: cardinal;
      NewLeft         : PReferencedPtr;
      Reference       : cardinal;
    label
      TryAgain;
    begin
      Reference := GetThreadId + 1;  //Reference.bit0 := 1
      with rbRingBuffer^ do begin
    TryAgain:
        //Set Left.Reference with respect to all other cores :)
        CPUTimeStamp := GetCPUTimeStamp + LoopTicks;
        AtStartReference := Left.Reference OR 1;  //Reference.bit0 := 1
        repeat
          CurrentReference := Left.Reference;
        until (CurrentReference AND 1 = 0) or (GetCPUTimeStamp - CPUTimeStamp > 0);
        //No threads present in ring buffer or current thread timeout
        if ((CurrentReference AND 1 <> 0) and (AtStartReference <> CurrentReference)) or
          not CAS32(CurrentReference, Reference, Left.Reference) then
          goto TryAgain;
        //Calculate RingBuffer NewLeft address
        CurrentLeft := Left.Link;
        NewLeft := pointer(cardinal(CurrentLeft) - SizeOf(TReferencedPtr));
        if cardinal(NewLeft) < cardinal(@Buffer) then
          NewLeft := EndBuffer;
        //Calculate distance
        result := integer(Right.Link) - Integer(NewLeft);
        //Check buffer full
        if result = 0 then
          //Clear Reference if task still own reference
          if CAS32(Reference, 0, Left.Reference) then
            Exit
          else
            goto TryAgain;
        //Set NewLeft.Reference
        NewLeft^.Reference := Reference;
        SFence;
        //Try to set link and try to exchange NewLeft and clear Reference if task own reference
        if (Reference <> Left.Reference) or
          not CAS64(NewLeft^.Link, Reference, link, Reference, NewLeft^) or
          not CAS64(CurrentLeft, Reference, NewLeft, 0, Left) then
          goto TryAgain;
        //Calculate result
        if result < 0 then
          result := Length - integer(cardinal(not Result) div SizeOf(TReferencedPtr))
        else
          result := cardinal(result) div SizeOf(TReferencedPtr);
      end; //with
    end; { TgjRingBuffer.InsertLeft }

    The RingBuffer unit can be found here: RingBuffer; the CAS functions: FockFreePrimitives; and the test program: RingBufferFlowTest. Thanks in advance, GJ
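    As a point of reference for the terms: lock-free guarantees system-wide progress (at whatever point you stop the schedule, some thread completes its operation in a finite number of steps), while wait-free guarantees per-thread progress (every thread completes in a bounded number of its own steps, regardless of contention or scheduling); "wait-freedom" is simply the name of the property a wait-free algorithm has, not a third, stronger category. The CAS retry loop in InsertLeft above is the classic lock-free shape. A minimal Java sketch for comparison (illustrative only, not a translation of the Delphi code):

    import java.util.concurrent.atomic.AtomicInteger;

    public final class Counters {

        private final AtomicInteger value = new AtomicInteger();

        // Lock-free: no locks are taken and *some* thread always succeeds,
        // but this particular thread may loop indefinitely if it keeps
        // losing the CAS race to other threads.
        public int incrementLockFree() {
            while (true) {
                int current = value.get();
                if (value.compareAndSet(current, current + 1)) {
                    return current + 1;
                }
            }
        }

        // Wait-free *if* the runtime maps this to an atomic fetch-and-add
        // instruction (e.g. LOCK XADD on x86): every caller then finishes in
        // a bounded number of its own steps, whatever the other threads do.
        public int incrementWaitFree() {
            return value.incrementAndGet();
        }
    }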

    Read the article

  • Is the Unix Philosophy still relevant in the Web 2.0 world?

    - by David Titarenco
    Introduction Hello, let me give you some background before I begin. I started programming when I was 5 or 6 on my dad's PSION II (some primitive BASIC-like language), then I learned more and more, eventually inching my way up to C, C++, Java, PHP, JS, etc. I think I'm a pretty decent coder. I think most people would agree. I'm not a complete social recluse, but I do stuff like write a virtual machine for fun. I've never taken a computer course in college because I've been in and out for the past couple of years and have only been taking core classes; never having been particularly amazing at school, perhaps I'm missing some basic tenet that most learn in CS101. I'm currently reading Coders at Work and this question is based on some ideas I read in there. A Brief (Fictionalized) Example So a certain sunny day I get an idea. I hire a designer and hammer away at some C/C++ code for a couple of months, soon thereafter releasing silvr.com, a website that transmutes lead into silver. Yep, I started my very own start-up and even gave it a clever web 2.0 name with a vowel missing. Mom and dad are proud. I come up with some numbers I should be seeing after 1, 2, 3, 6, 9, 12 months and set sail. Obviously, my transmuting server isn't perfect, sometimes it segfaults, sometimes it leaks memory. I fix it and keep truckin'. After all, gdb is my best friend. Eventually, I'm at a position where a very small community of people are happily transmuting lead into silver on a semi-regular basis, but they want to let their friends on MySpace know how many grams of lead they transmuted today. And they want to post images of their lead and silver nuggets on flickr. I'm losing out on potential traffic unless I let them log in with their Yahoo, Google, and Facebook accounts. They want webcam support and live cock fighting, merry-go-rounds and Jabberwockies. All these things seem necessary. The Aftermath Of course, I have to re-write the transmuting server! After all, I've been losing money all these months. I need OAuth libraries and OpenID libraries, JSON support, and the only stable Jabberwocky API is for Java. C++ isn't even an option anymore. I'm just one guy! The Java binary just grows and grows since I need some legacy Apache include for the JSON library, and some antiquated Sun dependency for OAuth support. Then I pick up a book like Coders at Work and read what people like jwz say about complexity... I think to myself.. Keep it simple, stupid. I like simple things. I've always loved the Unix Philosophy but even after trying to keep the new server source modular and sleek, I loathe having to write one more line of code. It feels that I'm just piling crap on top of other crap. Maybe I'm naive thinking every piece of software can be simple and clever. Maybe it's just a phase.. or is the Unix Philosophy basically dead when it comes to the current state of (web) development? I'm just kind of disheartened :(

    Read the article

  • Problem with memset after an instance of a user-defined class is created and a file is opened

    - by Liberalkid
    I'm having a weird problem with memset, that was something to do with a class I'm creating before it and a file I'm opening in the constructor. The class I'm working with normally reads in an array and transforms it into another array, but that's not important. The class I'm working with is: #include <vector> #include <algorithm> using namespace std; class PreProcess { public: PreProcess(char* fileName,char* outFileName); void SortedOrder(); private: vector< vector<double > > matrix; void SortRow(vector<double> &row); char* newFileName; vector< pair<double,int> > rowSorted; }; The other functions aren't important, because I've stopped calling them and the problem persists. Essentially I've narrowed it down to my constructor: PreProcess::PreProcess(char* fileName,char* outFileName):newFileName(outFileName){ ifstream input(fileName); input.close(); //this statement is inconsequential } I also read in the file in my constructor, but I've found that the problem persists if I don't read in the matrix and just open the file. Essentially I've narrowed it down to if I comment out those two lines the memset works properly, otherwise it doesn't. Now to the context of the problem I'm having with it: I wrote my own simple wrapper class for matrices. It doesn't have much functionality, I just need 2D arrays in the next part of my project and having a class handle everything makes more sense to me. The header file: #include <iostream> using namespace std; class Matrix{ public: Matrix(int r,int c); int &operator()(int i,int j) {//I know I should check my bounds here return matrix[i*columns+j]; } ~Matrix(); const void Display(); private: int *matrix; const int rows; const int columns; }; Driver: #include "Matrix.h" #include <string> using namespace std; Matrix::Matrix(int r,int c):rows(r),columns(c) { matrix=new int[rows*columns]; memset(matrix,0,sizeof(matrix)); } const void Matrix::Display(){ for(int i=0;i<rows;i++){ for(int j=0;j<columns;j++) cout << (*this)(i,j) << " "; cout << endl; } } Matrix::~Matrix() { delete matrix; } My main program runs: PreProcess test1(argv[1],argv[2]); //test1.SortedOrder(); Matrix test(10,10); test.Display(); And when I run this with the input line uncommented I get: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1371727776 32698 -1 0 0 0 0 0 6332656 0 -1 -1 0 0 6332672 0 0 0 0 0 0 0 0 0 0 0 0 0 -1371732704 32698 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 I really don't have a clue what's going on in memory to cause this, on a side note if I replace memset with: for(int i=0;i<rows*columns;i++) *(matrix+i) &= 0x0; Then it works perfectly, it also works if I don't open the file. If it helps I'm running GCC 64-bit version 4.2.4 on Ubuntu.I assume there's some functionality of memset that I'm not properly understanding.

    Read the article

  • Blackberry - application settings save/load

    - by Max Gontar
    Hi! I know two ways to save/load application settings: use PersistentStore use filesystem (store, since SDCard is optional) I'd like to know what are you're practicies of working with application settings? Using PersistentStore to save/load application settings The persistent store provides a means for objects to persist across device resets. A persistent object consists of a key-value pair. When a persistent object is committed to the persistent store, that object's value is stored in flash memory via a deep copy. The value can then be retrieved at a later point in time via the key. Example of helper class for storing and retrieving settings: class PSOptions { private PersistentObject mStore; private LongHashtableCollection mSettings; private long KEY_URL = 0; private long KEY_ENCRYPT = 1; private long KEY_REFRESH_PERIOD = 2; public PSOptions() { // "AppSettings" = 0x71f1f00b95850cfeL mStore = PersistentStore.getPersistentObject(0x71f1f00b95850cfeL); } public String getUrl() { Object result = get(KEY_URL); return (null != result) ? (String) result : null; } public void setUrl(String url) { set(KEY_URL, url); } public boolean getEncrypt() { Object result = get(KEY_ENCRYPT); return (null != result) ? ((Boolean) result).booleanValue() : false; } public void setEncrypt(boolean encrypt) { set(KEY_ENCRYPT, new Boolean(encrypt)); } public int getRefreshPeriod() { Object result = get(KEY_REFRESH_PERIOD); return (null != result) ? ((Integer) result).intValue() : -1; } public void setRefreshRate(int refreshRate) { set(KEY_REFRESH_PERIOD, new Integer(refreshRate)); } private void set(long key, Object value) { synchronized (mStore) { mSettings = (LongHashtableCollection) mStore.getContents(); if (null == mSettings) { mSettings = new LongHashtableCollection(); } mSettings.put(key, value); mStore.setContents(mSettings); mStore.commit(); } } private Object get(long key) { synchronized (mStore) { mSettings = (LongHashtableCollection) mStore.getContents(); if (null != mSettings && mSettings.size() != 0) { return mSettings.get(key); } else { return null; } } } } Example of use: class Scr extends MainScreen implements FieldChangeListener { PSOptions mOptions = new PSOptions(); BasicEditField mUrl = new BasicEditField("Url:", "http://stackoverflow.com/"); CheckboxField mEncrypt = new CheckboxField("Enable encrypt", false); GaugeField mRefresh = new GaugeField("Refresh period", 1, 60 * 10, 10, GaugeField.EDITABLE|FOCUSABLE); ButtonField mLoad = new ButtonField("Load settings", ButtonField.CONSUME_CLICK); ButtonField mSave = new ButtonField("Save settings", ButtonField.CONSUME_CLICK); public Scr() { add(mUrl); mUrl.setChangeListener(this); add(mEncrypt); mEncrypt.setChangeListener(this); add(mRefresh); mRefresh.setChangeListener(this); HorizontalFieldManager hfm = new HorizontalFieldManager(USE_ALL_WIDTH); add(hfm); hfm.add(mLoad); mLoad.setChangeListener(this); hfm.add(mSave); mSave.setChangeListener(this); loadSettings(); } public void fieldChanged(Field field, int context) { if (field == mLoad) { loadSettings(); } else if (field == mSave) { saveSettings(); } } private void saveSettings() { mOptions.setUrl(mUrl.getText()); mOptions.setEncrypt(mEncrypt.getChecked()); mOptions.setRefreshRate(mRefresh.getValue()); } private void loadSettings() { mUrl.setText(mOptions.getUrl()); mEncrypt.setChecked(mOptions.getEncrypt()); mRefresh.setValue(mOptions.getRefreshPeriod()); } }
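    For completeness, the second option mentioned above (the filesystem) can be done with the JSR-75 FileConnection API. A rough sketch follows; the file path and the flat DataOutputStream layout are assumptions for illustration, not something prescribed by the platform:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.file.FileConnection;

    public class FileOptions {

        // Hypothetical location in internal storage; use the SDCard root only if present.
        private static final String PATH = "file:///store/home/user/appsettings.bin";

        public static void save(String url, boolean encrypt, int refreshPeriod) throws Exception {
            FileConnection fc = (FileConnection) Connector.open(PATH, Connector.READ_WRITE);
            try {
                if (!fc.exists()) {
                    fc.create();
                } else {
                    fc.truncate(0);
                }
                DataOutputStream out = fc.openDataOutputStream();
                out.writeUTF(url);
                out.writeBoolean(encrypt);
                out.writeInt(refreshPeriod);
                out.close();
            } finally {
                fc.close();
            }
        }

        public static String loadUrl() throws Exception {
            FileConnection fc = (FileConnection) Connector.open(PATH, Connector.READ);
            try {
                DataInputStream in = fc.openDataInputStream();
                String url = in.readUTF();
                in.close();
                return url;
            } finally {
                fc.close();
            }
        }
    }

    The trade-off versus PersistentStore is roughly: the file survives application removal and can be inspected off-device, but it is not wiped with the application and offers no content protection unless you encrypt it yourself.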

    Read the article

  • JPA IndirectSet changes not reflected in Spring frontend

    - by Jon
    I'm having an issue with Spring JPA and IndirectSets. I have two entities, Parent and Child, defined below. I have a Spring form in which I'm trying to create a new Child and link it to an existing Parent, then have everything reflected in the database and in the web interface. What's happening is that it gets put into the database, but the UI doesn't seem to agree. The two entities that are linked to each other in a OneToMany relationship like so: @Entity @Table(name = "parent", catalog = "myschema", uniqueConstraints = @UniqueConstraint(columnNames = "ChildLinkID")) public class Parent { private Integer id; private String childLinkID; private Set<Child> children = new HashSet<Child>(0); @Id @GeneratedValue(strategy = IDENTITY) @Column(name = "id", unique = true, nullable = false) public Integer getId() { return this.id; } public void setId(Integer id) { this.id = id; } @Column(name = "ChildLinkID", unique = true, nullable = false, length = 6) public String getChildLinkID() { return this.childLinkID; } public void setChildLinkID(String childLinkID) { this.childLinkID = childLinkID; } @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY, mappedBy = "parent") public Set<Child> getChildren() { return this.children; } public void setChildren(Set<Child> children) { this.children = children; } } @Entity @Table(name = "child", catalog = "myschema") public class Child extends private Integer id; private Parent parent; @Id @GeneratedValue(strategy = IDENTITY) @Column(name = "id", unique = true, nullable = false) public Integer getId() { return this.id; } public void setId(Integer id) { this.id = id; } @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "ChildLinkID", referencedColumnName = "ChildLinkID", nullable = false) public Parent getParent() { return this.parent; } public void setParent(Parent parent) { this.parent = parent; } } And of course, assorted simple properties on each of them. Now, the problem is that when I edit those simple properties from my Spring interface, everything works beautifully. I can persist new entities of these types and they'll appear when using the JPATemplate to do a find on, say, all Parents (getJpaTemplate().find("select p from Parent p")) or on individual entities by ID or another property. The problem I'm running into is that now, I'm trying to create a new Child linked to an existing Parent through a link from the Parent's page. Here's the important bits of the Controller (note that I've placed the JPA foo in the controller here to make it clearer; the actual JpaDaoSupport is actually in another class, appropriately tiered): protected Object formBackingObject(HttpServletRequest request) throws Exception { String parentArg = request.getParameter("parent"); int parentId = Integer.parseInt(parentArg); Parent parent = getJpaTemplate().find(Parent.class, parentId); Child child = new Child(); child.setParent(parent); NewChildCommand command = new NewChildCommand(); command.setChild(child); return command; } protected ModelAndView onSubmit(Object cmd) throws Exception { NewChildCommand command = (NewChildCommand)cmd; Child child = command.getChild(); child.getParent().getChildren().add(child); getJpaTemplate().merge(child); return new ModelAndView(new RedirectView(getSuccessView())); } Like I said, I can run through the form and fill in the new values for the Child -- the Parent's details aren't even displayed. When it gets back to the controller, it goes through and saves it to the underlying database, but the interface never reflects it. 
Once I restart the app, it's all there and populated appropriately. What can I do to clear this up? I've tried to call extra merges, tried refreshes (which gave a transaction exception), everything short of just writing my own database access code. I've made sure that every class has an appropriate equals() and hashCode(), have full JPA debugging on to see that it's making appropriate SQL calls (it doesn't seem to make any new calls to the Child table) and stepped through in the debugger (it's all in IndirectSets, as expected, and between saving and displaying the Parent the object takes on a new memory address). What's my next step?
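    IndirectSet is EclipseLink/TopLink's lazy collection wrapper, and the symptom described (database updated, UI stale until restart) usually means the Parent instance cached by the persistence layer, or held by the web tier, still carries its old children collection: merging only the Child updates the foreign-key column but never touches that in-memory collection. Wiring both sides of the bidirectional association and merging through the parent, using the instance that merge() returns, is often enough. A rough sketch of onSubmit along those lines, reusing the classes shown above (a sketch, not a guaranteed drop-in fix):

    protected ModelAndView onSubmit(Object cmd) throws Exception {
        NewChildCommand command = (NewChildCommand) cmd;
        Child child = command.getChild();

        // Re-attach the parent to the current persistence context and use the
        // *returned* managed instance; merge() does not manage its argument.
        Parent managedParent = getJpaTemplate().merge(child.getParent());

        // Wire both sides of the association before saving, so the parent's
        // children collection is updated as well, not just the FK column.
        child.setParent(managedParent);
        managedParent.getChildren().add(child);

        // CascadeType.ALL on Parent.children persists the new Child too.
        getJpaTemplate().merge(managedParent);

        return new ModelAndView(new RedirectView(getSuccessView()));
    }

    If the stale copy turns out to come from the provider's shared object cache rather than the web tier, refreshing the Parent after the commit or disabling the shared cache for that entity are the other levers to try.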

    Read the article
