Search Results

Search found 10553 results on 423 pages for 'apache commons beanutils'.

Page 215/423 | < Previous Page | 211 212 213 214 215 216 217 218 219 220 221 222  | Next Page >

  • Netbeans automatically changes the file owner when updating files

    - by Alon_A
    We use NetBeans IDE 7.2 to edit our PHP files. In the Run Configuration it is configured as Remote Web Site to automatically save the changes on our web server (CentOS 6.3). The problem is that every time it updates the files, the owner of the file is changed from apache:apache to userThatUploadedTheFile:users. This causes us problems with SOAP cache files that are configured with apache:apache ownership, and we need to manually chown them back to apache:apache. We've checked the "Preserve Remote File Permissions" checkbox, so the permissions are not changed, only the owner. Is there any solution to preserve the ownership?
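
    One workaround, until the IDE can preserve remote ownership itself, is to reset the owner on the server right after uploads, for example from /etc/crontab. This is only a sketch; the cache path below is an assumption, not taken from the post:

        # Hypothetical path: restore apache:apache ownership every minute
        * * * * *  root  chown -R apache:apache /var/www/html/soap_cache

    A less hacky alternative is to upload as a user that shares a group with apache and rely on group permissions, but the periodic chown is the least invasive stopgap.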

    Read the article

  • HDFS: some datanodes of the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (ver 0.21). Some datanodes of the cluster are suddenly disconnected while I am running MapReduce code on 10 GB of data. After all mappers finished and around 80% of the reducers had been processed, randomly one or more datanodes disconnected from the network, and then the other datanodes started to disappear from the network even after I killed the MapReduce job once I found that some datanode was disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turned off the firewalls of all computing nodes, disabled SELinux and increased the open file limit to 20000, but none of it worked... Does anyone have an idea how to solve this problem? The following is the error log from MapReduce: 12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010 at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820) at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427) The following are logs from the datanode: 2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010 2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010 2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: > Connection timed out at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701) at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257) at java.lang.Thread.run(Thread.java:722) 2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453 2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. 
ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010] at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246) at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164) at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388) at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476) at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284) at java.lang.Thread.run(Thread.java:722) hdfs-site.xml <configuration> <property> <name>dfs.name.dir</name> <value>/home/hadoop/data/name</value> </property> <property> <name>dfs.data.dir</name> <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value> </property> <property> <name>dfs.replication</name> <value>3</value> </property> <property> <name>dfs.datanode.max.xcievers</name> <value>4096</value> </property> <property> <name>dfs.http.address</name> <value>0.0.0.0:20070</value> <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.http.address</name> <value>0.0.0.0:20075</value> <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.secondary.http.address</name> <value>0.0.0.0:20090</value> <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>dfs.datanode.address</name> <value>0.0.0.0:20010</value> <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port. </description> <property> <name>dfs.datanode.ipc.address</name> <value>0.0.0.0:20020</value> <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port. 
</description> </property> <property> <name>dfs.datanode.https.address</name> <value>0.0.0.0:20475</value> </property> <property> <name>dfs.https.address</name> <value>0.0.0.0:20470</value> </property> </configuration> mapred-site.xml <configuration> <property> <name>mapred.job.tracker</name> <value>masternode:29001</value> </property> <property> <name>mapred.system.dir</name> <value>/home/hadoop/data/mapreduce/system</value> </property> <property> <name>mapred.local.dir</name> <value>/home/hadoop/data/mapreduce/local</value> </property> <property> <name>mapred.map.tasks</name> <value>32</value> <description> default number of map tasks per job.</description> </property> <property> <name>mapred.tasktracker.map.tasks.maximum</name> <value>4</value> </property> <property> <name>mapred.reduce.tasks</name> <value>8</value> <description> default number of reduce tasks per job.</description> </property> <property> <name>mapred.map.child.java.opts</name> <value>-Xmx2048M</value> </property> <property> <name>io.sort.mb</name> <value>500</value> </property> <property> <name>mapred.task.timeout</name> <value>1800000</value> <!-- 30 minutes --> </property> <property> <name>mapred.job.tracker.http.address</name> <value>0.0.0.0:20030</value> <description> 50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port. </description> </property> <property> <name>mapred.task.tracker.http.address</name> <value>0.0.0.0:20060</value> <description> 50060 </property> </configuration>
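
    One thing worth double-checking in this situation (xcievers already raised, firewalls off) is whether the limits actually apply to the user the datanodes run as; nofile and nproc limits that only cover a login shell, not the daemons, produce exactly this kind of cascading disconnect. A hedged sketch of the usual checks, assuming the daemons run as a user named hadoop (the name and values are illustrative):

        # Check the limits the datanode user really gets, not just your own shell
        su - hadoop -c 'ulimit -n -u'

        # Add to /etc/security/limits.conf, then restart the Hadoop daemons
        hadoop  soft  nofile  65536
        hadoop  hard  nofile  65536
        hadoop  soft  nproc   32768
        hadoop  hard  nproc   32768

    If the limits check out, the next suspects are flaky NICs or switch ports between the nodes, since the datanode log above shows plain connection timeouts between peers.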

    Read the article

  • Is Apache 2.2.22 able to sustain 1,000 simultaneously connected clients?

    - by Fnux
    For an article in a newspaper, I'm benchmarking 5 different web servers (Apache2, Cherokee, Lighttpd, Monkey and Nginx). The tests consist of measuring the execution times as well as different parameters such as the number of requests served per second, the amount of RAM and the CPU used, during a growing load of simultaneous clients (from 1 to 1,000 in steps of 10), each client sending 1,000,000 requests for a small fixed file, then a medium fixed file, then small dynamic content (hello.php) and finally complex dynamic content (the computation of the reimbursement of a loan). All the web servers are able to sustain such a load (up to 1,000 clients) except Apache2, which always stops responding when the test reaches 450 to 500 simultaneous clients. My configuration is: CPU: AMD FX 8150 8 cores @ 4.2 GHz RAM: 32 GB. SSD: 2 x Crucial 240 GB SATA6 OS: Ubuntu 12.04.3 64 bit WS: Apache 2.2.22 My Apache2 configuration is as follows: /etc/apache2/apache2.conf LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 30 KeepAlive On MaxKeepAliveRequests 1000000 KeepAliveTimeout 2 ServerName "fnux.net" <IfModule mpm_prefork_module> StartServers 16 MinSpareServers 16 MaxSpareServers 16 ServerLimit 2048 MaxClients 1024 MaxRequestsPerChild 0 </IfModule> User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType None HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel emerg Include mods-enabled/*.load Include mods-enabled/*.conf Include httpd.conf Include ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include conf.d/ Include sites-enabled/ /etc/apache2/ports.conf NameVirtualHost *:8180 Listen 8180 <IfModule mod_ssl.c> Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> /etc/apache2/mods-available <IfModule mod_fastcgi.c> AddHandler php5-fcgi .php Action php5-fcgi /cgi-bin/php5.external <Location "/cgi-bin/php5.external"> Order Deny,Allow Deny from All Allow from env=REDIRECT_STATUS </Location> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:8180> ServerAdmin webmaster@localhost DocumentRoot /var/www/apache2 <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel emerg ##### CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> <IfModule mod_fastcgi.c> AddHandler php5-fcgi .php Action php5-fcgi /php5-fcgi Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization </IfModule> </VirtualHost> /etc/security/limits.conf * soft nofile 1000000 * hard nofile 1000000 So, I would truly appreciate your advice on setting up Apache2 so that it can sustain 1,000 simultaneous clients, if this is even possible. TIA for your help. Cheers.
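
    For what it's worth, the prefork settings above are usually the bottleneck in this kind of test: with KeepAlive on, each of the 1,000 clients pins an entire Apache process, so the process pool is exhausted by idle keep-alive connections long before CPU or RAM run out. Since PHP is already served over FastCGI here, a threaded MPM is an option; a hedged sketch of a worker-MPM section (the numbers are starting points, not values tuned for this benchmark):

        <IfModule mpm_worker_module>
            # 64 processes x 64 threads = 4096 worker threads
            ServerLimit          64
            StartServers          8
            ThreadsPerChild      64
            MinSpareThreads     128
            MaxSpareThreads     512
            MaxClients         4096
            MaxRequestsPerChild   0
        </IfModule>

    On Ubuntu 12.04 that means installing apache2-mpm-worker in place of apache2-mpm-prefork; the event MPM (or an Apache 2.4 build) is the other usual recommendation for large numbers of mostly idle keep-alive clients.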

    Read the article

  • java AbstractMethodError

    - by Akhil
    How do I handle this error in Lucene: java.lang.AbstractMethodError: org.apache.lucene.store.Directory.listAll()[Ljava/lang/String; at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:568) at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:69) at org.apache.lucene.index.IndexReader.open(IndexReader.java:316) at org.apache.lucene.index.IndexReader.open(IndexReader.java:188) I am making a Lucene function call, but it in turn calls an abstract method of some class, as is evident from the error above. What is the workaround for this? Thanks, --Akhil
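
    An AbstractMethodError at runtime almost always means mixed versions: the code calling Directory.listAll() was built against a newer Lucene than the Directory implementation actually loaded (or two lucene-core jars are on the classpath). A small diagnostic sketch, assuming nothing beyond the classes named in the trace, to see which jar each class really comes from:

        // Prints the jar each suspect class was loaded from; mismatched
        // locations usually reveal two Lucene versions on the classpath.
        public class WhichJar {
            public static void main(String[] args) throws Exception {
                String[] suspects = {
                    "org.apache.lucene.store.Directory",
                    "org.apache.lucene.index.SegmentInfos"
                };
                for (String name : suspects) {
                    Class<?> c = Class.forName(name);
                    System.out.println(name + " -> "
                        + c.getProtectionDomain().getCodeSource().getLocation());
                }
            }
        }

    If two different jars show up, aligning everything on one Lucene version is the real fix; there is no try/catch workaround for an AbstractMethodError.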

    Read the article

  • My httpd.conf was accidentally deleted, can't start Apache. How do I recreate httpd.conf on GoDaddy's VDS?

    - by mdm414
    Hi, I don't know if it's appropriate to ask for this, but I really need to know the initial content of /var/www/vhosts/mydomain.com/conf/httpd.include on GoDaddy's VDS, because the conf directory was accidentally deleted by somebody and now I cannot start Apache. We didn't purchase an assisted service plan, so GoDaddy won't help me with this. If you're using GoDaddy's VDS plan, please help me with this. I also need to include in httpd.conf the lines needed for Plesk to work. Thanks a lot
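
    While waiting for someone on the same plan to share the stock file, a generic per-domain include is often enough to get Apache starting again so the site is at least reachable. This is a hedged, minimal example, not GoDaddy's or Plesk's original file; httpdocs and the log paths are conventional Plesk locations and may differ on your plan:

        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            # Placeholder paths in the usual Plesk layout
            DocumentRoot /var/www/vhosts/mydomain.com/httpdocs
            ErrorLog /var/www/vhosts/mydomain.com/statistics/logs/error_log
            CustomLog /var/www/vhosts/mydomain.com/statistics/logs/access_log combined
            <Directory /var/www/vhosts/mydomain.com/httpdocs>
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Once Apache starts, Plesk's own reconfiguration tools can usually regenerate the real per-domain includes.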

    Read the article

  • How to upgrade all dependencies to a specific version

    - by Calm Storm
    Hi, I tried doing a mvn dependency:tree and I get a tree of dependencies. My question is, My project depends on many modules which internally depends on many spring artifacts. There are a few version clashes. I want to upgrade all spring related libraries to say the latest one (2.6.x or above). What is the preferred way to do this? Should I declare all the deps spring-context, spring-support (and 10 other artifacts) in my pom.xml and point them to 2.6.x ? Is there any other better method ? [INFO] +- com.xxxx:yyy-jar:jar:1.0-SNAPSHOT:compile [INFO] | +- com.xxxx:zzz-commons:jar:1.0-SNAPSHOT:compile [INFO] | | +- org.springframework:spring-dao:jar:2.0.7:compile [INFO] | | +- org.springframework:spring-jdbc:jar:2.0.7:compile [INFO] | | +- org.springframework:spring-web:jar:2.0.7:compile [INFO] | | +- org.springframework:spring-support:jar:2.0.7:compile [INFO] | | +- net.sf.ehcache:ehcache:jar:1.2:compile [INFO] | | +- commons-collections:commons-collections:jar:3.2:compile [INFO] | | +- aspectj:aspectjweaver:jar:1.5.3:compile [INFO] | | +- betex-commons:betex-commons:jar:5.5.1-2:compile [INFO] | | \- javax.servlet:servlet-api:jar:2.4:compile [INFO] | +- org.springframework:spring-beans:jar:2.0.7:compile [INFO] | +- org.springframework:spring-jmx:jar:2.0.7:compile [INFO] | +- org.springframework:spring-remoting:jar:2.0.7:compile [INFO] | +- org.apache.cxf:cxf-rt-core:jar:2.0.2-incubator:compile [INFO] | | +- org.apache.cxf:cxf-api:jar:2.0.2-incubator:compile [INFO] | | | +- org.apache.geronimo.specs:geronimo-activation_1.1_spec:jar:1.0-M1:compile [INFO] | | | +- org.codehaus.woodstox:wstx-asl:jar:3.2.1:compile [INFO] | | | +- org.apache.neethi:neethi:jar:2.0.2:compile [INFO] | | | \- org.apache.cxf:cxf-common-schemas:jar:2.0.2-incubator:compile UPDATE : I have removed the extra question about "\-" so my question is now what the subject asks for :)
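
    One common approach: rather than redeclaring every Spring artifact as a direct dependency, pin the versions once in a dependencyManagement block, which overrides whatever version the transitive dependencies ask for. A hedged sketch (2.5.6 below is just an example of a newer 2.x release; substitute whichever version you are actually targeting):

        <dependencyManagement>
          <dependencies>
            <!-- Pins the version used whenever these artifacts are pulled in
                 transitively; they do not need to appear under <dependencies>. -->
            <dependency>
              <groupId>org.springframework</groupId>
              <artifactId>spring-beans</artifactId>
              <version>2.5.6</version>
            </dependency>
            <dependency>
              <groupId>org.springframework</groupId>
              <artifactId>spring-web</artifactId>
              <version>2.5.6</version>
            </dependency>
            <!-- ...repeat for the remaining org.springframework artifacts in the tree -->
          </dependencies>
        </dependencyManagement>

    Rerunning mvn dependency:tree afterwards shows whether the clashes are gone.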

    Read the article

  • Restlet with GWT and Tomcat error

    - by Holograham
    I am getting this error on startup I am using GWT 2.0.3 and Reslet RC3 type Exception report message description The server encountered an internal error () that prevented it from fulfilling this request. exception javax.servlet.ServletException: Servlet.init() for servlet adapter threw exception org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:619) root cause java.lang.NoSuchMethodError: org.restlet.Client.handle(Lorg/restlet/Request;)Lorg/restlet/Response; org.restlet.ext.servlet.ServerServlet.createComponent(ServerServlet.java:423) org.restlet.ext.servlet.ServerServlet.getComponent(ServerServlet.java:763) org.restlet.ext.servlet.ServerServlet.init(ServerServlet.java:881) javax.servlet.GenericServlet.init(GenericServlet.java:212) org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) java.lang.Thread.run(Thread.java:619) My web XML is as follows: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd" > <web-app> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list> <context-param> <param-name>org.restlet.clients</param-name> <param-value>CLAP FILE WAR</param-value> </context-param> <servlet> <servlet-name>adapter</servlet-name> <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class> <init-param> <param-name>org.restlet.application</param-name> <param-value>com.tdc.Propspace.server.TestServerApplication</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>adapter</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> </web-app> Any ideas what would cause this?

    Read the article

  • Strange HttpCore NIO exception in Java

    - by bruce dou
    Exception in thread "Thread-0" java.lang.NullPointerException at org.apache.http.impl.nio.reactor.AbstractIOReactor.closeActiveChannels(AbstractIOReactor.java:532) at org.apache.http.impl.nio.reactor.AbstractIOReactor.hardShutdown(AbstractIOReactor.java:564) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.doShutdown(AbstractMultiworkerIOReactor.java:411) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:340) at com.***.clawer.Clawer$1.run(Clawer.java:81) at java.lang.Thread.run(Unknown Source) Exception in thread "Thread-1" java.lang.IllegalStateException: I/O reactor has been shut down at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.connect(DefaultConnectingIOReactor.java:190) at com.***.clawer.Run.run(Run.java:29)

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have this following pig script, which works perfectly using grunt shell (stored the results to HDFS without any issues); however, the last job (ORDER BY) failed if I ran the same script using Java EmbeddedPig. If I replace the ORDER BY job by others, such as GROUP or FOREACH GENERATE, the whole script then succeeded in Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Anyone has any experience with this? Any help would be appreciated! The Pig script: REGISTER pig-udf-0.0.1-SNAPSHOT.jar; user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float); simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score; grouped_user_similarity = GROUP simplified_user_similarity BY user_id; ordered_user_similarity = FOREACH grouped_user_similarity { sorted = ORDER simplified_user_similarity BY sim_score DESC; top = LIMIT sorted 10; GENERATE group, top; }; top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10); all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0); grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id; influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score; ordered_influence_scores = ORDER influence_scores BY influence_score DESC; STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage(); The error log from Pig: 12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job 12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3 12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job 12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized 12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission. 12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1 12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir. 12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc 12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- 
/var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml 12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null 12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100 12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720 12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680 12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004 java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139) at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117) at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323) at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210) Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470 at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37) at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248) at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153) at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115) at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112) ... 6 more 12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete 12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed! 12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics: HadoopVersion PigVersion UserId StartedAt FinishedAt Features 0.20.2-cdh3u3 0.8.1-cdh3u3 cchuang 2012-04-05 10:00:34 2012-04-05 10:01:04 GROUP_BY,ORDER_BY Some jobs have failed! Stop running all dependent jobs Job Stats (time in seconds): JobId Maps Reduces MaxMapTime MinMapTIme AvgMapTime MaxReduceTime MinReduceTime AvgReduceTime Alias Feature Outputs job_local_0001 0 0 0 0 0 0 0 0 all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity GROUP_BY job_local_0002 0 0 0 0 0 0 0 0 grouped_influence_scores,influence_scores GROUP_BY,COMBINER job_local_0003 0 0 0 0 0 0 0 0 ordered_influence_scores SAMPLER Failed Jobs: JobId Alias Feature Message Outputs job_local_0004 ordered_influence_scores ORDER_BY Message: Job failed! 
Error - NA /tmp/cc-test-results-1, Input(s): Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000" Output(s): Failed to produce result in "/tmp/cc-test-results-1" Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager spill count : 0 Total bags proactively spilled: 0 Total records proactively spilled: 0 Job DAG: job_local_0001 -> job_local_0002, job_local_0002 -> job_local_0003, job_local_0003 -> job_local_0004, job_local_0004 12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs
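
    The tell-tale lines are the input path file:/Users/... and the LocalJobRunner entries: the embedded run is executing in local mode, so the sample file written by the sampler job that ORDER BY adds never lands where the following job looks for it, while GROUP/FOREACH pipelines survive because they have no sampler stage. A hedged sketch of forcing the embedded run onto the cluster (the property values and script name are placeholders):

        import java.util.Properties;
        import org.apache.pig.ExecType;
        import org.apache.pig.PigServer;

        public class RunPig {
            public static void main(String[] args) throws Exception {
                // Point embedded Pig at the cluster so all stages, including the
                // ORDER BY sampler, read and write HDFS rather than file:/ paths.
                Properties props = new Properties();
                props.setProperty("fs.default.name", "hdfs://localhost:8020"); // example value
                props.setProperty("mapred.job.tracker", "localhost:8021");     // example value
                PigServer pig = new PigServer(ExecType.MAPREDUCE, props);
                pig.registerScript("similarity.pig"); // placeholder for the script above
            }
        }

    Putting the cluster's Hadoop conf directory on the classpath of the embedding application achieves the same thing without hard-coding the properties.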

    Read the article

  • What does the \- mean in the mvn dependency tree output

    - by Calm Storm
    Hi, I tried doing a mvn dependency:tree and I get a tree of dependencies. The output looks like below. I want to know what is the "-" symbol that is shown at times and the "+-" symbol for other dependencies (it doesnt seem to be the scope) My actual question is, My project depends on many modules which internally depends on many spring artifacts. There are a few version clashes. I want to upgrade all spring related libraries to say the latest one (2.6.x or above). What is the preferred way to do this? Should I declare all the deps spring-context, spring-support (and 10 other artifacts) in my pom.xml and point them to 2.6.x ? Is there any other better method ? [INFO] +- com.xxxx:yyy-jar:jar:1.0-SNAPSHOT:compile [INFO] | +- com.xxxx:zzz-commons:jar:1.0-SNAPSHOT:compile [INFO] | | +- org.springframework:spring-dao:jar:2.0.7:compile [INFO] | | +- org.springframework:spring-jdbc:jar:2.0.7:compile [INFO] | | +- org.springframework:spring-web:jar:2.0.7:compile [INFO] | | +- org.springframework:spring-support:jar:2.0.7:compile [INFO] | | +- net.sf.ehcache:ehcache:jar:1.2:compile [INFO] | | +- commons-collections:commons-collections:jar:3.2:compile [INFO] | | +- aspectj:aspectjweaver:jar:1.5.3:compile [INFO] | | +- betex-commons:betex-commons:jar:5.5.1-2:compile [INFO] | | \- javax.servlet:servlet-api:jar:2.4:compile [INFO] | +- org.springframework:spring-beans:jar:2.0.7:compile [INFO] | +- org.springframework:spring-jmx:jar:2.0.7:compile [INFO] | +- org.springframework:spring-remoting:jar:2.0.7:compile [INFO] | +- org.apache.cxf:cxf-rt-core:jar:2.0.2-incubator:compile [INFO] | | +- org.apache.cxf:cxf-api:jar:2.0.2-incubator:compile [INFO] | | | +- org.apache.geronimo.specs:geronimo-activation_1.1_spec:jar:1.0-M1:compile [INFO] | | | +- org.codehaus.woodstox:wstx-asl:jar:3.2.1:compile [INFO] | | | +- org.apache.neethi:neethi:jar:2.0.2:compile [INFO] | | | \- org.apache.cxf:cxf-common-schemas:jar:2.0.2-incubator:compile

    Read the article

  • Apache Tiles and body onload event. Known issue?

    - by Angus
    We're prototyping a new visual framework using Apache Tiles for markup templating. Often the document onload event (actually Prototype's dom:loaded event) is fired before all DOM objects are actually loaded. Is it possible that the Tiles templates are loading in an asynchronous fashion and therefore doing an end run around the event handler? Has anyone else had this experience? Can anyone share a robust workaround? Thanks in advance

    Read the article

  • How do I change the root directory of an apache server?

    - by Spencer Cooley
    Does anyone know how to change the document root of the Apache server? I basically want localhost to be served from the /users/spencer/projects directory instead of /var/www. Edit: I ended up figuring it out. Some suggested I change the httpd.conf file, but I ended up finding a file at /etc/apache2/sites-available/default and changed the root directory from /var/www to /home/myusername/projects_folder, and that worked.
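
    For later readers, the change boils down to what the asker found: edit the vhost file and reload Apache. A minimal sketch of the relevant part of /etc/apache2/sites-available/default (the projects path is this question's example):

        <VirtualHost *:80>
            # Serve the default site from the projects directory instead of /var/www
            DocumentRoot /home/spencer/projects
            <Directory /home/spencer/projects>
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    After saving, run sudo service apache2 reload (or apachectl graceful) for the new document root to take effect.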

    Read the article

  • Blocking on DBCP connection pool (opening and closing connections). Is database connection pooling in OpenEJB pluggable?

    - by topchef
    We use OpenEJB on Tomcat (used to run on JBoss, Weblogic, etc.). While running load tests we experience significant performance problems with handling JMS messages (queues). Problem was localized to blocking on database connection pool getting or releasing connection to the pool. Blocking prevented concurrent MDB instances (threads) from running hence performance suffered 10-fold and worse. The same code used to run on application servers (with their respective connection pool implementations) with no blocking at all. Example of thread blocked: Name: JMS Resource Adapter-worker-23 State: BLOCKED on org.apache.commons.pool.impl.GenericObjectPool@1ea6b4a owned by: JMS Resource Adapter-worker-19 Total blocked: 18,426 Total waited: 0 Stack trace: org.apache.commons.pool.impl.GenericObjectPool.returnObject(GenericObjectPool.java:916) org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:91) - locked org.apache.commons.dbcp.PoolableConnection@1bcba8 org.apache.commons.dbcp.managed.ManagedConnection.close(ManagedConnection.java:147) com.xxxxx.persistence.DbHelper.closeConnection(DbHelper.java:290) .... Couple of questions. I am almost certain that some transactional attributes and properties contribute to this blocking, but MDBs are defined as non-transactional (we use both annotations and ejb-jar.xml). Some EJBs do use container-managed transactions though (and we can observe blocking there as well). Are there any DBCP configurations that may fix blocking? Is DBCP connection pool implementation replaceable in OpenEJB? How easy (difficult) to replace it with another library? Just in case this is how we define data source in OpenEJB (openejb.xml): <Resource id="MyDataSource" type="DataSource"> JdbcDriver oracle.jdbc.driver.OracleDriver JdbcUrl ${oracle.jdbc} UserName ${oracle.user} Password ${oracle.password} JtaManaged true InitialSize 5 MaxActive 30 ValidationQuery SELECT 1 FROM DUAL TestOnBorrow true </Resource>

    Read the article

  • RichFaces 3.3.2 is not working in WebSphere 7

    - by palakolanusrinu
    Hi, I'm new to RichFaces and JSF. I have developed one sample app and it's working fine in Tomcat 6; now I have moved the same app to WebSphere 7 and I'm facing this problem. Exception on the console: Unable to locate the tag library for uri //richfaces.org/rich" List of jars in the build path: commons-beanutils-1.8.3.jar commons-codec-1.4.jar commons-collections-3.2.1.jar commons-digester-2.0.jar commons-discovery-0.4.jar commons-logging.jar jsf-api.jar jsf-impl.jar jstl-1.2.jar jstl-api-1.2.jar jstl-impl-1.2.jar richfaces-api-3.3.2.SR1.jar richfaces-impl-3.3.2.SR1.jar richfaces-ui-3.3.2.SR1.jar Web.xml is http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5" index.html index.htm index.jsp default.html default.htm default.jsp Faces Servlet javax.faces.webapp.FacesServlet 1 Faces Servlet *.jsf State saving method: 'client' or 'server' (=default). See JSF Specification 2.5.2 javax.faces.STATE_SAVING_METHOD client javax.servlet.jsp.jstl.fmt.localizationContext resources.application com.sun.faces.config.ConfigureListener Faces Servlet *.faces <context-param> <param-name>org.richfaces.SKIN</param-name> <param-value>blueSky</param-value> </context-param> <filter> <display-name>RichFaces Filter</display-name> <filter-name>richfaces</filter-name> <filter-class>org.ajax4jsf.Filter</filter-class> <init-param> <param-name>createTempFiles</param-name> true maxRequestSize 2000000 richfaces Faces Servlet REQUEST FORWARD INCLUDE My JSP includes the tag libs like this: <%@taglib uri="http://java.sun.com/jsf/core" prefix="f"%> <%@taglib uri="http://java.sun.com/jsf/html" prefix="h"%> <%@taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%> <%@ taglib uri="http://richfaces.org/a4j" prefix="a4j"%> <%@ taglib uri="http://richfaces.org/rich" prefix="rich"%> Please help me, and thanks in advance

    Read the article

  • assamble.xml with dependencies

    - by Blanca
    Hi! I would like to generate a .jar application from a project made in Maven. I am working in Eclipse, and I ran: Run As / Maven assembly:assembly. This is the error message: [ERROR] Failed to execute goal org.apache.maven.plugins:maven-assembly-plugin:2.2-beta-4:assembly (default-cli) on project FeelIndexer: Error reading assemblies: No assembly descriptors found. - [Help 1] This is my assamble.xml: exe jar false true runtime commons-lang:commons-lang commons-cli:commons-cli target/classes I think I have to include something else to add the dependencies of my project, but I don't know how to do it! Suggestions? Thank you!
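
    The "No assembly descriptors found" message just means the assembly plugin was invoked without being told which descriptor to use. A hedged pom.xml sketch that either points at the custom descriptor or falls back to the built-in jar-with-dependencies descriptor (the file path and main class are assumptions):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-assembly-plugin</artifactId>
          <version>2.2-beta-4</version>
          <configuration>
            <!-- Reference the custom descriptor file (adjust the path)... -->
            <descriptors>
              <descriptor>src/main/assembly/assamble.xml</descriptor>
            </descriptors>
            <!-- ...or simply use the built-in descriptor that bundles all
                 runtime dependencies into a single executable jar. -->
            <descriptorRefs>
              <descriptorRef>jar-with-dependencies</descriptorRef>
            </descriptorRefs>
            <archive>
              <manifest>
                <mainClass>com.example.Main</mainClass> <!-- placeholder -->
              </manifest>
            </archive>
          </configuration>
        </plugin>

    With this in place, Run As / Maven assembly:assembly (or binding the plugin to the package phase) should produce the jar with its dependencies included.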

    Read the article

  • file upload in JSF using myfaces component

    - by prt
    Hi all, I am creating a JSF application where file uploading functionality is required. I have added all the required jar files to my /WEB-INF/lib folder: jsf-api.jar jsf-impl.jar jstl.jar standard.jar myfaces-extensions.jar commons-collections.jar commons-digester.jar commons-beanutils.jar commons-logging.jar commons-fileupload-1.0.jar but still, when trying to deploy the application on Apache Tomcat 6.0.29, I am getting the following error: org.apache.catalina.core.StandardContext addApplicationListener INFO: The listener "com.sun.faces.config.ConfigureListener" is already configured for this context. The duplicate definition has been ignored. org.apache.catalina.core.StandardContext start SEVERE: Error listenerStart PM org.apache.catalina.core.StandardContext start SEVERE: Context [/jsfApplication] startup failed due to previous errors org.apache.catalina.loader.WebappClassLoader clearReferencesJdbc The web application [/jsfApplication] registered the JDBC driver [com.mysql.jdbc.Driver] but failed to unregister it when the web application was stopped. To prevent a memory leak, the JDBC Driver has been forcibly unregistered. org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [/jsfApplication] appears to have started a thread named [Timer-0] but has failed to stop it. This is very likely to create a memory leak. org.apache.catalina.loader.WebappClassLoader clearReferencesThreads SEVERE: The web application [/jsfApplication] appears to have started a thread named [MySQL Statement Cancellation Timer] but has failed to stop it. This is very likely to create a memory leak. log4j:ERROR LogMananger.repositorySelector was null likely due to error in class reloading, using NOPLoggerRepository. I am also using Hibernate and the Spring framework for this application. Please help. Thanks.
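
    Two separate things are worth checking here. First, "SEVERE: Error listenerStart" means a context listener threw during startup; Tomcat usually writes the full stack trace to logs/localhost.<date>.log rather than the console, and a duplicated or clashing JSF jar is a common cause. Second, for file upload with the MyFaces extension components, the extensions filter has to be mapped in web.xml; a hedged sketch (the filter class below is the one shipped in Tomahawk-era jars, so verify the exact package inside the myfaces-extensions jar you actually have):

        <filter>
          <filter-name>MyFacesExtensionsFilter</filter-name>
          <filter-class>org.apache.myfaces.webapp.filter.ExtensionsFilter</filter-class>
        </filter>
        <filter-mapping>
          <filter-name>MyFacesExtensionsFilter</filter-name>
          <servlet-name>Faces Servlet</servlet-name>
        </filter-mapping>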

    Read the article

  • Apache Virtual Host in Windows - how do I deal with Symbolic links?

    - by Peeaytchpee
    Hi folks, I'm trying to run a virtual host on a WAPP stack. My virtual host has the FollowSymLinks option, but in Windows, all those symbolic links (I'm using shortcuts, and I think this may be the problem) have the .lnk extension. So if I try to access settings.html, Apache can't find it because all I have sitting there is settings.html.lnk. Apologies if my question is unclear.
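
    Shortcuts are indeed the problem: a .lnk file is an ordinary file that only Explorer interprets, so Apache's FollowSymLinks never treats it as a link. On Vista/Server 2008 and later, NTFS supports real symbolic links, which the filesystem resolves transparently for Apache. A hedged example from an elevated command prompt (the paths are made up):

        :: Directory symlink: link name first, then the target
        mklink /D "C:\wamp\www\myapp" "D:\projects\myapp"

        :: Single-file symlink: omit /D
        mklink "C:\wamp\www\settings.html" "D:\projects\myapp\settings.html"

    For directories only, a junction (mklink /J) is an alternative that does not require the symlink privilege.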

    Read the article

  • Maven giving error Missing artifact

    - by newbie
    I'm adding Google App Engine, Spring and Tiles 2 to the same project, and for some reason Apache Maven gives this error: Missing artifact org.apache.commons:com.springsource.org.apache.commons.collections:jar:3.2.1:compile Here is my pom.xml where I include the dependency: <dependency> <groupId>org.apache.commons</groupId> <artifactId>com.springsource.org.apache.commons.collections</artifactId> <version>3.2.1</version> <type>pom</type> </dependency>
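
    The com.springsource.* artifact IDs come from the SpringSource bundle repository, which is not in Maven Central, so the artifact cannot be resolved unless that repository is declared; note also that the declaration above uses <type>pom</type> while the error is for the jar. If the OSGi-repackaged variant is not specifically needed, the simpler fix is the stock artifact from Central; a hedged sketch:

        <dependency>
          <!-- Plain Apache Commons Collections from Maven Central; same classes,
               without the SpringSource/OSGi repackaging -->
          <groupId>commons-collections</groupId>
          <artifactId>commons-collections</artifactId>
          <version>3.2.1</version>
        </dependency>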

    Read the article
