Search Results

Search found 10808 results on 433 pages for 'apache regexp'.


  • How to set up Tomcat 7 as a service on Windows Server 2008

    - by birdy
    I set up Tomcat 7 as a Windows service as below:

      c:\Program Files (x86)\Tomcat7\apache-tomcat-7.0.32\bin>service.bat install
      Installing the service 'Tomcat7' ...
      Using CATALINA_HOME: "C:\Program Files (x86)\Tomcat7\apache-tomcat-7.0.32"
      Using CATALINA_BASE: "C:\Program Files (x86)\Tomcat7\apache-tomcat-7.0.32"
      Using JAVA_HOME:     "C:\Program Files (x86)\Java\jdk1.7.0_09"
      Using JRE_HOME:      "C:\Program Files (x86)\Java\jdk1.7.0_09\jre"
      Using JVM:           "C:\Program Files (x86)\Java\jdk1.7.0_09\jre\bin\server\jvm.dll"

    However, when I try to start the service, I get the error below:

      c:\Program Files (x86)\Tomcat7\apache-tomcat-7.0.32\bin>tomcat7.exe
      %1 is not a valid Win32 application.
      Failed to run service as console application

    This is the file I downloaded from Apache: apache-tomcat-7.0.32-windows-x64.zip. I am able to start Tomcat successfully as a standalone process on port 8080: I open a command prompt, type startup.bat, and it starts up fine. Question: how can I resolve this, and what should I be troubleshooting?
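
    The "%1 is not a valid Win32 application" message usually points at a 32-bit/64-bit mismatch: the x64 tomcat7.exe wrapper from apache-tomcat-7.0.32-windows-x64.zip cannot load the 32-bit jvm.dll under C:\Program Files (x86)\Java. A hedged sketch of checking and reinstalling the service against a 64-bit JDK (the 64-bit JDK path below is hypothetical):

      rem Check the JVM architecture; a 64-bit JDK reports "64-Bit Server VM"
      "C:\Program Files (x86)\Java\jdk1.7.0_09\bin\java.exe" -version

      rem Remove the service, point JAVA_HOME at a 64-bit JDK, then reinstall
      cd "C:\Program Files (x86)\Tomcat7\apache-tomcat-7.0.32\bin"
      service.bat remove
      set "JAVA_HOME=C:\Program Files\Java\jdk1.7.0_09"
      service.bat install

    The alternative is to use the 32-bit Windows zip of Tomcat so the service wrapper matches the existing 32-bit JDK.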

    Read the article

  • Netbeans automatically changes the file owner when updating files

    - by Alon_A
    We use the NetBeans 7.2 IDE to edit our PHP files. In the Run Configuration it is configured as a Remote Web Site so that changes are automatically saved to our web server (CentOS 6.3). The problem is that every time a file is updated, its owner changes from apache:apache to userThatUploadedTheFile:users. This causes us problems with SOAP cache files that are expected to be owned by apache:apache, and we need to manually chown them back to apache:apache. We've checked the "Preserve Remote File Permissions" checkbox, so the permissions are not changed, only the owner. Is there any way to preserve the ownership?
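
    For what it's worth, the owner of an uploaded file is always the account the IDE connects with, so the usual workarounds are either to connect as the apache user or to put ownership back after uploads. A hedged sketch of the latter (the path is illustrative and assumes the upload user may sudo):

      # restore ownership of the SOAP cache directory after an upload
      sudo chown -R apache:apache /var/www/project/soap-cache

      # or, if group access is enough, keep new files in the apache group
      sudo chgrp apache /var/www/project/soap-cache
      sudo chmod g+s /var/www/project/soap-cache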

    Read the article

  • HDFS: some datanodes in the cluster are suddenly disconnected while reducers are running

    - by user1429825
    I have 8 slave computers and 1 master computer running Hadoop (version 0.21). Some datanodes of the cluster suddenly disconnect while I am running MapReduce code on 10 GB of data. After all mappers finish and around 80% of the reducers have been processed, one or more datanodes randomly disconnect from the network, and then the other datanodes start to disappear from the network as well, even if I kill the MapReduce job as soon as I notice a datanode has disconnected. I've tried changing dfs.datanode.max.xcievers to 4096, turning off the firewalls on all computing nodes, disabling SELinux, and increasing the open-file limit to 20000, but none of it helped. Does anyone have an idea how to solve this problem?

    The following is the error log from MapReduce:

      12/06/01 12:31:29 INFO mapreduce.Job: Task Id : attempt_201206011227_0001_r_000006_0, Status : FAILED
      java.io.IOException: Bad connect ack with firstBadLink as ***.***.***.148:20010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:889)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:820)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:427)

    And the following are logs from a datanode:

      2012-06-01 13:01:01,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-5549263231281364844_3453 src: /*.*.*.147:56205 dest: /*.*.*.142:20010
      2012-06-01 13:01:01,136 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020) Starting thread to transfer block blk_-3849519151985279385_5906 to *.*.*.147:20010
      2012-06-01 13:01:19,135 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-5797481564121417802_3453 to *.*.*.146:20010 got java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:701)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:373)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1257)
        at java.lang.Thread.run(Thread.java:722)
      2012-06-01 13:06:20,342 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6674438989226364081_3453
      2012-06-01 13:09:01,781 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(*.*.*.142:20010, storageID=DS-1534489105-*.*.*.142-20010-1337757934836, infoPort=20075, ipcPort=20020):Failed to transfer blk_-3849519151985279385_5906 to *.*.*.147:20010 got java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/*.*.*.142:60057 remote=/*.*.*.147:20010]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:388)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:476)
        at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1284)
        at java.lang.Thread.run(Thread.java:722)

    hdfs-site.xml:

      <configuration>
        <property>
          <name>dfs.name.dir</name>
          <value>/home/hadoop/data/name</value>
        </property>
        <property>
          <name>dfs.data.dir</name>
          <value>/home/hadoop/data/hdfs1,/home/hadoop/data/hdfs2,/home/hadoop/data/hdfs3,/home/hadoop/data/hdfs4,/home/hadoop/data/hdfs5</value>
        </property>
        <property>
          <name>dfs.replication</name>
          <value>3</value>
        </property>
        <property>
          <name>dfs.datanode.max.xcievers</name>
          <value>4096</value>
        </property>
        <property>
          <name>dfs.http.address</name>
          <value>0.0.0.0:20070</value>
          <description>50070 The address and the base port where the dfs namenode web ui will listen on. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.datanode.http.address</name>
          <value>0.0.0.0:20075</value>
          <description>50075 The datanode http server address and port. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.secondary.http.address</name>
          <value>0.0.0.0:20090</value>
          <description>50090 The secondary namenode http server address and port. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.datanode.address</name>
          <value>0.0.0.0:20010</value>
          <description>50010 The address where the datanode server will listen to. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.datanode.ipc.address</name>
          <value>0.0.0.0:20020</value>
          <description>50020 The datanode ipc server address and port. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>dfs.datanode.https.address</name>
          <value>0.0.0.0:20475</value>
        </property>
        <property>
          <name>dfs.https.address</name>
          <value>0.0.0.0:20470</value>
        </property>
      </configuration>

    mapred-site.xml:

      <configuration>
        <property>
          <name>mapred.job.tracker</name>
          <value>masternode:29001</value>
        </property>
        <property>
          <name>mapred.system.dir</name>
          <value>/home/hadoop/data/mapreduce/system</value>
        </property>
        <property>
          <name>mapred.local.dir</name>
          <value>/home/hadoop/data/mapreduce/local</value>
        </property>
        <property>
          <name>mapred.map.tasks</name>
          <value>32</value>
          <description>default number of map tasks per job.</description>
        </property>
        <property>
          <name>mapred.tasktracker.map.tasks.maximum</name>
          <value>4</value>
        </property>
        <property>
          <name>mapred.reduce.tasks</name>
          <value>8</value>
          <description>default number of reduce tasks per job.</description>
        </property>
        <property>
          <name>mapred.map.child.java.opts</name>
          <value>-Xmx2048M</value>
        </property>
        <property>
          <name>io.sort.mb</name>
          <value>500</value>
        </property>
        <property>
          <name>mapred.task.timeout</name>
          <value>1800000</value> <!-- 30 minutes -->
        </property>
        <property>
          <name>mapred.job.tracker.http.address</name>
          <value>0.0.0.0:20030</value>
          <description>50030 The job tracker http server address and port the server will listen on. If the port is 0 then the server will start on a free port.</description>
        </property>
        <property>
          <name>mapred.task.tracker.http.address</name>
          <value>0.0.0.0:20060</value>
          <description>50060</description>
        </property>
      </configuration>
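
    The 480000 millis timeout in the datanode log is the default dfs.datanode.socket.write.timeout, so one thing that is sometimes tried (a sketch only, not a guaranteed fix for the disconnects) is raising the datanode socket timeouts and handler count in hdfs-site.xml; the values below are illustrative:

      <property>
        <name>dfs.datanode.socket.write.timeout</name>
        <value>3000000</value>
      </property>
      <property>
        <name>dfs.socket.timeout</name>
        <value>3000000</value>
      </property>
      <property>
        <name>dfs.datanode.handler.count</name>
        <value>10</value>
      </property>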

    Read the article

  • Is Apache 2.2.22 able to sustain 1,000 simultaneously connected clients?

    - by Fnux
    For an article in a newspaper, I'm benchmarking 5 different web servers (Apache2, Cherokee, Lighttpd, Monkey and Nginx). The tests consist of measuring execution times as well as parameters such as the number of requests served per second and the amount of RAM and CPU used, under a growing load of simultaneous clients (from 1 to 1,000 in steps of 10), with each client sending 1,000,000 requests for a small fixed file, then for a medium fixed file, then for a small piece of dynamic content (hello.php), and finally for a complex piece of dynamic content (the computation of the repayment of a loan). All the web servers are able to sustain such a load (up to 1,000 clients) except Apache2, which always stops responding when the test reaches 450 to 500 simultaneous clients. My configuration is:

      CPU: AMD FX 8150, 8 cores @ 4.2 GHz
      RAM: 32 GB
      SSD: 2 x Crucial 240 GB SATA6
      OS: Ubuntu 12.04.3 64 bit
      WS: Apache 2.2.22

    My Apache2 configuration is as follows:

    /etc/apache2/apache2.conf

      LockFile ${APACHE_LOCK_DIR}/accept.lock
      PidFile ${APACHE_PID_FILE}
      Timeout 30
      KeepAlive On
      MaxKeepAliveRequests 1000000
      KeepAliveTimeout 2
      ServerName "fnux.net"
      <IfModule mpm_prefork_module>
          StartServers 16
          MinSpareServers 16
          MaxSpareServers 16
          ServerLimit 2048
          MaxClients 1024
          MaxRequestsPerChild 0
      </IfModule>
      User ${APACHE_RUN_USER}
      Group ${APACHE_RUN_GROUP}
      AccessFileName .htaccess
      <Files ~ "^\.ht">
          Order allow,deny
          Deny from all
          Satisfy all
      </Files>
      DefaultType None
      HostnameLookups Off
      ErrorLog ${APACHE_LOG_DIR}/error.log
      LogLevel emerg
      Include mods-enabled/*.load
      Include mods-enabled/*.conf
      Include httpd.conf
      Include ports.conf
      LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
      LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
      LogFormat "%h %l %u %t \"%r\" %>s %O" common
      LogFormat "%{Referer}i -> %U" referer
      LogFormat "%{User-agent}i" agent
      Include conf.d/
      Include sites-enabled/

    /etc/apache2/ports.conf

      NameVirtualHost *:8180
      Listen 8180
      <IfModule mod_ssl.c>
          Listen 443
      </IfModule>
      <IfModule mod_gnutls.c>
          Listen 443
      </IfModule>

    /etc/apache2/mods-available

      <IfModule mod_fastcgi.c>
          AddHandler php5-fcgi .php
          Action php5-fcgi /cgi-bin/php5.external
          <Location "/cgi-bin/php5.external">
              Order Deny,Allow
              Deny from All
              Allow from env=REDIRECT_STATUS
          </Location>
      </IfModule>

    /etc/apache2/sites-available/default

      <VirtualHost *:8180>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www/apache2
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          LogLevel emerg
          ##### CustomLog ${APACHE_LOG_DIR}/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
          <IfModule mod_fastcgi.c>
              AddHandler php5-fcgi .php
              Action php5-fcgi /php5-fcgi
              Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
              FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
          </IfModule>
      </VirtualHost>

    /etc/security/limits.conf

      * soft nofile 1000000
      * hard nofile 1000000

    So, I would truly appreciate your advice on how to set up Apache2 so that it can sustain 1,000 simultaneous clients, if that is even possible. TIA for your help. Cheers.
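
    With the prefork MPM every connection occupies a full httpd process, so 1,000 concurrent keep-alive clients means on the order of 1,000 processes, which is usually where prefork falls over first. A hedged sketch of equivalent settings for the threaded worker MPM (assuming the apache2-mpm-worker package is installed and PHP stays on the external FastCGI setup shown above, since mod_php and worker do not mix well):

      <IfModule mpm_worker_module>
          StartServers          4
          ServerLimit          32
          ThreadsPerChild      64
          MaxClients         2048
          MinSpareThreads      64
          MaxSpareThreads     256
          MaxRequestsPerChild   0
      </IfModule>

    With ServerLimit 32 and ThreadsPerChild 64, MaxClients may go up to 2048 threads while keeping the process count low.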

    Read the article

  • Java AbstractMethodError

    - by Akhil
    How do I handle this error in Lucene?

      java.lang.AbstractMethodError: org.apache.lucene.store.Directory.listAll()[Ljava/lang/String;
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:568)
        at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:69)
        at org.apache.lucene.index.IndexReader.open(IndexReader.java:316)
        at org.apache.lucene.index.IndexReader.open(IndexReader.java:188)

    I am making a Lucene call, but unfortunately it in turn calls an abstract method of some class, as is evident from the error above. What is the workaround for this? Thanks, --Akhil
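
    An AbstractMethodError at runtime is usually a classpath problem rather than a coding one: two different Lucene versions are being loaded, and the older Directory implementation simply has no listAll() method that the newer IndexReader.open() path calls. A hedged way to check which jars the classes really come from is to print their locations and compare the versions:

      import org.apache.lucene.index.IndexReader;
      import org.apache.lucene.store.Directory;

      public class LuceneClasspathCheck {
          public static void main(String[] args) {
              // If these two locations point at different Lucene jars, that version
              // mismatch would explain the AbstractMethodError.
              System.out.println(IndexReader.class.getProtectionDomain().getCodeSource().getLocation());
              System.out.println(Directory.class.getProtectionDomain().getCodeSource().getLocation());
          }
      }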

    Read the article

  • My httpd.conf was accidentally deleted and I can't start Apache. How do I recreate httpd.conf on GoDaddy's VDS?

    - by mdm414
    Hi, I don't know if it's appropriate to ask this here, but I really need to know the initial content of /var/www/vhosts/mydomain.com/conf/httpd.include on GoDaddy's VDS, because the conf directory was accidentally deleted by somebody and now I cannot start Apache. We didn't purchase an assisted service plan, so GoDaddy won't help me with this. So please, if you're using GoDaddy's VDS plan, please help me with this. I also need to include in httpd.conf the lines needed for Plesk to work. Thanks a lot.
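
    If the box runs Plesk (which the httpd.include path suggests), the per-domain include files are generated by Plesk rather than written by hand, so Plesk can usually regenerate them itself. A heavily hedged sketch, since the exact tool and flags vary by Plesk version:

      # Plesk 8/9-era command to rebuild the vhost configuration for one domain
      /usr/local/psa/admin/sbin/websrvmng --reconfigure-vhost --vhost-name=mydomain.com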

    Read the article

  • Restlet with GWT and Tomcat error

    - by Holograham
    I am getting this error on startup. I am using GWT 2.0.3 and Restlet RC3.

      type: Exception report
      description: The server encountered an internal error () that prevented it from fulfilling this request.

      exception:
      javax.servlet.ServletException: Servlet.init() for servlet adapter threw exception
        org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
        org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        java.lang.Thread.run(Thread.java:619)

      root cause:
      java.lang.NoSuchMethodError: org.restlet.Client.handle(Lorg/restlet/Request;)Lorg/restlet/Response;
        org.restlet.ext.servlet.ServerServlet.createComponent(ServerServlet.java:423)
        org.restlet.ext.servlet.ServerServlet.getComponent(ServerServlet.java:763)
        org.restlet.ext.servlet.ServerServlet.init(ServerServlet.java:881)
        javax.servlet.GenericServlet.init(GenericServlet.java:212)
        org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
        org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
        org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
        java.lang.Thread.run(Thread.java:619)

    My web.xml is as follows:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd" >
      <web-app>
        <welcome-file-list>
          <welcome-file>index.html</welcome-file>
        </welcome-file-list>
        <context-param>
          <param-name>org.restlet.clients</param-name>
          <param-value>CLAP FILE WAR</param-value>
        </context-param>
        <servlet>
          <servlet-name>adapter</servlet-name>
          <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class>
          <init-param>
            <param-name>org.restlet.application</param-name>
            <param-value>com.tdc.Propspace.server.TestServerApplication</param-value>
          </init-param>
        </servlet>
        <servlet-mapping>
          <servlet-name>adapter</servlet-name>
          <url-pattern>/*</url-pattern>
        </servlet-mapping>
      </web-app>

    Any ideas what would cause this?
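
    A NoSuchMethodError on org.restlet.Client.handle(org.restlet.Request) typically means two incompatible Restlet versions end up on the classpath: in pre-2.0 releases handle() took org.restlet.data.Request, so an old core jar sitting next to the 2.0 servlet extension produces exactly this signature mismatch. A hedged way to see which deployed jars actually contain the class, assuming a standard exploded webapp layout (the path is illustrative):

      cd /path/to/tomcat/webapps/myapp/WEB-INF/lib
      # list every jar that contains org.restlet.Client; more than one hit is the conflict
      for j in *.jar; do unzip -l "$j" | grep -q 'org/restlet/Client.class' && echo "$j"; done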

    Read the article

  • Strange HttpCore NIO exception in Java

    - by bruce dou
    Exception in thread "Thread-0" java.lang.NullPointerException
      at org.apache.http.impl.nio.reactor.AbstractIOReactor.closeActiveChannels(AbstractIOReactor.java:532)
      at org.apache.http.impl.nio.reactor.AbstractIOReactor.hardShutdown(AbstractIOReactor.java:564)
      at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.doShutdown(AbstractMultiworkerIOReactor.java:411)
      at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:340)
      at com.***.clawer.Clawer$1.run(Clawer.java:81)
      at java.lang.Thread.run(Unknown Source)

    Exception in thread "Thread-1" java.lang.IllegalStateException: I/O reactor has been shut down
      at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.connect(DefaultConnectingIOReactor.java:190)
      at com.***.clawer.Run.run(Run.java:29)

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have the following Pig script, which works perfectly from the grunt shell (it stores the results to HDFS without any issues); however, the last job (the ORDER BY) fails if I run the same script through Java EmbeddedPig. If I replace the ORDER BY job with something else, such as a GROUP or a FOREACH GENERATE, the whole script succeeds under Java EmbeddedPig. So I think it's the ORDER BY which causes the issue. Does anyone have any experience with this? Any help would be appreciated!

    The Pig script:

      REGISTER pig-udf-0.0.1-SNAPSHOT.jar;
      user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float);
      simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score;
      grouped_user_similarity = GROUP simplified_user_similarity BY user_id;
      ordered_user_similarity = FOREACH grouped_user_similarity {
          sorted = ORDER simplified_user_similarity BY sim_score DESC;
          top = LIMIT sorted 10;
          GENERATE group, top;
      };
      top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10);
      all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0);
      grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id;
      influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score;
      ordered_influence_scores = ORDER influence_scores BY influence_score DESC;
      STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage();

    The error log from Pig:

      12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job
      12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
      12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job
      12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
      12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
      12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
      12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1
      12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1
      12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1
      12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x
      12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
      12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
      12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir.
      12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470
      12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc
      12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc
      12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc
      12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc
      12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar
      12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split
      12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo
      12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml
      12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null
      12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100
      12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720
      12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680
      12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004
      java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
      Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
        at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153)
        at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112)
        ... 6 more
      12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
      12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004
      12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs
      12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete
      12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed!
      12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics:

      HadoopVersion   PigVersion     UserId   StartedAt            FinishedAt           Features
      0.20.2-cdh3u3   0.8.1-cdh3u3   cchuang  2012-04-05 10:00:34  2012-04-05 10:01:04  GROUP_BY,ORDER_BY

      Some jobs have failed! Stop running all dependent jobs

      Job Stats (time in seconds):
      JobId           Maps  Reduces  MaxMapTime  MinMapTIme  AvgMapTime  MaxReduceTime  MinReduceTime  AvgReduceTime  Alias  Feature  Outputs
      job_local_0001  0     0        0           0           0           0              0              0              all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity  GROUP_BY
      job_local_0002  0     0        0           0           0           0              0              0              grouped_influence_scores,influence_scores  GROUP_BY,COMBINER
      job_local_0003  0     0        0           0           0           0              0              0              ordered_influence_scores  SAMPLER

      Failed Jobs:
      JobId           Alias                      Feature   Message               Outputs
      job_local_0004  ordered_influence_scores   ORDER_BY  Message: Job failed!  Error - NA  /tmp/cc-test-results-1,

      Input(s):
      Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000"

      Output(s):
      Failed to produce result in "/tmp/cc-test-results-1"

      Counters:
      Total records written : 0
      Total bytes written : 0
      Spillable Memory Manager spill count : 0
      Total bags proactively spilled: 0
      Total records proactively spilled: 0

      Job DAG:
      job_local_0001 -> job_local_0002,
      job_local_0002 -> job_local_0003,
      job_local_0003 -> job_local_0004,
      job_local_0004

      12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs
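
    The failing sampler job is looking for its pigsample_... file on the local filesystem (file:/Users/cchuang/...), which suggests the embedded run is not picking up the cluster configuration the way grunt does and is partly falling back to local mode. A hedged sketch of pinning the embedded server to MapReduce mode with the cluster's settings (the Java API names are Pig 0.8's; the fs.default.name value and the script file name are illustrative):

      import java.util.Properties;
      import org.apache.pig.ExecType;
      import org.apache.pig.PigServer;

      public class EmbeddedPigRunner {
          public static void main(String[] args) throws Exception {
              Properties props = new Properties();
              // Use the same cluster grunt uses; without these, parts of the run can
              // fall back to the local filesystem, where the ORDER BY sample file
              // written to HDFS is never found.
              props.setProperty("fs.default.name", "hdfs://localhost:9000");   // illustrative
              props.setProperty("mapred.job.tracker", "masternode:29001");     // from mapred-site.xml above

              PigServer pig = new PigServer(ExecType.MAPREDUCE, props);
              pig.registerScript("influence.pig");                             // hypothetical script file
          }
      }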

    Read the article

  • Apache Tiles and the body onload event. Known issue?

    - by Angus
    We're prototyping a new visual framework using Apache Tiles for markup templating. Often the document onload event (actually Prototype's dom:loaded event) fires before all DOM objects are actually loaded. Is it possible that the Tiles templates are loading in an asynchronous fashion and therefore doing an end run around the event handler? Has anyone else had this experience? Can anyone share a robust workaround? Thanks in advance.
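
    If the assembled page really is finishing its DOM after dom:loaded has fired, one pragmatic fallback (a sketch, not a Tiles-specific fix) is to bind the initialisation to the window load event instead, which waits for the whole document and its resources:

      // dom:loaded fires as soon as the DOM tree is parsed; 'load' waits for everything
      Event.observe(window, 'load', function() {
          initPage();   // hypothetical initialisation that needs the fully assembled page
      });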

    Read the article

  • How do I change the root directory of an apache server?

    - by Spencer Cooley
    Does anyone know how to change the document root of the Apache server? I basically want localhost to be served from the /users/spencer/projects directory instead of /var/www. Edit: I ended up figuring it out. Some suggested I change the httpd.conf file, but I ended up finding a file at /etc/apache2/sites-available/default, changed the root directory from /var/www to /home/myusername/projects_folder, and that worked.
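
    For reference, the change described above amounts to editing DocumentRoot (and usually the matching <Directory> block, so Apache is allowed to serve the new location) in /etc/apache2/sites-available/default, then reloading Apache. A sketch using the paths mentioned above:

      <VirtualHost *:80>
          DocumentRoot /home/myusername/projects_folder
          <Directory /home/myusername/projects_folder>
              Options Indexes FollowSymLinks
              AllowOverride None
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>

    followed by sudo service apache2 reload.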

    Read the article

  • Apache Virtual Host in Windows - how do I deal with Symbolic links?

    - by Peeaytchpee
    Hi folks, I'm trying to run a virtual host on a WAPP stack. My virtual host has the FollowSymLinks option, but in Windows all those symbolic links (I'm using shortcuts, and I think this may be the problem) have the .lnk extension. So if I try to access settings.html, Apache can't find it, because all I have sitting there is settings.html.lnk. Apologies if my question is unclear.
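
    Windows .lnk shortcuts are ordinary files that only Explorer understands, so Apache cannot follow them even with FollowSymLinks; NTFS symbolic links or junctions created with mklink behave like real filesystem links. A sketch, run from an elevated command prompt (the paths are illustrative):

      rem symbolic link to a single file
      mklink C:\wapp\htdocs\settings.html C:\projects\mysite\settings.html

      rem junction for a whole directory
      mklink /J C:\wapp\htdocs\shared C:\projects\shared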

    Read the article

  • JavaScript Node.replaceChild() doesn't pick up the new child's innerHTML

    - by manuna
    While creating a Firefox addon, I've run into a weird problem. I have an array of nodes returned by some iterator. The iterator returns only nodes that contain a Node.TEXT_NODE as one or more of their children. The script runs on page load. I have to find some text in those nodes by regexp and surround it with a SPAN tag.

      //beginning skipped
      var node = nodeList[i];
      var node_html = node.innerHTML;
      var node_content = node.textContent;
      if(node_content.length > 1){
          var new_str = "<SPAN class='bar'>" + foo + "</SPAN>";
          var regexp = new RegExp( foo , 'g' );
          node_html = node_html.replace(regexp, new_str);
          node.innerHTML = node_html;
      }

    The basic version looked like this, and it worked except for one issue: node.innerHTML could contain attributes and event handlers that could also contain foo, and those should not be surrounded with <span> tags. So I decided to make the replacements in text nodes only. But text nodes can't contain an HTML tag, so I had to wrap them with a <div>. Like this:

      var node = nodeList[i];
      for(var j=0; j<node.childNodes.length; j++){
          var child = node.childNodes[j];
          var child_content = child.textContent;
          if(child.nodeType == Node.TEXT_NODE && child_content.length >1){
              var newChild = document.createElement('div');
              // var newTextNode = document.createTextNode(child_content);
              // newChild.appendChild(newTextNode);
              var new_html = child_content;
              var new_str = "<SPAN class='bar'>" + foo + "</SPAN>";
              var regexp = new RegExp( foo , 'g' );
              new_html = new_html.replace(regexp, new_str);
              newChild.innerHTML = new_html;
              alert(newChild.innerHTML);
              node.replaceChild(newChild, child);
          }
      }

    In this case, alert(newChild.innerHTML); shows the right HTML. But after the page is rendered, all the created <div>s are empty! I'm puzzled. If I uncomment the two commented-out lines above, the alert still shows the right thing and the <div>s are filled with text (so adding the text node works), but again without the <span>s. Another funny thing is that I can't highlight the new <div>s' content with the mouse in the browser. It looks like the new innerHTML isn't taken into account, and I can't understand why. Am I doing something wrong? (I certainly am, but what? Or is that an FF bug/feature?)
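
    One way to sidestep innerHTML entirely (a sketch, not the asker's code) is to split the text node around each match and insert a real span element built from the node's own document, which avoids both the attribute problem and any innerHTML quirks in an extension context:

      // Wrap every occurrence of `foo` inside a text node with a span, without innerHTML.
      function wrapMatches(textNode, foo) {
          var doc = textNode.ownerDocument;              // use the node's own document
          var idx;
          while ((idx = textNode.data.indexOf(foo)) !== -1) {
              var match = textNode.splitText(idx);       // text node starting at the match
              textNode = match.splitText(foo.length);    // remainder after the match
              var span = doc.createElement('span');
              span.className = 'bar';
              span.appendChild(match.cloneNode(true));
              match.parentNode.replaceChild(span, match);
          }
      }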

    Read the article

  • ajax-chosen: add multiple rows dynamically

    - by Mario Jose Mixco
    I have a question about the ajax-chosen plugin. I need to add multiple elements dynamically on a form; on page load the first one works with no problem, but when I try to dynamically add a new element it does not work. I hope you can help me, and again sorry for my English.

      $("a.add-nested-field").each(function (index, element) {
          return $(element).on("click", function () {
              var association, new_id, regexp, template;
              association = $(element).attr("data-association");
              template = $("#" + association + "_fields_template").html();
              regexp = new RegExp("new_" + association, "g");
              new_id = new Date().getTime();
              $(element).closest("form").find(".nested-field:visible:last").after(template.replace(regexp, new_id));
              return false;
          });
      });
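
    A common gotcha with chosen/ajax-chosen (noted here as an assumption, since the initialisation code isn't shown) is that the plugin only enhances the select elements that exist at the moment it is called, so it has to be applied again to any select inside a freshly inserted row:

      // after inserting the template, re-run the plugin on the new row's selects
      var $newRow = $(element).closest("form").find(".nested-field:visible:last");
      $newRow.find("select").chosen();   // or .ajaxChosen(...) with the same options used on page load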

    Read the article

  • How to escape a regular expression in JavaScript?

    - by Relax
    My code is like:

      pattern = 'arrayname[1]'; // fetched from the DOM; made a literal here just for the example
      reg = new RegExp(RegExp.quote(pattern), 'g');
      mystring.replace(reg, 'arrayname[2]');

    But it just won't run; the error message says "RegExp.quote is not a function". Am I missing something simple?
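
    RegExp.quote is not part of standard JavaScript (it comes from certain libraries), so it has to be defined or replaced. A minimal escaping helper of the usual form, with an illustrative input string:

      // Escape characters that are special inside a regular expression
      function escapeRegExp(str) {
          return str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
      }

      var mystring = 'foo arrayname[1] bar arrayname[1]';    // illustrative input
      var pattern  = 'arrayname[1]';
      var reg      = new RegExp(escapeRegExp(pattern), 'g');
      mystring.replace(reg, 'arrayname[2]');                  // "foo arrayname[2] bar arrayname[2]"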

    Read the article

  • Where are these links coming from?

    - by Ramesh
    I have a web site and it displays in Google for some keywords, and I need to know how sites get displayed in Google like this:

      Welcome! - The Apache Software Foundation
      Supports the development of a number of open-source software projects, including the Apache web server. Includes license information, latest news, ...
      www.apache.org/ - Cached - Similar
      Apache web server   Tomcat   Mirrors   Projects

    Where do these "apache", "web server", Tomcat links come from, and how can I do this for my site?

    Read the article

  • Help needed writing a regular expression -- TCL

    - by user330727
    Hello everyone, just seeking a favour: I need to write regular expressions in TCL to match the following sets of strings.

    i) I want an expression which matches all of the following strings: XYZ XZZ XVZ XWZ. Clue: the starting character X and the ending character Z are the same for all of them; only the middle character differs (Y, Z, V or W). My attempt: [regexp {^X([Y|Z|V|W]*)Z$}]

    ii) I also want another regexp which catches/matches only the string XYZ, wherever it occurs. My attempt: [regexp {^X([Y]*)Z$}]
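
    A hedged sketch of both patterns (inside a Tcl bracket expression the | is not needed, and regexp also needs the string to test as an argument; the sample strings are the ones listed above, plus XAZ as a deliberate non-match):

      # i) X, then exactly one of Y, Z, V or W, then Z
      foreach s {XYZ XZZ XVZ XWZ XAZ} {
          puts "$s -> [regexp {^X[YZVW]Z$} $s]"   ;# 1 for the first four, 0 for XAZ
      }

      # ii) match only XYZ, wherever it occurs in a larger string
      puts [regexp {XYZ} "abcXYZdef"]             ;# 1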

    Read the article
