Search Results

Search found 146 results on 6 pages for 'francis huang'.


  • Erratic WiFi 2.4 GHz channel spikes, what gives?

    - by Francis W. Usher
    Sorry guys, first a gripe about my neighbor's WiFi access point (it is related): they totally hog the center nine 2.4 GHz channels (3-11), centered right at 7! I know the outer regions of the signal don't make as much of a difference, and technically they're running channels 5 & 9. Anyway, their signal is clearly interfering with mine, which is necessarily centered at 3 or 11 to evade theirs. I guess it's somewhat a case of access-point envy: they happen to have both a stronger signal and a higher data rate, while occupying twice the bandwidth that I do.

    Getting to the point, I've noticed that they tend to sit nice and pretty centered at 7, but they definitely auto-select their channel, and the auto-selection algorithm tends to shift towards the higher channels; hence I picked channel 3, and I don't get so many intermittent lag spikes any more.

    Anyway, the thing that weirded me out is the reason they have to auto-select sometimes: unexplained, powerful (talking on the order of 0 dB here), giant spikes of 2.4 GHz activity in consistent regions of the spectrum. I don't think it's just noise, since my wireless monitoring software registers a MAC address, a manufacturer, and usually a fairly coherent ASCII name... and it seems to be a fairly well-confined signal. But these signals are fairly common, and they do some weird stuff to my signal.

    So my question is: what are these signals? Where are they coming from? Where are they going? Why are they so ridiculously strong? Why don't they ever last very long?

    Here's an inSSIDer screenshot I took, for your perusal. I am labeled "me", my greedy neighbor is labeled "neighbor", and the two quasar signals are labeled "WTF?".

    Read the article

  • Check if folders exist in Git repository... testing if a sub-string exists in bash with NULL as a separator

    - by Craig Francis
    I have a common git "post-receive" script for several projects, and it needs to perform different actions if an /app/ or /public/ folder exists in the root. Using:

        FOLDERS=`git ls-tree -d --name-only -z master`;

    I can see the directory listing, and I would like to use the RegExp support in bash to run something like:

        if [[ "$FOLDERS" =~ app ]]; then
            ...
        fi

    But that won't work if there were something like an "application" folder... I specified the "-z" option in the git "ls-tree" command so I could use the \0 (null) character as a separator, but I'm not sure how to test for that in the bash RegExp. Likewise, I know there is support for specifying a particular path in the ls-tree command, and I could then pipe that to "wc -l", but I'd have thought it quicker to get a full directory listing of the root (not recursive) and then test for the 2 (or more) folders in the returned output. Possibly related to: http://stackoverflow.com/questions/7938094/git-how-to-check-which-files-exist-and-their-content-in-a-shared-bare-repos
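
    (A minimal sketch of one way around the NUL-matching problem, in Python rather than bash regex, assuming the same repository layout; the branch and folder names are the poster's:)

        import subprocess

        # Split the NUL-separated listing into exact top-level names, so a
        # folder such as "application" can no longer match a test for "app".
        out = subprocess.check_output(
            ['git', 'ls-tree', '-d', '--name-only', '-z', 'master'])
        folders = set(out.split('\0'))
        folders.discard('')  # the trailing NUL leaves one empty entry

        if 'app' in folders or 'public' in folders:
            pass  # run the /app/- or /public/-specific actions here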

    Read the article

  • CentOS 6.5 x64 + Ajenti + nginx + php-fpm + MySQL setup unable to load the PHP page

    - by Francis
    I'm using Ajenti to set up a PHP web server. I can load the PHP info page, but when I try to load a PHP page copied from another server, I get a blank page, with this entry appearing in the access log. I have no clue what is wrong or how to troubleshoot further; please advise.

        x.x.x.x - - [28/May/2014:10:08:37 -0400] "GET /index.php HTTP/1.1" 200 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:29.0) Gecko/20100101 Firefox/29.0"

        2.6.32-431.1.2.0.1.el6.x86_64

        cat /etc/redhat-release
        CentOS release 6.5 (Final)

    The vhost config:

        AUTOMATICALLY GENERATED - DO NO EDIT!
        server {
            listen *:80;
            server_name test.com www.test.com;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            root /www;
            index index.html index.htm index.php;

            location ~ \.php$ {
                alias /www;
                fastcgi_index index.php;
                include fcgi.conf;
                fastcgi_pass unix:/var/run/php-fcgi-php-fcgi-0.sock;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    Read the article

  • How to set up a server to accept pem (private RSA key) login w/o password, like EC2?

    - by Chandler.Huang
    I manage a group of VMs, and I need each VM to create an SSH tunnel to a specific host A. One way to do this is to append the public key of each VM to host A's authorized_keys, but then I would have to do the append every time I create a VM. So I am trying to configure host A to accept pem or private-key login without a password, just like EC2, where the client can use "ssh -i PEM" to log in to host A. But I have tried in vain for hours: I created an RSA public/private key pair and let a VM use the private key to log in, but no matter what I do, host A still asks for a password. Is there anything I missed? Thanks.

    Read the article

  • How can I detect hard drive failures?

    - by Francis
    I am in charge of a large number of Windows servers. Recently, many have been reporting hard drive errors with event codes 11 and 55. CHKDSK indicates that the drives are fine most of the time. What other diagnostic tools could I use to more accurately detect hard drive failures? Could these Windows events be false positives? I have already evaluated S.M.A.R.T., and it seems to have significant sensitivity and specificity issues.

    Read the article

  • assign user and group to site log files

    - by Francis
    In a site's Apache conf file, is there a way to set the user and group for the CustomLog and ErrorLog? Right now, these two directives create the error and access logs with root:root ownership, but I would like them to be flewis:admin:

        CustomLog /var/log/httpd/domain.com-access.log combined
        ErrorLog /var/log/httpd/domain.com-error.log

    If I change the user:group of the files, the new logs are root:root again when the logs rotate.

    Read the article

  • Does it make a difference to read from a file instead of from MySQL?

    - by Joe Huang
    My web server is currently quite loaded, and I have a PHP file that is accessed very often remotely. The PHP file basically makes a MySQL query and returns a JSON-formatted string. I am thinking of using a cron job to write the necessary data into a file every 15 minutes, so the PHP file reads from the file instead of making a MySQL query. Does it make a difference? I mean, would it alleviate the server load (CPU/MySQL) a bit?
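
    (For what it's worth, a minimal sketch of the cron side in Python; the poster's stack is PHP, and the connection details, query, and file paths below are made-up placeholders. Writing to a temp file and renaming keeps readers from ever seeing a half-written file:)

        import json
        import os
        import MySQLdb

        conn = MySQLdb.connect(host='localhost', user='web', passwd='secret',
                               db='appdb')  # placeholder credentials
        cur = conn.cursor()
        cur.execute('SELECT id, name FROM items')  # placeholder query
        data = [{'id': r[0], 'name': r[1]} for r in cur.fetchall()]

        # os.rename() is atomic on the same filesystem, so the PHP side
        # always reads either the old file or the new one, never a mix.
        tmp = '/var/www/cache/data.json.tmp'
        with open(tmp, 'w') as f:
            json.dump(data, f)
        os.rename(tmp, '/var/www/cache/data.json')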

    Read the article

  • Oracle OpenWorld Series: All Things Mobile

    - by Michelle Kimihira
    I caught up with Joe Huang, Senior Principal Product Manager, Mobile Application Development Framework, to hear his recommendations for Oracle OpenWorld. Use this Focus On document, which provides a roadmap to must-attend sessions and demos.

    By Joe Huang

    This year's OpenWorld promises to be "THE" event for anyone interested in mobile enterprise applications. Although Oracle has had a rich portfolio of mobile products for many years now, there is a much stronger focus on mobile this year. Every single one of our customers is looking to develop a mobile strategy and bring key business processes to mobile users, and as you will see in the various keynotes, sessions, and demos during OpenWorld, Oracle is clearly the leader in mobile technologies and applications. Look for mobile development technologies being demonstrated in the Oracle Red Lounge, located in the Moscone North Upper Lobby, where innovative technologies from Oracle are being showcased. A few select sessions where mobile development technologies will be highlighted:

    Monday, 10/1, 10:45 AM - 11:45 AM. GEN9398: The Future Development for Oracle Fusion - From Desktop to Mobile to Cloud. See the latest and greatest in Oracle development technologies; a key customer will be demonstrating an application they built using a beta version of ADF Mobile. Marriott Marquis, Salon 8.

    Monday, 10/1, 1:45 PM - 2:45 PM. GEN11554: Extend Oracle Applications to Mobile Devices with Oracle's Mobile Technologies. See how to leverage Oracle development technology like ADF Mobile to mobilize Oracle applications. Moscone West, 3002/3004.

    Monday, 10/1, 4:45 PM - 5:45 PM. GEN11451: Building Mobile Applications with Oracle Cloud. See how Oracle offers a simpler way of developing and deploying cross-device mobile applications, enabling you to access applications, data, and services from mobile channels more easily. Moscone West, 2002/2004.

    Tuesday, 10/2, 11:45 AM - 12:45 PM. CON3824: Mobile-Enabled Oracle Fusion Middleware and Enterprise Applications with Oracle ADF. See how Oracle Fusion Middleware and ADF Mobile together deliver a complete and powerful platform for enterprise mobile applications. A key customer will also be demonstrating an application built using the ADF Mobile beta that extends an Oracle application to mobile devices. Moscone South, 306.

    Additional Information
    Relevant blogs: Oracle OpenWorld Countdown Begins, Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications, Amit Zavery's General Session, Hassan Rizvi's General Session, Oracle OpenWorld Blog
    Focus On docs: Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications, Mobile
    Product information on Oracle.com: Oracle Fusion Middleware
    Subscribe to our regular Fusion Middleware Newsletter
    Follow us on Twitter and Facebook

    Read the article

  • Problems with Airtel

    I am a housewife from Delhi, currently living in Chennai. My husband is a businessman and I usually use skype so I can constantly keep in touch with my husband. But due to frequent robberies in my ar... [Author: Francis Regan - Computers and Internet - April 16, 2010]

    Read the article

  • Beware & Be Aware Of Internet Scams

    Internet has many advantages. It provides mankind all the information he needs right from the dust to the boundless sky. It has brought mankind together, which helped eliminating distances between pe... [Author: Francis Regan - Computers and Internet - April 10, 2010]

    Read the article

  • Protect Your Virtual World

    Technology is definitely very seductive. The thrill of getting more into newer technologies and the wow factor of sleeker design are irresistible. But as we all know, every step we take has some draw... [Author: Francis Regan - Computers and Internet - May 04, 2010]

    Read the article

  • How to interact with checkbox actions? (QTableView with QStandardItemModel)

    - by Claire Huang
    I'm using QTableView and QStandardItemModel to show some data. For each row, there is a column with a check box; the check box is inserted by setItem, as follows:

        int rowNum;
        QStandardItemModel *tableModel;
        QStandardItem *__tmpItem = new QStandardItem();
        __tmpItem->setCheckable(true);
        __tmpItem->setCheckState(Qt::Unchecked);
        tableModel->setItem(rowNum, 0, __tmpItem);

    Now I want to interact with the check box. If a check box changes state because of the user (from checked to unchecked or vice versa), I want to do something to the corresponding data row. I know I can use signal-slot to catch the change of a checkbox, but since there are lots of data rows, I don't want to connect each row one by one. Is there any way to interact with the check action more efficiently? Thanks :)
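
    (One common pattern is a single model-level connection instead of per-row wiring: QStandardItemModel emits itemChanged(QStandardItem*) for every edit. A minimal sketch, written in PyQt4 rather than the poster's C++ purely for illustration:)

        from PyQt4 import QtGui, QtCore

        def on_item_changed(item):
            # Fires once per edited item; keep only the checkable cells.
            if item.isCheckable():
                checked = item.checkState() == QtCore.Qt.Checked
                print('row %d toggled to %s' % (item.row(), checked))

        model = QtGui.QStandardItemModel()
        model.itemChanged.connect(on_item_changed)  # one connection in total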

    Read the article

  • gcc 4.5 installation problem under ubuntu

    - by Claire Huang
    I tried to install gcc 4.5 on Ubuntu 10.04 but failed. Here is a compile error that I don't know how to solve. Has anyone successfully installed the latest gcc on Ubuntu? Below are my steps and the error message; I'd like to know where the problem is.

    Step 1: download these files:

        gcc-core-4.5.0.tar.gz
        gcc-g++-4.5.0.tar.gz
        gmp-4.3.2.tar.bz2
        mpc-0.8.1.tar.gz
        mpfr-2.4.2.tar.gz

    Step 2: unzip the above files.

    Step 3: move gmp, mpc, and mpfr to the gcc-4.5.0/ directory:

        mv gmp-4.3.2 gcc-4.5.0/gmp
        mv mpc-0.8.1 gcc-4.5.0/mpc
        mv mpfr-2.4.2 gcc-4.5.0/mpfr

    Step 4: go to the gcc-4.5.0 directory and do the configuration:

        sudo ./configure

    Step 5: compile and install:

        sudo make
        sudo make install

    The first 4 steps are OK; I can configure it successfully. However, when I try to compile it, the following error message comes out, and I cannot figure out what the problem is. Should I change the directory name from "gcc 4.5" to "gcc"?? It's a little strange that we would need to do this ourselves. Is there anything I missed during the installation?

        xxx@xxx-laptop:/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0$ sudo make
        [sudo] password for xxx:
        [ -f stage_final ] || echo stage3 > stage_final
        /bin/bash: line 2: test: /media/Data/Tool/linux/gcc: binary operator expected
        /bin/bash: /media/Data/Tool/linux/gcc: No such file or directory
        make[1]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[2]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[3]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        rm -f stage_current
        make[3]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[2]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[2]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        Configuring stage 1 in host-x86_64-unknown-linux-gnu/intl
        /bin/bash: /media/Data/Tool/linux/gcc: No such file or directory
        make[2]: *** [configure-stage1-intl] Error 127
        make[2]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[1]: *** [stage1-bubble] Error 2
        make[1]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make: *** [all] Error 2

    Read the article

  • python sqlite3 won't execute a join, but sqlite3 alone will

    - by Francis Davey
    Using the sqlite3 standard library in Python 2.6.4, the following query works fine on the sqlite3 command line:

        select segmentid, node_t, start, number, title
        from ((segments inner join position using (segmentid))
              left outer join titles using (legid, segmentid))
             left outer join numbers using (start, legid, version);

    But if I execute it via the sqlite3 library in Python, I get an error:

        >>> conn = sqlite3.connect('data/test.db')
        >>> conn.execute('''select segmentid, node_t, start, number,title from ((segments inner join position using (segmentid)) left outer join titles using (legid, segmentid)) left outer join numbers using (start, legid, version)''')
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        sqlite3.OperationalError: cannot join using column start - column not present in both tables

    The (computed) table on the left-hand side of the join appears to have the relevant column, because if I check it by itself I get:

        >>> conn.execute('''select * from ((segments inner join position using (segmentid)) left outer join titles using (legid, segmentid)) limit 20''').description
        (('segmentid', None, None, None, None, None, None),
         ('html', None, None, None, None, None, None),
         ('node_t', None, None, None, None, None, None),
         ('legid', None, None, None, None, None, None),
         ('version', None, None, None, None, None, None),
         ('start', None, None, None, None, None, None),
         ('title', None, None, None, None, None, None))

    My schema is:

        CREATE TABLE leg (legid integer primary key, t char(16), year char(16), no char(16));
        CREATE TABLE numbers (number char(16), legid integer, version integer, start integer, end integer, prev integer, prev_number char(16), next integer, next_number char(16), primary key (number, legid, version));
        CREATE TABLE position (segmentid integer, legid integer, version integer, start integer, primary key (segmentid, legid, version));
        CREATE TABLE 'segments' (segmentid integer primary key, html text, node_t integer);
        CREATE TABLE titles (legid integer, segmentid integer, title text, primary key (legid, segmentid));
        CREATE TABLE versions (legid integer, version integer, primary key (legid, version));
        CREATE INDEX idx_numbers_start on numbers (legid, version, start);

    I am baffled as to what I am doing wrong. I have tried quitting and restarting both the python and sqlite command lines and still can't see the problem. It may be completely obvious.
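
    (One avenue worth checking, though it is only an assumption here: the sqlite3 command line and the library compiled into the Python module can be different SQLite versions, and USING-column resolution has varied between releases. Spelling the keys out with ON sidesteps USING entirely; a sketch against the poster's schema:)

        import sqlite3

        # The module may link an older SQLite than the standalone CLI does.
        print(sqlite3.sqlite_version)

        conn = sqlite3.connect('data/test.db')
        rows = conn.execute('''
            SELECT s.segmentid, s.node_t, p.start, n.number, t.title
            FROM segments s
            INNER JOIN position p ON p.segmentid = s.segmentid
            LEFT OUTER JOIN titles t
                   ON t.legid = p.legid AND t.segmentid = s.segmentid
            LEFT OUTER JOIN numbers n
                   ON n.start = p.start AND n.legid = p.legid
                  AND n.version = p.version
        ''').fetchall()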

    Read the article

  • Error about 'invalid JSON' with CouchDB view, but the JSON's fine...

    - by Chris Huang-Leaver
    I am trying to set up the following view on CouchDB:

        {
          "_id": "_design/id",
          "_rev": "1-9be2e55e05ac368da3047841f301203d",
          "language": "javascript",
          "views": {
            "by_id": {
              "map": "function(doc) { emit(doc.id, doc)}"
            },
            "from_user_id": {
              "map": "function(doc) { if (doc.from_user_id) {emit(doc.from_user_id, doc)}}"
            },
            "from_user": {
              "map": "function(doc) { if (doc.from_user) {emit(doc.from_user, doc)}}"
            },
            "to_user_id": {
              "map": "function(doc) {if (doc.to_user_id){ emit(doc.to_user_id, doc)}}"
            },
            "to_user": {
              "map": "function(doc) {if (doc.to_user){ emit(doc.to_user, doc)}}"
            },
            "max_id": {
              "map": "function(doc) { if (doc.id) {emit(doc._id, eval(doc.id))}}",
              "reduce": "function(key,value) { a = value[0]; for (i=1; i <value.length; ++i){a = Math.max(a,value[i])} return a}"
            }
          }
        }

    When I try to 'PUT' this using curl:

        curl -X PUT -d keys.json $CDB/_design/id
        {"error":"bad_request","reason":"invalid UTF-8 JSON"}

    I know it's not invalid JSON, because I tested it using the 'json' library built into Python 2.6, and it loads fine. JS screw-ups give me the error 'must evaluate to a function'. What else might be wrong with it?
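
    (One detail jumps out: curl's -d keys.json sends the literal string "keys.json" as the request body, not the file's contents; that alone would produce the 'invalid JSON' complaint, and the fix is -d @keys.json. For completeness, a hedged Python 2.6 sketch of the same upload, where the database name is a made-up placeholder:)

        import json
        import urllib2

        doc = json.load(open('keys.json'))  # fails fast if the JSON is bad

        req = urllib2.Request('http://localhost:5984/mydb/_design/id',
                              json.dumps(doc),
                              {'Content-Type': 'application/json'})
        req.get_method = lambda: 'PUT'  # urllib2 issues GET/POST by default
        print(urllib2.urlopen(req).read())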

    Read the article

  • The memory problem with MySQL "SELECT *"

    - by Austin Huang
    Dear all: I'm new to MySQL, and I have a question about memory. I have a 200 MB table (MyISAM, 2,000,000 rows), and I try to load all of it into memory. I use Python (actually MySQLdb) with the SQL: SELECT * FROM table. However, from Linux "top" I saw this Python process use 50% of my memory (which is 6 GB in total). I'm curious why it uses about 3 GB of memory for only a 200 MB table. Thanks in advance!
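
    (Part of the answer is that the default cursor buffers the whole result set client-side, and every row becomes a tuple of Python objects whose overhead dwarfs the raw 200 MB. A minimal sketch using a server-side cursor that streams rows instead; the connection details and table name are placeholders:)

        import MySQLdb
        import MySQLdb.cursors

        conn = MySQLdb.connect(host='localhost', user='me', passwd='secret',
                               db='test',
                               cursorclass=MySQLdb.cursors.SSCursor)
        cur = conn.cursor()
        cur.execute('SELECT * FROM some_table')  # placeholder table name
        for row in cur:   # rows arrive incrementally from the server
            pass          # process each row here
        cur.close()
        conn.close()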

    Read the article

  • Dynamically creating GWT screens using Metadata?

    - by Francis Shanahan
    I have an AWT applet application that needs to be ported over to GWT. The applet screens are described in metadata, and the applet renders each screen dynamically using reflection. We'd like the same thing in GWT/ExtGWT. I've built a working version of this in ExtJS, whereby the metadata is turned into ExtJS screen configs in the form of JSON. The drawback of this approach is that the "wiring" of controls to data needs to be written in JavaScript. GWT is preferred, since it'd be all Java code, no JS. Upon digging in, it appears possible to render the screens off the metadata using GWT.create(). The problem I'm having is that wiring a dynamically created button, for example, to an event handler requires reflection, which is not supported in GWT. Is this conclusion correct? And if so, are there any other ways to achieve this type of dynamic UI using ExtGWT?

    Read the article

  • Modify an object without using it as a parameter

    - by Claire Huang
    I have a global object "X" and a class "A". I need a function F in A which has the ability to modify the content of X. For some reason, X cannot be a data member of A (but A can contain some member Y as a reference to X), and also, F cannot have any parameters, so I cannot pass X as a parameter to F. (Here A is a dialog and F is a slot without any parameters, such as accept().) How can I modify X within F if I cannot pass X into it? Is there any way to let A know that X is the object it needs to modify? I tried to add something such as SetItem to specify X in A, but failed.
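
    (The member-reference route the poster mentions is usually enough: hand X to the dialog once, at construction time, and the parameterless slot mutates it through the stored handle. A hedged sketch, in PyQt4 rather than C++, with a dict standing in for X:)

        from PyQt4 import QtGui

        class ADialog(QtGui.QDialog):           # assumes a QApplication exists
            def __init__(self, x, parent=None):
                QtGui.QDialog.__init__(self, parent)
                self._x = x                     # handle to the global object X

            def accept(self):                   # the slot stays parameterless
                self._x['modified'] = True      # mutate X through the handle
                QtGui.QDialog.accept(self)

        X = {'modified': False}                 # stand-in for the global X
        dialog = ADialog(X)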

    Read the article

  • How can I use Qt to get the HTML code of a redirected page?

    - by Claire Huang
    I'm trying to use Qt to download the HTML code from the following URL:

        http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=nucleotide&cmd=search&term=AB100362

    This URL redirects to www.ncbi.nlm.nih.gov/nuccore/27884304. I tried to do it the following way, but I cannot get anything. It works for some webpages such as www.google.com, but not for this NCBI page. Is there any way to get this page?

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            QNetworkReply *reply = manager.get(request);
            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();
            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }

        void GetGi()
        {
            int pos;
            QString sGetFromURL = "http://www.ncbi.nlm.nih.gov/entrez/query.fcgi";
            QUrl url(sGetFromURL);
            url.addQueryItem("db", "nucleotide");
            url.addQueryItem("cmd", "search");
            url.addQueryItem("term", "AB100362");

            QByteArray InfoNCBI;
            int errorCode = downloadURL(url, InfoNCBI);
            if (errorCode != 0) {
                QMessageBox::about(0, tr("Internet Error "),
                    tr("Internet Error %1: Failed to connect to NCBI.\t\nPlease check your internect connection.").arg(errorCode));
                return "ERROR";
            }
        }
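
    (Qt 4's QNetworkAccessManager does not follow HTTP redirects by itself, which is why google.com works and the NCBI URL returns nothing: the first reply only carries the target in QNetworkRequest::RedirectionTargetAttribute, and the request must be re-issued. A hedged sketch of that loop, in PyQt4 rather than the poster's C++, assuming the Python 2 QVariant API:)

        from PyQt4 import QtCore, QtNetwork

        def fetch(manager, url, loop, hops=5):
            reply = manager.get(QtNetwork.QNetworkRequest(url))
            reply.finished.connect(loop.quit)
            loop.exec_()                        # wait for this hop to finish
            target = reply.attribute(
                QtNetwork.QNetworkRequest.RedirectionTargetAttribute).toUrl()
            if hops > 0 and not target.isEmpty():
                # Redirect targets may be relative; resolve against the source.
                return fetch(manager, url.resolved(target), loop, hops - 1)
            return reply.readAll()

        # usage (assumes a running QCoreApplication):
        #   manager = QtNetwork.QNetworkAccessManager()
        #   loop = QtCore.QEventLoop()
        #   html = fetch(manager, QtCore.QUrl('http://...'), loop)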

    Read the article

  • Reading Xml with XmlReader in C#

    - by Gloria Huang
    I'm trying to read the following XML document as fast as I can and let additional classes manage the reading of each sub-block:

        <ApplicationPool><Accounts><Account><NameOfKin></NameOfKin><StatementsAvailable><Statement></Statement></StatementsAvailable></Account></Accounts></ApplicationPool>

    I can't seem to format the above nicely :( However, I'm trying to use the XmlReader object to read each Account and subsequently the StatementsAvailable. Do you suggest using XmlReader.Read and checking each element and handling it? I've thought of separating my classes to handle each node properly. So there's an AccountBase class that accepts an XmlReader instance and reads the NameOfKin and several other properties of the account. Then I was wanting to iterate through the Statements and let another class fill itself out about each Statement (and subsequently add it to an IList). Thus far I have the "per class" part done by doing XmlReader.ReadElementString(), but I can't work out how to tell the pointer to move to the StatementsAvailable element and let me iterate through them and let another class read each of those properties. Sounds easy!
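
    (For comparison, the same streaming "hand each sub-block to its own handler" idea, sketched with Python's ElementTree rather than C#'s XmlReader; the file name is a placeholder:)

        import xml.etree.cElementTree as ET

        # Handle each Account as soon as its closing tag arrives, then free
        # it, so the document never has to sit in memory all at once.
        for event, elem in ET.iterparse('accounts.xml', events=('end',)):
            if elem.tag == 'Account':
                name = elem.findtext('NameOfKin')
                statements = elem.findall('StatementsAvailable/Statement')
                print(name, len(statements))
                elem.clear()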

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have the following Pig script, which works perfectly in the Grunt shell (it stores the results to HDFS without any issues); however, the last job (ORDER BY) fails if I run the same script using Java EmbeddedPig. If I replace the ORDER BY job with others, such as GROUP or FOREACH GENERATE, the whole script succeeds in Java EmbeddedPig. So I think it's the ORDER BY that causes the issue. Does anyone have any experience with this? Any help would be appreciated!

    The Pig script:

        REGISTER pig-udf-0.0.1-SNAPSHOT.jar;

        user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t')
            AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float);
        simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score;
        grouped_user_similarity = GROUP simplified_user_similarity BY user_id;
        ordered_user_similarity = FOREACH grouped_user_similarity {
            sorted = ORDER simplified_user_similarity BY sim_score DESC;
            top = LIMIT sorted 10;
            GENERATE group, top;
        };
        top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10);
        all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0);
        grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id;
        influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score;
        ordered_influence_scores = ORDER influence_scores BY influence_score DESC;
        STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage();

    The error log from Pig:

        12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job
        12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
        12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job
        12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
        12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
        12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
        12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1
        12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1
        12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir.
        12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml
        12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null
        12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100
        12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720
        12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680
        12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004
        java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139)
            at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
            at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
            at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
            at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
            at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
        Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
            at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153)
            at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115)
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112)
            ... 6 more
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004
        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs
        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete
        12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed!
        12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics:

        HadoopVersion  PigVersion    UserId   StartedAt            FinishedAt           Features
        0.20.2-cdh3u3  0.8.1-cdh3u3  cchuang  2012-04-05 10:00:34  2012-04-05 10:01:04  GROUP_BY,ORDER_BY

        Some jobs have failed! Stop running all dependent jobs

        Job Stats (time in seconds):
        JobId  Maps  Reduces  MaxMapTime  MinMapTIme  AvgMapTime  MaxReduceTime  MinReduceTime  AvgReduceTime  Alias  Feature  Outputs
        job_local_0001  0  0  0  0  0  0  0  0  all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity  GROUP_BY
        job_local_0002  0  0  0  0  0  0  0  0  grouped_influence_scores,influence_scores  GROUP_BY,COMBINER
        job_local_0003  0  0  0  0  0  0  0  0  ordered_influence_scores  SAMPLER

        Failed Jobs:
        JobId  Alias  Feature  Message  Outputs
        job_local_0004  ordered_influence_scores  ORDER_BY  Message: Job failed! Error - NA  /tmp/cc-test-results-1,

        Input(s):
        Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000"

        Output(s):
        Failed to produce result in "/tmp/cc-test-results-1"

        Counters:
        Total records written : 0
        Total bytes written : 0
        Spillable Memory Manager spill count : 0
        Total bags proactively spilled: 0
        Total records proactively spilled: 0

        Job DAG:
        job_local_0001 -> job_local_0002,
        job_local_0002 -> job_local_0003,
        job_local_0003 -> job_local_0004,
        job_local_0004

        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs

    Read the article
