Search Results

Search found 7239 results on 290 pages for 'job satisfaction'.

Page 40/290 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • How to reliably run a batch job every 5 seconds?

    - by Benjamin
    I'm building an application where the sending of all notifications (email, SMS, fax) will be asynchronous. The application will write the notifications to the database, and a batch job will read these notifications and send them with the appropriate transport. I first looked at ways to run cron more often than once a minute, and realized this was a bad idea. The batch scripts are written in PHP, and I guess that writing a proper daemon would be quite an overhead (though I'm open to any suggestion, as PHP can run indefinitely as well). What I have in mind is a solution that would: run the PHP script every 5 seconds; check that the previous run has finished, or abort (never 2 concurrent batches running); kill the script if it has been alive for more than x minutes (a safeguard in case it hangs); and start with the system (if a reboot occurs). Any idea how to do this?
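
    One way to cover all four requirements at once (a minimal sketch, not a drop-in implementation: the lock-file path and the batch command are placeholders, and it assumes a Unix host) is a small supervisor process that holds an exclusive file lock, launches the batch every 5 seconds, and kills any run that exceeds a timeout:

        #!/usr/bin/env python
        # Supervisor sketch: run a batch command every 5 seconds, never
        # concurrently, and kill any run that exceeds MAX_RUNTIME seconds.
        import fcntl
        import subprocess
        import sys
        import time

        LOCK_FILE = "/var/run/notify-batch.lock"   # placeholder path
        BATCH_CMD = ["php", "/path/to/batch.php"]  # placeholder command
        INTERVAL = 5        # seconds between batch runs
        MAX_RUNTIME = 300   # kill a batch that lives longer than this

        def main():
            lock = open(LOCK_FILE, "w")
            try:
                # a second copy of the supervisor fails here and exits,
                # so two batches can never run concurrently
                fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except IOError:
                sys.exit("another instance is already running")
            while True:
                started = time.time()
                proc = subprocess.Popen(BATCH_CMD)
                while proc.poll() is None:
                    if time.time() - started > MAX_RUNTIME:
                        proc.kill()  # safeguard against a hung batch
                        break
                    time.sleep(0.5)
                # sleep out the remainder of the 5-second slot
                time.sleep(max(0, INTERVAL - (time.time() - started)))

        if __name__ == "__main__":
            main()

    Launching the supervisor from an @reboot crontab entry (or an init script) covers the start-on-boot requirement.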

    Read the article

  • Rsync to take the newest file. And a cron job?

    - by user1704877
    I have a log file on two different servers. The servers are behind a load balancer, so half the traffic goes to one server and half goes to the other. I need to take the newest log file from one machine and transfer it to the other machine, so that if the log file changes on one server, it gets updated on the other. I think I need to use rsync. Do I also need to put it in a cron job?
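
    For reference, a hedged sketch of what that could look like (it assumes passwordless SSH between the two servers; the host name and log path are placeholders). rsync's -u/--update flag skips files that are newer on the receiving side, so pulling and pushing with -u leaves the newest copy on both machines:

        # pull the peer's copy only if it is newer than ours; -t preserves
        # modification times, which --update relies on
        rsync -tu peer-server:/var/log/app.log /var/log/app.log
        # push ours only if it is newer than the peer's
        rsync -tu /var/log/app.log peer-server:/var/log/app.log

        # crontab entry running the two commands (in a wrapper script) every minute
        * * * * * /usr/local/bin/sync-log.sh

    Note that this keeps whole-file "newest wins" semantics: if both copies are appended to independently between runs, the older file's changes are overwritten rather than merged.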

    Read the article

  • Strange results - I get the same value for all keys

    - by Pietro Luciani
    I have a problem with MapReduce. Given as input a list of songs ("Songname"#"UserID"#"boolean"), I must produce as a result a song list specifying how many different users listened to each song, i.e. an output of ("Songname", "timeslistened"). I used a Hashtable to keep only one pair per user. With short files it works well, but when I give it as input a list of about 1,000,000 records it returns the same value (20) for all records. This is my mapper:

        public static class CanzoniMapper extends Mapper<Object, Text, Text, IntWritable> {
            private IntWritable userID = new IntWritable(0);
            private Text song = new Text();

            public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
                /* StringTokenizer itr = new StringTokenizer(value.toString());
                   while (itr.hasMoreTokens()) {
                       word.set(itr.nextToken());
                       context.write(word, one);
                   } */
                String[] caratteri = value.toString().split("#");
                if (caratteri[2].equals("1")) {
                    song.set(caratteri[0]);
                    userID.set(Integer.parseInt(caratteri[1]));
                    context.write(song, userID);
                }
            }
        }

    This is my reducer:

        public static class CanzoniReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
                Hashtable<IntWritable, Text> doppioni = new Hashtable<IntWritable, Text>();
                for (IntWritable val : values) {
                    doppioni.put(val, key);
                }
                result.set(doppioni.size());
                //doppioni.clear();
                context.write(key, result);
            }
        }

    And the main:

        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(Canzoni.class);
        job.setMapperClass(CanzoniMapper.class);
        //job.setCombinerClass(CanzoniReducer.class);
        //job.setNumReduceTasks(2);
        job.setReducerClass(CanzoniReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);

    Any ideas?

    Read the article

  • CakePHP: How can I change this find call to include all records that do not exist in the associated table?

    - by Stephen
    I have a few tables with the following relationships: Company hasMany Jobs, Employees, Trucks, and Users. I've got all my foreign keys set up properly, along with the tables' models, controllers, and views. Originally, the Jobs table had a boolean field called "assigned". The following find operation (from the JobsController) successfully returns all employees, all trucks, and any jobs that are not assigned and fall on a certain day for a single company (without returning users, by utilizing the containable behavior):

        $this->set('resources', $this->Job->Company->find('first', array(
            'conditions' => array(
                'Company.id' => $company_id
            ),
            'contain' => array(
                'Employee',
                'Truck',
                'Job' => array(
                    'conditions' => array(
                        'Job.assigned' => false,
                        'Job.pickup_date' => date('Y-m-d', strtotime('Today'))
                    )
                )
            )
        )));

    Now, since writing this code, I have decided to do a lot more with the job assignments. So I've created a new model, Assignment, that belongsTo Truck and belongsTo Job. I've added hasMany Assignments to both the Truck model and the Job model. I have both foreign keys in the assignments table, along with some other assignment fields. Now I'm trying to get the same information as above, only instead of checking the assigned field of the jobs table, I want to check the assignments table to ensure that the job does not exist there. I can no longer use the containable behavior if I'm going to use the "joins" feature of the find method, due to MySQL errors (according to the Cookbook). But the following query returns all jobs, even if they fall on different days:

        $this->set('resources', $this->Job->Company->find('first', array(
            'joins' => array(
                array(
                    'table' => 'employees',
                    'alias' => 'Employee',
                    'type' => 'LEFT',
                    'conditions' => array(
                        'Company.id = Employee.company_id'
                    )
                ),
                array(
                    'table' => 'trucks',
                    'alias' => 'Truck',
                    'type' => 'LEFT',
                    'conditions' => array(
                        'Company.id = Truck.company_id'
                    )
                ),
                array(
                    'table' => 'jobs',
                    'alias' => 'Job',
                    'type' => 'LEFT',
                    'conditions' => array(
                        'Company.id = Job.company_id'
                    )
                ),
                array(
                    'table' => 'assignments',
                    'alias' => 'Assignment',
                    'type' => 'LEFT',
                    'conditions' => array(
                        'Job.id = Assignment.job_id'
                    )
                )
            ),
            'conditions' => array(
                'Job.pickup_date' => $day,
                'Company.id' => $company_id,
                'Assignment.job_id IS NULL'
            )
        )));

    Read the article

  • What factors should a fresher (for a programmer job) consider and learn before saying yes to an employer's job offer?

    - by Senthil
    What factors should a fresher (applying for a programmer job) consider and learn before saying yes to an employer's job offer, and to a contract? And most importantly, how should one get the details, and how can I approach them? I know some employers don't want to give such details, right? I have been shortlisted by a software company that is a partner with Microsoft and works on technologies like VB, ADO.NET, SQL Server, and some other reporting tools. Tell me about the scope of that, because they are asking me to sign a 2-year bond agreement. I want to be a great programmer and a project leader after 5 years; advise me, guys. Language/OS is not a problem for me, as I am curious to learn more things. Most of the SO members are programmers, so your advice is greatly appreciated.

    Read the article

  • How do I control the output file names and content of a Hadoop streaming job?

    - by Eran Kampf
    Is there a way to control the output file names of a Hadoop streaming job? Specifically, I would like my job's output file names and content to be organized by the key the reducer outputs: each file would only contain values for one key, and its name would be the key. Update: Just found the answer - using a Java class that derives from MultipleOutputFormat as the job's output format allows control of the output file names: http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/mapred/lib/MultipleOutputFormat.html I haven't seen any samples for this out there... Can anyone point me to a Hadoop streaming sample that makes use of a custom output format Java class?

    Read the article

  • Hudson, is it possible to make a plugin configuration non-visible depending on job type?

    - by Haju
    The problem with the plugin (an SCM plugin) I'm working on is that it doesn't work in any job/project type other than a freestyle project. I'd like to hide the plugin configuration from the project configuration page for other job/project types (Maven, matrix, etc.), because it seems to distract people. I wonder if there's a "right" way of doing this, or any way at all? Currently the project type is checked first thing in the checkout method, and if it doesn't match, the build is failed instantly, but this is not a completely satisfactory solution, since it causes a bit more work for the end user.

    Read the article

  • Waiting for a submitted job to finish in Oracle PL/SQL?

    - by vicjugador
    I'm looking for the equivalent of Java's thread.join() in PL/SQL. I.e. I want to kick off a number of jobs (threads), and then wait for them to finish. How is this possible in PL/SQL? I'm thinking of using dbms_job.submit (I know it's deprecated); dbms_scheduler is also an alternative. My code:

        DECLARE
            jobno1 number;
            jobno2 number;
        BEGIN
            dbms_job.submit(jobno1, 'begin dbms_lock.sleep(10); dbms_output.put_line(''job 1 exit'');end;');
            dbms_job.submit(jobno2, 'begin dbms_lock.sleep(10); dbms_output.put_line(''job 2 exit'');end;');
            dbms_job.run(jobno1);
            dbms_job.run(jobno2);
            -- Need code to wait for jobno1 to finish
            -- Need code to wait for jobno2 to finish
        END;

    Read the article

  • Need help with a problem from a programming contest

    - by topher
    I attended a local programming contest in my country. The name of the contest is "ACM-ICPC Indonesia National Contest 2013". The contest ended on 2013-10-13 15:00:00 (GMT+7) and I am still curious about one of the problems. You can find the original version of the problem here. Brief problem explanation: There is a set of "jobs" (tasks) that should be performed on several "servers" (computers). Each job should be executed strictly from start time Si to end time Ei. Each server can only perform one task at a time. (The complicated part goes here.) It takes some time for a server to switch from one job to another: if a server finishes job Jx, then to start job Jy it will need an intermission time Tx,y after job Jx completes. This is the time required by the server to clean up job Jx and load job Jy. In other words, job Jy can run after job Jx if and only if Ex + Tx,y <= Sy. The problem is to compute the minimum number of servers needed to do all jobs. Example: let there be 3 jobs:

        S(1) = 3  and E(1) = 6
        S(2) = 10 and E(2) = 15
        S(3) = 16 and E(3) = 20

        T(1,2) = 2, T(1,3) = 5
        T(2,1) = 0, T(2,3) = 3
        T(3,1) = 0, T(3,2) = 0

    In this example we need 2 servers: server 1 runs J(1) and J(2), and server 2 runs J(3). Sample input (the first 3 is the number of test cases; each case gives the number of jobs, then Si and Ei for each job, then the T matrix, sized by the number of jobs):

        3
        3
        3 6
        10 15
        16 20
        0 2 5
        0 0 3
        0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 0 0 0
        0 0 0 0
        0 0 0 0
        0 0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 50 50 50
        50 0 50 50
        50 50 0 50
        50 50 50 0

    Sample output:

        Case #1: 2
        Case #2: 1
        Case #3: 4

    Personal comments: the switching times can be represented as a graph matrix, so I suppose this is a directed acyclic graph problem. The methods I have tried so far are brute force and greedy, but I got Wrong Answer (unfortunately I don't have my code anymore). It could probably be solved by dynamic programming too, but I'm not sure. I really have no clear idea how to solve this problem, so a simple hint or insight would be very helpful to me.
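
    One standard line of attack, for what it's worth (a sketch of one possible approach, not the verified judge solution): build a DAG with an edge Jx -> Jy whenever Ex + Tx,y <= Sy. A server then corresponds to a path in this DAG, so the answer is the minimum path cover, which for a DAG equals N minus the size of a maximum bipartite matching:

        # Minimum-path-cover sketch: servers = N - max_matching, where job x
        # (left side) is connected to job y (right side) iff job y can run
        # on the same server directly after job x.
        def min_servers(S, E, T):
            n = len(S)
            adj = [[y for y in range(n) if y != x and E[x] + T[x][y] <= S[y]]
                   for x in range(n)]
            match = [-1] * n  # match[y] = job whose chain continues into y

            def augment(x, seen):
                # Kuhn's augmenting-path step
                for y in adj[x]:
                    if not seen[y]:
                        seen[y] = True
                        if match[y] == -1 or augment(match[y], seen):
                            match[y] = x
                            return True
                return False

            matched = sum(augment(x, [False] * n) for x in range(n))
            return n - matched

        # first sample case from the problem statement -> prints 2
        print(min_servers([3, 10, 16], [6, 15, 20],
                          [[0, 2, 5], [0, 0, 3], [0, 0, 0]]))

    The graph is acyclic as long as every job starts before it ends and the T values are non-negative, which is what makes the path-cover formulation valid; for larger inputs, Hopcroft-Karp would replace the simple Kuhn matching.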

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and stay backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost, in this case additional I/O that has to be performed to load data on request. On a small site that has frequent traffic, this is usually not noticeable, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for jobs and the other for builds.

    For the jobs cache:

    hudson.jobs.cache.evict_in_seconds (default = 60)
    Seconds from last access (could be because of a servlet request or a background cron thread) after which a job should be purged from the cache. Set this to 0 to never purge based on time.

    hudson.jobs.cache.initial_capacity (default = 1024)
    Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large, you may consider downsizing and using that memory for the builds cache instead.

    hudson.jobs.cache.max_entries (default = 1024)
    Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity whenever you access the Hudson home page you might consider increasing this; first verify, though, whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

    For the builds cache:

    The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular job from the Hudson home page. The cache is shared among builds for different jobs, since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    hudson.job.builds.cache.evict_in_seconds (default = 60)
    Same as the equivalent jobs cache setting, applied to builds.

    hudson.job.builds.cache.initial_capacity (default = 512)
    Same as the equivalent jobs cache setting. Note the smaller initial size. If your site stores a large number of builds and frequently accesses many of them, you might consider bumping this up.

    hudson.job.builds.cache.max_entries (default = 10240)
    The default max is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing the upper limit on this. See the section on monitoring below.
    Sample usage:

        java -jar hudson-war-3.1.2-SNAPSHOT.war \
            -Dhudson.jobs.cache.evict_in_seconds=300 \
            -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage: the jmap tool that comes with the JDK can be used to monitor cache performance in an indirect way, by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

        num    #instances    #bytes    class name
        523:   28            896       hudson.model.RunMap$LazyRunValue$Key
        1200:  3             96        hudson.model.LazyTopLevelItem$Key

    These are the keys to the jobs (LazyTopLevelItem$Key) and builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in each cache at any given moment. The sizes in bytes can be ignored: they are just the sizes of the keys, not the actual sizes of the objects they point to. Those sizes can only be obtained with a profiler. From the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all be from 1 job or spread across all 3 jobs. Over time on an idle system, these should get evicted and the cache should be empty; in practice, because of background cron threads and triggers, jobs rarely fall to zero. Access to a job or a build by a cron thread resets the eviction timer.

    Read the article

  • mysql: job failed to start. mysqld.sock is missing

    - by Freefri
    How can I fix this and start mysql-server? After /etc/init.d/mysql start or service mysql start I get the message: start: "Job failed to start". And after running mysqld I get this:

        121123 11:33:33 [ERROR] Can't find messagefile '/usr/share/mysql/errmsg.sys'
        121123 11:33:33 [Note] Plugin 'FEDERATED' is disabled.
        mysqld: Unknown error 1146
        121123 11:33:33 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        121123 11:33:33 InnoDB: The InnoDB memory heap is disabled
        121123 11:33:33 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121123 11:33:33 InnoDB: Compressed tables use zlib 1.2.3.4
        121123 11:33:33 InnoDB: Initializing buffer pool, size = 128.0M
        121123 11:33:33 InnoDB: Completed initialization of buffer pool
        121123 11:33:33 InnoDB: highest supported file format is Barracuda.
        121123 11:33:33 InnoDB: Waiting for the background threads to start
        121123 11:33:34 InnoDB: 1.1.8 started; log sequence number 1595675
        121123 11:33:34 [ERROR] Aborting
        121123 11:33:34 InnoDB: Starting shutdown...
        121123 11:33:35 InnoDB: Shutdown completed; log sequence number 1595675
        121123 11:33:35 [Note]

    I try to do what mysql tells me to do and run mysql_upgrade:

        Looking for 'mysql' as: mysql
        Looking for 'mysqlcheck' as: mysqlcheck
        Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
        mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect
        FATAL ERROR: Upgrade failed

    And yes, /var/run/mysqld is empty. This is my /etc/mysql/my.cnf:

        # cat /etc/mysql/my.cnf | grep sock
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        socket = /var/run/mysqld/mysqld.sock
        socket = /var/run/mysqld/mysqld.sock
        socket = /var/run/mysqld/mysqld.sock

    Then I try to reinstall mysql from scratch:

        apt-get purge mysql-client mysql-common mysql-server
        rm -R /var/lib/mysql
        rm -R /etc/mysql
        rm -R /var/run/mysqld
        userdel mysql
        apt-get install mysql-server mysql-client

    Then, after typing my root password for mysql, I get this error:

        Unable to set password for the MySQL "root" user
        An error occurred while setting the password for the MySQL administrative
        user. This may have happened because the account already has a password, or
        because of a communication problem with the MySQL server.
        You should check the account's password after the package installation.
        Please read the /usr/share/doc/mysql-server-5.5/README.Debian file for more
        information.

    And again I can't start mysql; I get the same messages.
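
    One hedged place to start (the commands assume a Debian/Ubuntu system, and the package name shown is a typical one, not confirmed from the question): the first logged error is the missing /usr/share/mysql/errmsg.sys, and everything after it cascades from that, so finding out which package should own that file and reinstalling it is a reasonable first step:

        # which package is supposed to provide the missing file?
        dpkg -S /usr/share/mysql/errmsg.sys

        # if it reports something like mysql-server-core-5.5,
        # reinstall that package and check that the file reappears
        apt-get install --reinstall mysql-server-core-5.5
        ls -l /usr/share/mysql/errmsg.sys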

    Read the article

  • Reading data from DDFS: ValueError: No JSON object could be decoded

    - by secumind
    I'm running dozens of map reduce jobs for a number of different purposes using disco. My data has grown enormous and I thought I would try using DDFS for a change rather than standard txt files. I've followed the DISCO map/reduce example Counting Words as a map/reduce job, without to much difficulty and with the help of others, Reading JSON specific data into DISCO I've gotten past one of my latest problems. I'm trying to read data in/out of ddfs to better chunk and distribute it but am having a bit of trouble. Here's an example file: file.txt {"favorited": false, "in_reply_to_user_id": null, "contributors": null, "truncated": false, "text": "I'll call him back tomorrow I guess", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": null, "entities": {"user_mentions": [], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016843603968", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/305726905/FASHION-3.png", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1818996723/image_normal.jpg", "profile_sidebar_fill_color": "292727", "is_translator": false, "id": 113532729, "profile_text_color": "000000", "followers_count": 78, "protected": false, "location": "With My Niggas In Paris!", "default_profile_image": false, "listed_count": 0, "utc_offset": -21600, "statuses_count": 6733, "description": "Made in CHINA., Educated && Making My Own $$. Fear GOD && Put Him 1st. #TeamFollowBack #TeamiPhone\n", "friends_count": 74, "profile_link_color": "b03f3f", "profile_image_url": "http://a2.twimg.com/profile_images/1818996723/image_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "1f9199", "id_str": "113532729", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/305726905/FASHION-3.png", "name": "Bee'Jay", "lang": "en", "profile_background_tile": true, "favourites_count": 19, "screen_name": "OohMyBEEsNice", "url": "http://www.bitchimpaid.org", "created_at": "Fri Feb 12 03:32:54 +0000 2010", "contributors_enabled": false, "time_zone": "Central Time (US & Canada)", "profile_sidebar_border_color": "000000", "default_profile": false, "following": null}, "in_reply_to_screen_name": null, "retweet_count": 0, "geo": null, "id": 168931016843603968, "source": "<a href=\"http://twitter.com/#!/download/iphone\" rel=\"nofollow\">Twitter for iPhone</a>"} {"favorited": false, "in_reply_to_user_id": 50940453, "contributors": null, "truncated": false, "text": "@LegaMrvica @MimozaBand makasi om artis :D kadoo kadoo", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": "168653037894770688", "coordinates": null, "in_reply_to_user_id_str": "50940453", "entities": {"user_mentions": [{"indices": [0, 11], "screen_name": "LegaMrvica", "id": 50940453, "name": "Lega_thePianis", "id_str": "50940453"}, {"indices": [12, 23], "screen_name": "MimozaBand", "id": 375128905, "name": "Mimoza", "id_str": "375128905"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": 168653037894770688, "id_str": "168931016868761600", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": 
"https://si0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "profile_sidebar_fill_color": "DDFFCC", "is_translator": false, "id": 48293450, "profile_text_color": "333333", "followers_count": 182, "protected": false, "location": "\u00dcT: -6.906799,107.622383", "default_profile_image": false, "listed_count": 0, "utc_offset": -28800, "statuses_count": 3052, "description": "Fashion design maranatha '11 // traditional dancer (bali) at sanggar tampak siring & Natya Nataraja", "friends_count": 206, "profile_link_color": "0084B4", "profile_image_url": "http://a3.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "9AE4E8", "id_str": "48293450", "profile_background_image_url": "http://a0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "name": "nana afiff", "lang": "en", "profile_background_tile": true, "favourites_count": 2, "screen_name": "hasnfebria", "url": null, "created_at": "Thu Jun 18 08:50:29 +0000 2009", "contributors_enabled": false, "time_zone": "Pacific Time (US & Canada)", "profile_sidebar_border_color": "BDDCAD", "default_profile": false, "following": null}, "in_reply_to_screen_name": "LegaMrvica", "retweet_count": 0, "geo": null, "id": 168931016868761600, "source": "<a href=\"http://blackberry.com/twitter\" rel=\"nofollow\">Twitter for BlackBerry\u00ae</a>"} {"favorited": false, "in_reply_to_user_id": 27260086, "contributors": null, "truncated": false, "text": "@justinbieber u were born to be somebody, and u're super important in beliebers' life. thanks for all biebs. I love u. follow me? 84", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": "27260086", "entities": {"user_mentions": [{"indices": [0, 13], "screen_name": "justinbieber", "id": 27260086, "name": "Justin Bieber", "id_str": "27260086"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016856178688", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/416005864/Captura.JPG", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "profile_sidebar_fill_color": "f5e7f3", "is_translator": false, "id": 406750700, "profile_text_color": "333333", "followers_count": 1122, "protected": false, "location": "Adentro de una supra.", "default_profile_image": false, "listed_count": 0, "utc_offset": -14400, "statuses_count": 20966, "description": "Mi \u00eddolo es @justinbieber , si te gusta \u00a1genial!, si no, solo respetalo. El cambi\u00f3 mi vida completamente y mi sue\u00f1o es conocerlo #TrueBelieber . 
", "friends_count": 1015, "profile_link_color": "9404b8", "profile_image_url": "http://a1.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "notifications": null, "show_all_inline_media": false, "geo_enabled": false, "profile_background_color": "f9fcfa", "id_str": "406750700", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/416005864/Captura.JPG", "name": "neversaynever,right?", "lang": "es", "profile_background_tile": false, "favourites_count": 22, "screen_name": "True_Belieebers", "url": "http://www.wehavebieber-fever.tumblr.com", "created_at": "Mon Nov 07 04:17:40 +0000 2011", "contributors_enabled": false, "time_zone": "Santiago", "profile_sidebar_border_color": "C0DEED", "default_profile": false, "following": null}, "in_reply_to_screen_name": "justinbieber", "retweet_count": 0, "geo": null, "id": 168931016856178688, "source": "<a href=\"http://yfrog.com\" rel=\"nofollow\">Yfrog</a>"} I load it into DDFS with: # ddfs chunk data:test1 ./file.txt created: disco://localhost/ddfs/vol0/blob/44/file_txt-0$549-db27b-125e1 I test that the file is indeed loaded into ddfs with: # ddfs xcat data:test1 {"favorited": false, "in_reply_to_user_id": null, "contributors": null, "truncated": false, "text": "I'll call him back tomorrow I guess", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": null, "entities": {"user_mentions": [], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016843603968", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/305726905/FASHION-3.png", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1818996723/image_normal.jpg", "profile_sidebar_fill_color": "292727", "is_translator": false, "id": 113532729, "profile_text_color": "000000", "followers_count": 78, "protected": false, "location": "With My Niggas In Paris!", "default_profile_image": false, "listed_count": 0, "utc_offset": -21600, "statuses_count": 6733, "description": "Made in CHINA., Educated && Making My Own $$. Fear GOD && Put Him 1st. 
#TeamFollowBack #TeamiPhone\n", "friends_count": 74, "profile_link_color": "b03f3f", "profile_image_url": "http://a2.twimg.com/profile_images/1818996723/image_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "1f9199", "id_str": "113532729", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/305726905/FASHION-3.png", "name": "Bee'Jay", "lang": "en", "profile_background_tile": true, "favourites_count": 19, "screen_name": "OohMyBEEsNice", "url": "http://www.bitchimpaid.org", "created_at": "Fri Feb 12 03:32:54 +0000 2010", "contributors_enabled": false, "time_zone": "Central Time (US & Canada)", "profile_sidebar_border_color": "000000", "default_profile": false, "following": null}, "in_reply_to_screen_name": null, "retweet_count": 0, "geo": null, "id": 168931016843603968, "source": "<a href=\"http://twitter.com/#!/download/iphone\" rel=\"nofollow\">Twitter for iPhone</a>"} {"favorited": false, "in_reply_to_user_id": 50940453, "contributors": null, "truncated": false, "text": "@LegaMrvica @MimozaBand makasi om artis :D kadoo kadoo", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": "168653037894770688", "coordinates": null, "in_reply_to_user_id_str": "50940453", "entities": {"user_mentions": [{"indices": [0, 11], "screen_name": "LegaMrvica", "id": 50940453, "name": "Lega_thePianis", "id_str": "50940453"}, {"indices": [12, 23], "screen_name": "MimozaBand", "id": 375128905, "name": "Mimoza", "id_str": "375128905"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": 168653037894770688, "id_str": "168931016868761600", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "profile_sidebar_fill_color": "DDFFCC", "is_translator": false, "id": 48293450, "profile_text_color": "333333", "followers_count": 182, "protected": false, "location": "\u00dcT: -6.906799,107.622383", "default_profile_image": false, "listed_count": 0, "utc_offset": -28800, "statuses_count": 3052, "description": "Fashion design maranatha '11 // traditional dancer (bali) at sanggar tampak siring & Natya Nataraja", "friends_count": 206, "profile_link_color": "0084B4", "profile_image_url": "http://a3.twimg.com/profile_images/1803845596/Picture_20124_normal.jpg", "notifications": null, "show_all_inline_media": false, "geo_enabled": true, "profile_background_color": "9AE4E8", "id_str": "48293450", "profile_background_image_url": "http://a0.twimg.com/profile_background_images/347686061/Galungan_dan_Kuningan.jpg", "name": "nana afiff", "lang": "en", "profile_background_tile": true, "favourites_count": 2, "screen_name": "hasnfebria", "url": null, "created_at": "Thu Jun 18 08:50:29 +0000 2009", "contributors_enabled": false, "time_zone": "Pacific Time (US & Canada)", "profile_sidebar_border_color": "BDDCAD", "default_profile": false, "following": null}, "in_reply_to_screen_name": "LegaMrvica", "retweet_count": 0, "geo": null, "id": 168931016868761600, "source": "<a href=\"http://blackberry.com/twitter\" rel=\"nofollow\">Twitter for BlackBerry\u00ae</a>"} {"favorited": false, "in_reply_to_user_id": 27260086, "contributors": null, "truncated": false, "text": "@justinbieber u were born to be somebody, and u're super 
important in beliebers' life. thanks for all biebs. I love u. follow me? 84", "created_at": "Mon Feb 13 05:34:27 +0000 2012", "retweeted": false, "in_reply_to_status_id_str": null, "coordinates": null, "in_reply_to_user_id_str": "27260086", "entities": {"user_mentions": [{"indices": [0, 13], "screen_name": "justinbieber", "id": 27260086, "name": "Justin Bieber", "id_str": "27260086"}], "hashtags": [], "urls": []}, "in_reply_to_status_id": null, "id_str": "168931016856178688", "place": null, "user": {"follow_request_sent": null, "profile_use_background_image": true, "profile_background_image_url_https": "https://si0.twimg.com/profile_background_images/416005864/Captura.JPG", "verified": false, "profile_image_url_https": "https://si0.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "profile_sidebar_fill_color": "f5e7f3", "is_translator": false, "id": 406750700, "profile_text_color": "333333", "followers_count": 1122, "protected": false, "location": "Adentro de una supra.", "default_profile_image": false, "listed_count": 0, "utc_offset": -14400, "statuses_count": 20966, "description": "Mi \u00eddolo es @justinbieber , si te gusta \u00a1genial!, si no, solo respetalo. El cambi\u00f3 mi vida completamente y mi sue\u00f1o es conocerlo #TrueBelieber . ", "friends_count": 1015, "profile_link_color": "9404b8", "profile_image_url": "http://a1.twimg.com/profile_images/1808883280/Captura6_normal.JPG", "notifications": null, "show_all_inline_media": false, "geo_enabled": false, "profile_background_color": "f9fcfa", "id_str": "406750700", "profile_background_image_url": "http://a3.twimg.com/profile_background_images/416005864/Captura.JPG", "name": "neversaynever,right?", "lang": "es", "profile_background_tile": false, "favourites_count": 22, "screen_name": "True_Belieebers", "url": "http://www.wehavebieber-fever.tumblr.com", "created_at": "Mon Nov 07 04:17:40 +0000 2011", "contributors_enabled": false, "time_zone": "Santiago", "profile_sidebar_border_color": "C0DEED", "default_profile": false, "following": null}, "in_reply_to_screen_name": "justinbieber", "retweet_count": 0, "geo": null, "id": 168931016856178688, "source": "<a href=\"http://yfrog.com\" rel=\"nofollow\">Yfrog</a> At this point everything is great, I load up the script that resulted from a previous Stack Post: from disco.core import Job, result_iterator import gzip def map(line, params): import unicodedata import json r = json.loads(line).get('text') s = unicodedata.normalize('NFD', r).encode('ascii', 'ignore') for word in s.split(): yield word, 1 def reduce(iter, params): from disco.util import kvgroup for word, counts in kvgroup(sorted(iter)): yield word, sum(counts) if __name__ == '__main__': job = Job().run(input=["tag://data:test1"], map=map, reduce=reduce) for word, count in result_iterator(job.wait(show=True)): print word, count NOTE: That this script runs file if the input=["file.txt"], however when I run it with "tag://data:test1" I get the following error: # DISCO_EVENTS=1 python count_normal_words.py Job@549:db30e:25bd8: Status: [map] 0 waiting, 1 running, 0 done, 0 failed 2012/11/25 21:43:26 master New job initialized! 
        2012/11/25 21:43:26 master   Starting job
        2012/11/25 21:43:26 master   Starting map phase
        2012/11/25 21:43:26 master   map:0 assigned to solice
        2012/11/25 21:43:26 master   ERROR: Job failed: Worker at 'solice' died:
        Traceback (most recent call last):
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 329, in main
            job.worker.start(task, job, **jobargs)
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/__init__.py", line 290, in start
            self.run(task, job, **jobargs)
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 286, in run
            getattr(self, task.mode)(task, params)
          File "/home/DISCO/data/solice/01/Job@549:db30e:25bd8/usr/local/lib/python2.7/site-packages/disco/worker/classic/worker.py", line 299, in map
            for key, val in self['map'](entry, params):
          File "count_normal_words.py", line 12, in map
          File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
            return _default_decoder.decode(s)
          File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
            obj, end = self.raw_decode(s, idx=_w(s, 0).end())
          File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
            raise ValueError("No JSON object could be decoded")
        ValueError: No JSON object could be decoded
        2012/11/25 21:43:26 master   WARN: Job killed
        Status: [map] 1 waiting, 0 running, 0 done, 1 failed
        Traceback (most recent call last):
          File "count_normal_words.py", line 28, in <module>
            for word, count in result_iterator(job.wait(show=True)):
          File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 348, in wait
            timeout, poll_interval * 1000)
          File "/usr/local/lib/python2.7/site-packages/disco/core.py", line 309, in check_results
            raise JobError(Job(name=jobname, master=self), "Status %s" % status)
        disco.error.JobError: Job Job@549:db30e:25bd8 failed: Status dead

    The error states: ValueError: No JSON object could be decoded. Again, this works fine using the text file as input, but not with DDFS. Any ideas? I'm open to suggestions.
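
    One way to see what is actually going on (a debugging sketch, not a confirmed fix): the records handed to the map function by the chunk reader may not be the exact text lines that went into ddfs chunk; framing, encoding, or partial records would all break json.loads. Logging the records that fail to parse, instead of letting the first one kill the job, shows exactly what the mapper receives:

        def map(line, params):
            # Tolerant variant of the mapper: log undecodable records to the
            # task's stderr (visible in the Disco job events) and skip them.
            import json
            import sys
            try:
                text = json.loads(line).get('text')
            except ValueError:
                sys.stderr.write("unparseable record: %r\n" % line[:200])
                return
            for word in text.split():
                yield word, 1

    If every record logs as unparseable, the input is not arriving line-by-line at all and the job needs a reader that matches how the data was chunked; if only a few do, the chunking split some records mid-line.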

    Read the article

  • How important is the programming language when you choose a new job?

    - by Luhmann
    We are currently hiring at the company where I work, and here the codebase is in VB.NET. We are worried that we miss out on a lot of brilliant programmers who would never consider working with VB.NET. My own background is Java and C#, and I was somewhat sceptical as to whether it would work out with VB, as, to be honest, I didn't care much for VB. After a month or so I was completely fluent in VB, and a few months later I discovered to my surprise that I actually like VB. I still code my free-time projects in C# and Boo, though. So my question is, firstly: how important is the language for you when you choose a new programming job? Let's say it's a great company, the salary is good, and it's generally an attractive workplace. Would you say no to the perfect job if the language wasn't your preferred dialect? VB or C# is one thing, but what about Java vs. C#, etc.? Secondly, if the best developers won't join your company because of your language or platform, would you consider changing it to get the right people? (This is not a language-bashing thread, so please no religious language wars.) NB: This is Community Wiki.

    Read the article

  • Unable to execute gs program: No such file or directory

    - by Imran
    I've set up CUPS + Avahi on my NAS box in order to enable AirPrint with my existing network printer. Printing a test page via CUPS and printing via lp work fine, and I am able to see my printer in the printer list on my iOS device. However, when sending a print job from my iOS device, the printer status is set to paused and nothing prints. Checking error_log I found these lines, which I believe show the cause:

        D [04/Sep/2012:03:20:25 +0100] [Job 11] Started filter gs (PID 7485)
        D [04/Sep/2012:03:20:25 +0100] [Job 11] Started filter pstops (PID 7486)
        D [04/Sep/2012:03:20:25 +0100] [Job 11] Set job-printer-state-message to "Unable to execute gs program: No such file or directory", current level=ERROR
        D [04/Sep/2012:03:20:25 +0100] [Job 11] PID 7485 (gs) stopped with status 1!
        D [04/Sep/2012:03:20:25 +0100] [Job 11] PID 7486 (pstops) stopped with status 1!
        D [04/Sep/2012:03:20:25 +0100] [Job 11] Backend returned status 1 (failed)
        D [04/Sep/2012:03:20:25 +0100] [Job 11] Printer stopped due to backend errors; please consult the error_log file for details.

    I have installed Ghostscript, so I'm not quite sure why it says it's unable to execute the program, unless there are GS configurations I haven't set yet. Any ideas?
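
    A few hedged checks that follow directly from that message (paths vary by distro, so treat these as a sketch): cupsd launches filters with a restricted PATH, so gs being installed is not the same as gs being findable by the filter:

        # is gs installed, and where?
        which gs
        gs --version

        # cupsd typically gives filters a minimal PATH such as /usr/bin:/bin;
        # if gs lives somewhere unusual (common on NAS builds), a symlink
        # into /usr/bin is a blunt but effective workaround
        ls -l /usr/bin/gs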

    Read the article

  • Bacula Windows client could not connect to Bacula director

    - by pr0f-r00t
    I have a Bacula server on my Linux Debian squeeze host (Bacula version 5.0.2) and a Bacula client on Windows XP SP3. On my network each client can see each other, can share files and can ping. On my local server I could run bconsole and the server responds but when I run bconsole or bat on my windows client the server does not respond. Here are my configuration files: bacula-dir.conf: # # Default Bacula Director Configuration file # # The only thing that MUST be changed is to add one or more # file or directory names in the Include directive of the # FileSet resource. # # For Bacula release 5.0.2 (28 April 2010) -- debian squeeze/sid # # You might also want to change the default email address # from root to your address. See the "mail" and "operator" # directives in the Messages resource. # Director { # define myself Name = nima-desktop-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 1 Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" # Console password Messages = Daemon DirAddress = 127.0.0.1 # DirAddress = 72.16.208.1 } JobDefs { Name = "DefaultJob" Type = Backup Level = Incremental Client = nima-desktop-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultJob" } #Job { # Name = "BackupClient2" # Client = nima-desktop2-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=nima-desktop-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /nonexistant/path/to/file/archive/dir/bacula-restores } # job for vmware windows host Job { Name = "nimaxp-fd" Type = Backup Client = nimaxp-fd FileSet = "nimaxp-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # job for vmware windows host Job { Name = "arg-michael-fd" Type = Backup Client = nimaxp-fd FileSet = "arg-michael-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } # # Put your list of files here, preceded by 'File =', one per line # or include an external list with: # # File = <file-name # # Note: / backs up everything on the root partition. # if you have other partitions such as /usr or /home # you will probably want to add them too. 
# # By default this is defined to point to the Bacula binary # directory to give a reasonable FileSet to backup to # disk storage during initial testing. # File = /usr/sbin } # # If you backup the root directory, the following two excluded # files can be useful # Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "nimaxp-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # List of files to be backed up FileSet { Name = "arg-michael-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # # When to do the backups, full backup on first sunday of the month, # differential (i.e. incremental since full) every other sunday, # and incremental backups other days Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = nima-desktop-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V6" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = nimaxp-fd Address = nimaxp FDPort = 9102 Catalog = MyCatalog Password = "Ku8F1YAhDz5EMUQjiC9CcSw95Aho9XbXailUmjOaAXJP" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = arg-michael-fd Address = 192.168.0.61 FDPort = 9102 Catalog = MyCatalog Password = "b4E9FU6s/9Zm4BVFFnbXVKhlyd/zWxj0oWITKK6CALR/" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = nima-desktop2-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V62" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" Device = FileStorage Media Type = File } # Definition of DDS tape storage device #Storage { # Name = DDS-4 # Do not use "localhost" here # Address = localhost # N.B. 
Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # password for Storage daemon # Device = DDS-4 # must be same as Device in Storage daemon # Media Type = DDS-4 # must be same as MediaType in Storage daemon # Autochanger = yes # enable for autochanger device #} # Definition of 8mm tape storage device #Storage { # Name = "8mmDrive" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "Exabyte 8mm" # MediaType = "8mm" #} # Definition of DVD storage device #Storage { # Name = "DVD" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "DVD Writer" # MediaType = "DVD" #} # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; dbuser = ""; dbpassword = "" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard # # NOTE! If you send to two email or more email addresses, you will need # to replace the %r in the from field (-f part) with a single valid # email address in both the mailcommand and the operatorcommand. # What this does is, it sets the email address that emails would display # in the FROM field, which is by default the same email as they're being # sent to. However, if you send email to more than one address, then # you'll have to set the FROM address manually, to a single address. # for example, a '[email protected]', is better since that tends to # tell (most) people that its coming from an automated source. # mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = nima-desktop-mon Password = "-T0h6HCXWYNy0wWqOomysMvRGflQ_TA6c" CommandACL = status, .status } bacula-fd.conf on client: # # Default Bacula File Daemon Configuration file # # For Bacula release 5.0.3 (08/05/10) -- Windows MinGW32 # # There is not much to change here except perhaps the # File daemon Name # # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = nimaxp-fd FDport = 9102 # where we listen for the director WorkingDirectory = "C:\\Program Files\\Bacula\\working" Pid Directory = "C:\\Program Files\\Bacula\\working" # Plugin Directory = "C:\\Program Files\\Bacula\\plugins" Maximum Concurrent Jobs = 10 } # # List Directors who are permitted to contact this File daemon # Director { Name = Nima-desktop-dir Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = nimaxp-mon Password = "q5b5g+LkzDXorMViFwOn1/TUnjUyDlg+gRTBp236GrU3" Monitor = yes } # Send all messages except skipped files back to Director Messages { Name = Standard director = Nima-desktop = all, !skipped, !restored } I have checked my firewall and disabled the firewall but it doesn't work.
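
    Before digging through the configs, a hedged first step (assuming the host names resolve as written in the configuration above) is to confirm raw TCP connectivity in both directions, since the director dials the client's file daemon on port 9102 and the client's console dials the director on port 9101:

        # from the Debian director host: can we reach the Windows FD?
        nc -vz nimaxp 9102

        # from the Windows XP client (which has telnet but no nc):
        # does the director answer?
        telnet nima-desktop 9101

    If both connect, the next usual suspects are the Director/FileDaemon resource names and passwords, which must match exactly between bacula-dir.conf and the client's bacula-fd.conf.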

    Read the article

  • Hadoop streaming with Python and the Python subprocess module

    - by Ganesh
    I have established a basic Hadoop master/slave cluster setup and am able to run mapreduce programs (including Python) on the cluster. Now I am trying to run a Python code which accesses a C binary, so I am using the subprocess module. I am able to use Hadoop streaming for a normal Python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised for the packaging, but the code still does not run:

        ...
        packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
        JarBuilder.addNamedStream hello
        ...
        12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
        12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
        12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
        12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
        12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
        12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
        Streaming Job Failed!

    The command I am trying is:

        hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose

    where hello is the C executable. It is a simple helloworld program which I am using to check the basic functioning. My Python code is:

        #!/usr/bin/env python
        import subprocess
        subprocess.call(["./hello"])

    Any help with getting the executable to run with Python in Hadoop streaming, or help with debugging this, will get me forward. Thanks, Ganesh
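
    For comparison, a minimal sketch of a mapper that drives a shipped binary explicitly (the ./hello path matches the question's -file option; everything else is an assumption, not a confirmed fix). Two common failure points with this pattern are the executable bit getting lost in shipping and a mapper that never consumes its stdin, which makes the streaming framework die with a broken pipe:

        #!/usr/bin/env python
        # Streaming mapper sketch: pipe each input record through a binary
        # shipped with -file (shipped files land in the task's working dir).
        import os
        import stat
        import subprocess
        import sys

        BINARY = "./hello"

        # restore the executable bit in case it was lost in transit
        os.chmod(BINARY, os.stat(BINARY).st_mode | stat.S_IEXEC)

        proc = subprocess.Popen([BINARY], stdin=subprocess.PIPE, stdout=sys.stdout)
        for line in sys.stdin:          # always drain the task's stdin
            proc.stdin.write(line)
        proc.stdin.close()
        proc.wait()
        sys.exit(proc.returncode)       # a nonzero exit marks the task failed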

    Read the article

  • Loading a UITableView From A Nib

    - by Garry
    Hi, I keep getting a crash when loading a UITableView. I am trying to use a cell defined in a nib file. I have an IBOutlet defined in the view controller header file:

        UITableViewCell *jobCell;
        @property (nonatomic, assign) IBOutlet UITableViewCell *jobCell;

    This is synthesized in the implementation file. I have a UITableViewCell created in IB and set its identifier to JobCell. Here is the cellForRowAtIndexPath: method:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *cellIdentifier = @"JobCell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier];
            if (cell == nil) {
                [[NSBundle mainBundle] loadNibNamed:@"JobsRootViewController" owner:self options:nil];
                cell = jobCell;
                self.jobCell = nil;
            }

            // Get this job
            Job *job = [fetchedResultsController objectAtIndexPath:indexPath];

            // Job title
            UILabel *jobTitle;
            jobTitle = (UILabel *)[cell viewWithTag:tagJobTitle];
            jobTitle.text = job.title;

            // Job due date
            UILabel *dueDate;
            dueDate = (UILabel *)[cell viewWithTag:tagJobDueDate];
            dueDate.text = [self.dateFormatter stringFromDate:job.dueDate];

            // Notes icon
            UIImageView *notesImageView;
            notesImageView = (UIImageView *)[cell viewWithTag:tagNotesImageView];
            if ([job.notes length] > 0) {
                // This job has a note attached to it - show the notes icon
                notesImageView.hidden = NO;
            } else {
                // Hide the notes icon
                notesImageView.hidden = YES;
            }

            // Job completed button

            // Return the cell
            return cell;
        }

    When I run the app, I get a hard crash and the console reports the following:

        objc[1291]: FREED(id): message style sent to freed object=0x4046400

    I have hooked up all the outlets in IB correctly. What is the issue? Thanks,

    Read the article

  • How can I use a cron job in Social Engine?

    - by Rajendra Banker
    Hi all, I am new to cron jobs in PHP. Basically, I want to send email to users at certain intervals: daily, weekly, monthly, quarterly, yearly, or every specific number of days. I want to use this type of function in a Smarty template. Does anybody know how to do this?
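
    For the scheduling half, a hedged sketch (the script path and argument convention are placeholders; Social Engine just needs a PHP script that cron can be pointed at): the fixed intervals live in the system crontab, and anything cron cannot express directly, like quarterly, is decided inside the script by comparing each user's last-sent date:

        # m h dom mon dow  command
        # daily at 00:05
        5 0 * * *    php /path/to/send_notifications.php daily
        # weekly, Mondays at 00:10
        10 0 * * 1   php /path/to/send_notifications.php weekly
        # monthly, first day of the month at 00:15
        15 0 1 * *   php /path/to/send_notifications.php monthly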

    Read the article

  • How would you explain your job to a 5-year-old?

    - by Canavar
    Sometimes it's difficult to explain programming to people. Especially very old or very young people cannot understand what I do to earn money. They think that I repair computers, or they prefer to think that I (as an engineer) build computers at work. :) It's really hard to tell people that you produce something they can't touch. So here is a fun question: how would you explain your job to a 5-year-old?

    Read the article

  • Having a problem displaying the last inserted data

    - by Gideon
    I'm designing a staff rota planner. I have three tables: Staff (staff details), Event (event details), and Job (JobId, JobDate, EventId (fk), StaffId (fk)). I need to display the last inserted job details with the staff names. I've been at it for a couple of hours and am getting nowhere. Thanks for the help in advance. My code is the following:

        $eventId = $_POST['eventid'];
        $selectBox = $_POST['selectbox'];
        $timePeriod = $_POST['time'];
        $selectedDate = $_POST['date'];
        $count = count($selectBox);

        // constructing the staff selection
        if (empty($selectBox)) {
            echo "<p>You didn't select any member of staff to be assigned.";
            echo "<p><input type='button' value='Go Back' onClick='history.go(-1)'>";
        } else {
            echo "<p> You selected ".$count." staff for this show.";
            for ($i = 0; $i < $count; $i++) {
                $selectId = $selectBox[$i];
                // insert the details into the Job table in the database
                $insertJob = "INSERT INTO Job (JobDate, TimePeriod, EventId, StaffId) VALUES ('".$selectedDate."', '".$timePeriod."', ".$eventId.", ".$selectId.")";
                $exeinsertJob = mysql_query($insertJob) or die (mysql_error());
            }
        }

        // display the inserted job details
        $insertedlist = "SELECT Job.JobId, Staff.LastName, Staff.FirstName, Job.JobDate, Job.TimePeriod FROM Staff, Job WHERE Job.StaffId = Staff.StaffId AND Job.EventId = $eventId AND Job.JobDate = ".$selectedDate;
        $exeinsertlist = mysql_query($insertedlist) or die (mysql_error());
        if ($exeinsertlist) {
            echo "<p><table cellspacing='1' cellpadding='3'>";
            echo "<tr><th colspan=5> ".$eventname."</th></tr>";
            echo "<tr><th>Job Id</th><th>Last Name</th><th>First Name</th><th>Date</th><th>Hours</th></tr>";
            while ($joblistarray = mysql_fetch_array($exeinsertlist)) {
                echo "<tr><td align=center>".$joblistarray['JobId']."</td><td align=center>".$joblistarray['LastName']."</td><td align=center>".$joblistarray['FirstName']."</td><td align=center>".$joblistarray['JobDate']."</td><td align=center>".$joblistarray['TimePeriod']."</td></tr>";
            }
            echo "</table>";
            echo "<h3><a href=AssignStaff.php>Add More Staff?</a></h3>";
        } else {
            echo "The Job list can not be displayed at this time. Try again.";
            echo "<p><input type='button' value='Go Back' onClick='history.go(-1)'>";
        }

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >