Search Results

Search found 7263 results on 291 pages for 'job boards'.


  • Unix shell script with iSeries command

    - by user293058
    I am trying to FTP a file from Unix to an AS/400 and execute an iSeries command in the script. The FTP itself works fine, but I am getting an error on the SBMJOB command that specifies the JOBD. The script:

        HOST=KCBNSXDD.svr.us.bank.net
        USER=test
        PASS=1234  # This is the password for the FTP user.

        ftp -env $HOST << EOF
        # Call 2. Here the login credentials are supplied by calling the variables.
        user $USER $PASS
        # Call 3. Here you will change to the directory where you want to put or get
        cd "\$QARCVBEN"
        # Call 4. Here you will tell FTP to put or get the file.
        #Ebcdic
        #Mode b
        quote site crtccsid *user
        quote site crtccsid *sysval
        put prod.txt
        quote rcmd sbmjob cmd(call pgm(pmtiprcc0) parm('prod' 'DEV')) job(\$pmtiprcc) jobd(orderbatch)

    The error returned:

        550-Error occurred on command SBMJOB cmd(call pgm(pmtiprcc0)) job($pmtiprcc) jobd(orderbatch).
        550 Errors occurred on SBMJOB command..
        221 QUIT subcommand received.

    Read the article

  • Quartz.NET Instance Handling Problem

    - by Dhon
    Hi, I have two instances with two different instance IDs, running in two different Windows services:

        // Windows service 1, instance 1
        properties["quartz.scheduler.instanceName"] = "instanceName1";
        properties["quartz.scheduler.instanceId"] = "instanceID1";

        // Windows service 2, instance 2
        properties["quartz.scheduler.instanceName"] = "instanceName2";
        properties["quartz.scheduler.instanceId"] = "instanceID2";

    In the ADOJobStore I can see that there are two instances. However, when I schedule a simple job in instance1, it gets triggered in instance2 (and vice versa). Looking at the records created in the job store, the scheduled jobs are properly tagged with the expected instance IDs. Any idea why this is happening?

    Read the article

  • ClassNotFoundException error in implementing Bayesian algorithm in Apache Mahout on Hadoop

    - by Shweta
    Hi, I have a problem executing the Bayesian algorithm in Mahout. I built it with Maven and the job file is in the target directory. When I run it from the terminal using Hadoop, I get a ClassNotFoundException. What should be done?

        $HADOOP_HOME/bin/hadoop jar mahout-core-0.3-SNAPSHOT.job org.apache.mahout.classifier.bayes.mapreduce.bayes.bayesdriver -i test -o output

        Exception in thread "main" java.lang.ClassNotFoundException: org.apache.mahout.classifier.bayes.mapreduce.bayes.bayesdriver
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:247)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
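
    One thing worth checking: Java class names are case-sensitive, and the Mahout Bayes driver class is conventionally capitalized as BayesDriver rather than bayesdriver. If that is the cause here (a guess, not confirmed by the question), the invocation would use the same jar and paths with only the class name changed:

        $HADOOP_HOME/bin/hadoop jar mahout-core-0.3-SNAPSHOT.job org.apache.mahout.classifier.bayes.mapreduce.bayes.BayesDriver -i test -o output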

    Read the article

  • Using wix3 SqlScript to run generated temporary sql-script files.

    - by leiflundgren
    I am starting to write an installer which will use the SqlScript element. That element takes a reference to the Binary table to determine which script to run, but I would like to generate the script dynamically during the installation. I can see three possibilities:

    1. Somehow get SqlScript to read its data from a file rather than a Binary entry.
    2. Inject my generated script into the Binary table.
    3. Use SqlString, which would mean placing some rather long strings into Properties, but I guess that shouldn't really be a problem.

    Any advice? Regards, Leif. (My reason, should anyone be interested, is that the database should have a job set up that calls an installed exe file. I prefer to create the job using a SQL script, and the path of that file is not known until InstallDir has been chosen.)

    Read the article

  • Lawler's Algorithm Implementation Assistance

    - by Richard Knop
    Here is my implementation of Lawler's algorithm in PHP (I know... but I'm used to it):

        <?php
        $jobs = array(1, 2, 3, 4, 5, 6);
        $jobsSubset = array(2, 5, 6);
        $n = count($jobs);
        $processingTimes = array(2, 3, 4, 3, 2, 1);
        $dueDates = array(3, 15, 9, 7, 11, 20);
        $optimalSchedule = array();
        foreach ($jobs as $j) {
            $optimalSchedule[] = 0;
        }
        $dicreasedCardinality = array();
        for ($i = $n; $i >= 1; $i--) {
            $x = 0;
            $max = 0;
            // loop through all jobs
            for ($j = 0; $j < $i; $j++) {
                // ignore if $j already is in the $dicreasedCardinality array
                if (false === in_array($j, $dicreasedCardinality)) {
                    // if the job has no successor in $jobsSubset
                    if (false === isset($jobs[$j+1]) || false === in_array($jobs[$j+1], $jobsSubset)) {
                        // here I find an array index of a job with the maximum due date
                        // amongst jobs with no successor in $jobsSubset
                        if ($x < $dueDates[$j]) {
                            $x = $dueDates[$j];
                            $max = $j;
                        }
                    }
                }
            }
            // move the job at the end of $optimalSchedule
            $optimalSchedule[$i-1] = $jobs[$max];
            // decrease the cardinality of $jobs
            $dicreasedCardinality[] = $max;
        }
        print_r($optimalSchedule);

    Now the above returns an optimal schedule like this:

        Array
        (
            [0] => 1
            [1] => 1
            [2] => 1
            [3] => 3
            [4] => 2
            [5] => 6
        )

    Which doesn't seem right to me. The problem might be with my implementation of the algorithm, because I am not sure I understand it correctly. I used this source to implement it: http://www.google.com/books?id=aSiBs6PDm9AC&pg=PA166&dq=lawler%27s+algorithm+code&lr=&hl=sk&cd=4#v=onepage&q=&f=false The description there is a little confusing; for example, I didn't quite get how the subset D is defined (I guess it is arbitrary). Could anyone help me out with this? I have been trying to find sources with a simpler explanation of the algorithm, but all the sources I found were even more complicated (with math proofs and such), so I am stuck with the link above. Yes, this is homework, if it wasn't obvious. I still have a few weeks to crack this, but I have already spent a few days trying to understand exactly how this algorithm works, with no success, so I don't think I will get any brighter during that time.
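
    For comparison, here is a minimal sketch of Lawler's rule in Java. It is not a translation of the PHP above; the precedence relation is an assumed example, and the rule is the textbook one for minimizing maximum lateness: at each step, among the jobs whose successors have all already been scheduled, place last the one with the largest due date.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class LawlerSketch {
            public static void main(String[] args) {
                // Example data (assumed): processing times and due dates for jobs 0..5.
                int[] p = {2, 3, 4, 3, 2, 1};
                int[] d = {3, 15, 9, 7, 11, 20};

                // successors.get(j) lists the jobs that must run after job j (assumed precedence).
                List<List<Integer>> successors = new ArrayList<List<Integer>>();
                for (int j = 0; j < p.length; j++) successors.add(new ArrayList<Integer>());
                successors.get(0).add(2); // example: job 0 must precede job 2

                int n = p.length;
                boolean[] scheduled = new boolean[n];
                int[] schedule = new int[n];

                // Fill the schedule from the last position to the first.
                for (int pos = n - 1; pos >= 0; pos--) {
                    int best = -1;
                    for (int j = 0; j < n; j++) {
                        if (scheduled[j]) continue;
                        // Eligible only if every successor of j is already scheduled.
                        boolean eligible = true;
                        for (int s : successors.get(j)) {
                            if (!scheduled[s]) { eligible = false; break; }
                        }
                        if (!eligible) continue;
                        // For maximum lateness, the cheapest job to finish last is the one
                        // with the largest due date among the eligible jobs.
                        if (best == -1 || d[j] > d[best]) best = j;
                    }
                    schedule[pos] = best;
                    scheduled[best] = true;
                }
                System.out.println(Arrays.toString(schedule));
            }
        }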

    Read the article

  • MySQL Binary Storage using BLOB VS OS File System: large files, large quantities, large problems.

    - by Quantico773
    Hi guys, versions I am running (basically the latest of everything):

        PHP: 5.3.1
        MySQL: 5.1.41
        Apache: 2.2.14
        OS: CentOS (latest)

    Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to, jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers, including Windows 32-bit, CentOS and Mac, among others. Some files are also stored on employees' desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets.

    Now, because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search for and locate the correct document(s) effectively. For this reason ALL of these files have to be digitised (if they aren't already) and correlated into some sort of order for searching and accessing.

    As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes customer profile management, order and job tracking tools, job/sale creation and management modules, etc., and at the moment any file that is needed at a customer profile level (driver's licence, credit authority, etc.) or at a job/sale level (contracts, voice signatures, etc.) can be uploaded to the server, where it sits in a parent/child hierarchy structure, just like Windows Explorer or any other typical file management model. The structure looks like this:

        drivers_license
        |- DL_123.jpg
        voice_signatures
        |- VS_123.wav
        |- VS_4567.wav
        contracts

    So the files are uploaded using PHP and Apache, and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is:

        TABLE: FileUploads
        FileID
        CustomerID (the customer id that the file belongs to; they all have this.)
        JobID/SaleID (the id of the job/sale associated, if any.)
        FileSize
        FileType
        UploadedDateTime
        UploadedBy
        FilePath (the directory path the file is stored in.)
        FileName (current file name of the uploaded file, a combination of CustomerID and JobID/SaleID if applicable.)
        FileDescription
        OriginalFileName (original name of the source file when uploaded, including extension.)

    So as you can see, the file is linked to the database by the file name. When I want to provide a customer's files for download to a user, all I have to do is run

        SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;

    and this will output all the file details I require; with the FilePath and FileName I can provide the link for download:

        http... server / FilePath / FileName

    There are a number of problems with this method:

    - Storing files in this "database unconscious" environment means data integrity is not kept. If a record is deleted, the file may not be deleted as well, or vice versa.
    - Files are strewn all over the place: different servers, computers, etc.
    - The file name is the ONLY thing matching the binary to the database and to the customer profile and customer records.
    - etc., etc. There are so many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . Also there is an interesting article here too: sietch.net/ViewNewsItem.aspx?NewsItemID=124 .

    SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this.

    I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely. The article at dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64 KB chunks, storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea since it alleviates pressure on the server's memory; instead of loading an entire 100 MB file into RAM and then sending it to the client, it is done 64 KB at a time. I have tried this (and updated his scripts) and it is totally successful, in a very small frame of testing.

    So if you agree that this method is a viable, stable and robust long-term option for storing moderately large files (1 KB to a couple of hundred megs), and large quantities of these files, let me know what other considerations or ideas you have.

    Also, I am considering getting a current "file management" PHP script that provides an interface for managing files stored in the file system, and converting it to manage files stored in the database. If there is already any software out there that does this, please let me know.

    I guess there are many questions I could ask, and all the information is up there ^^ so please, discuss all aspects of this and we can pass ideas back and forth and teach each other. Cheers, Quantico773
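
    The chunked-streaming idea mentioned above is language-agnostic. As an illustration only (the question's stack is PHP; this is just the same pattern sketched in Java/JDBC, and the table and column names are assumptions), the server can copy a BLOB to the response in fixed-size buffers instead of loading the whole file into memory:

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class BlobStreamSketch {
            // Streams one stored file to 'out' (e.g. a servlet response stream) in 64 KB chunks.
            // The table and column names (file_uploads, file_data, file_id) are made up for the sketch.
            public static void streamFile(long fileId, OutputStream out) throws Exception {
                try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost/crm", "user", "password");
                     PreparedStatement ps = con.prepareStatement(
                         "SELECT file_data FROM file_uploads WHERE file_id = ?")) {
                    ps.setLong(1, fileId);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (!rs.next()) return;                  // no such file
                        try (InputStream in = rs.getBinaryStream("file_data")) {
                            byte[] buffer = new byte[64 * 1024]; // 64 KB at a time
                            int read;
                            while ((read = in.read(buffer)) != -1) {
                                out.write(buffer, 0, read);      // never holds the whole BLOB in RAM
                            }
                        }
                    }
                }
            }
        }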

    Read the article

  • Exception while running Quartz Scheduler program

    - by Sunny Mate
    Hi, I am getting the following exception while running my Quartz Scheduler program. Below is the exception trace:

        Mar 26, 2010 2:54:24 PM org.quartz.core.QuartzScheduler start
        INFO: Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
        Exception in thread "main" java.lang.IllegalArgumentException: Job class must implement the Job interface.
            at org.quartz.JobDetail.setJobClass(JobDetail.java:291)
            at org.quartz.JobDetail.<init>(JobDetail.java:138)
            at com.Quarrtz.RanchSchedule.main(RanchSchedule.java:18)

    I have included Quartz-1.7.2.jar and Quartz-all-1.7.2.jar in my classpath, along with commons-logging 1.1.jar, and I am using JDK 6. This is the first example I copied and pasted from the JavaRanch page http://www.javaranch.com/journal/200711/combining_spring_and_quartz.html Any help please? Thanks in advance, Sunny Mate
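
    For reference, the error means the class passed to JobDetail does not implement org.quartz.Job. A minimal Quartz 1.x job looks roughly like this (the class name and message are made up for the sketch):

        import org.quartz.Job;
        import org.quartz.JobExecutionContext;
        import org.quartz.JobExecutionException;

        // The class handed to JobDetail must implement org.quartz.Job,
        // not just happen to have a similarly named execute() method.
        public class RanchJob implements Job {
            public void execute(JobExecutionContext context) throws JobExecutionException {
                System.out.println("RanchJob fired at " + context.getFireTime());
            }
        }

    The JobDetail would then be built along the lines of new JobDetail("ranchJob", Scheduler.DEFAULT_GROUP, RanchJob.class).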

    Read the article

  • Tibco Unit Testing tools

    - by mezoid
    Does anyone know what unit testing tools are available when developing Tibco processes? In the next few months I'll be working on a Tibco project, and I'm trying to find any existing unit testing frameworks that might make it easier to build with a TDD approach. Thus far, the only one I've been able to locate is called BWUnit. It seems OK, but it's currently in beta and it's commercial software. If possible I'd like to use an open source tool, but as long as it does a good job I'd be happy. So does anyone know of any other unit testing tools for Tibco development?

    Read the article

  • Cron syntax with Java EE 5?

    - by marabol
    Timer tasks in Java EE 5 are not very comfortable. Is there any utility to configure a timer with cron syntax like "0 20 20 * * "? I wonder whether it would be a good idea to use Quartz inside a (clustered) Java EE application. According to http://www.prozesse-und-systeme.de/serverClustering.html (German page) there are limits with Quartz and Java EE clustering:

    - JDBC must be used as the job store for Quartz.
    - Only Quartz instances belonging to the cluster are allowed to use this JDBC job store.
    - All cluster nodes must be synchronized to the split second.
    - All cluster nodes must use the same quartz.properties file.

    I would prefer an easier way to configure the timer service, rather than a scheduler that is not managed by Java EE.
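
    For what it's worth, the cron-style syntax quoted above is the one Quartz itself uses. A minimal Quartz 1.x setup with a CronTrigger looks roughly like this (the job class, the names, and the completed six-field cron expression are illustrative assumptions, not taken from the question):

        import org.quartz.CronTrigger;
        import org.quartz.Job;
        import org.quartz.JobDetail;
        import org.quartz.JobExecutionContext;
        import org.quartz.JobExecutionException;
        import org.quartz.Scheduler;
        import org.quartz.impl.StdSchedulerFactory;

        public class CronTimerSketch {

            // Trivial job used only to show the wiring.
            public static class ReportJob implements Job {
                public void execute(JobExecutionContext context) throws JobExecutionException {
                    System.out.println("Report job fired");
                }
            }

            public static void main(String[] args) throws Exception {
                Scheduler scheduler = new StdSchedulerFactory().getScheduler();
                JobDetail job = new JobDetail("reportJob", Scheduler.DEFAULT_GROUP, ReportJob.class);
                // Fields: seconds, minutes, hours, day-of-month, month, day-of-week.
                // This expression fires every day at 20:20:00.
                CronTrigger trigger = new CronTrigger("reportTrigger", Scheduler.DEFAULT_GROUP,
                                                      "0 20 20 * * ?");
                scheduler.scheduleJob(job, trigger);
                scheduler.start();
            }
        }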

    Read the article

  • How to disable Events for a while in Javascript or jQuery

    - by Tarik
    Hello, I am wondering if you know of different approaches to disabling an event for a while. Let me elaborate: let's say I have a div or button which has a subscriber to its onclick event. To prevent a double click while the methods are doing some Ajax work, these are some of the ways it can be done:

    - Disable the button until the method finishes its job.
    - Unbind the handler until the method finishes its job, and then bind it again.
    - Use some kind of flag, such as a boolean, so the method cannot run more than once.

    So are there any other ways, maybe some JavaScript or jQuery tricks, that are more efficient and better practice? Thanks in advance.

    Read the article

  • MySQL Student database beginner SQL employment expectations

    - by sammysmall
    Background: student pursuing a BSIT degree, asking about employer expectations at entry level. I would like to solicit opinions from this forum as to what your professional expectations are regarding an entry-level position working in the database realm. I see many job opportunities that require a minimum of 2 or more years of experience; how does one go about obtaining this experience? I have tried very hard to maintain a minimum 3.70+ GPA, but I have ZERO work experience in this field. I spend non-school time working in VB and SQL to try to increase my proficiency. I have considered postponing my job search until I obtain certifications from Brainbench etc. Any criticism and/or advice is welcomed. Again, I have no experience in databases other than undergrad class work. Thank you for your time, sammysmall

    Read the article

  • How to download images in playframework jobs?

    - by MrROY
    I have a Play framework Job class like this:

        public class ImageDownloader extends Job {

            private String[] urls;
            private String dir;

            public ImageDownloader() {}

            public ImageDownloader(String[] urls, String dir) {
                this.urls = urls;
                this.dir = dir;
            }

            @Override
            public void doJob() throws Exception {
                if (urls != null && urls.length > 0) {
                    for (int i = 0; i < urls.length; i++) {
                        String url = urls[i];
                        // Downloading
                    }
                }
            }
        }

    Play (1.2.4) has lots of amazing tools to make things easy, so I wonder whether there's a way to make the downloading easy and beautiful in Play?
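
    A minimal sketch of what the loop inside doJob could do using Play 1.2's bundled WS helper; the file naming and the assumption that 'dir' already exists are mine, not part of the question (imports needed: play.libs.WS and java.io.*):

        for (String url : urls) {
            WS.HttpResponse response = WS.url(url).get();           // synchronous GET via Play's WS helper
            String name = url.substring(url.lastIndexOf('/') + 1);  // naive file name taken from the URL
            InputStream in = response.getStream();
            OutputStream out = new FileOutputStream(new File(dir, name));
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } finally {
                out.close();
                in.close();
            }
        }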

    Read the article

  • Query on simple C++ threadpool implementation

    - by ticketman
    Stackoverflow has been a tremendous help to me and I'd to give something back to the community. I have been implementing a simple threadpool using the tinythread C++ portable thread library, using what I have learnt from Stackoverflow. I am new to thread programming, so not that comfortable with mutexes, etc. I have a question best asked after presenting the code (which runs quite well under Linux): // ThreadPool.h class ThreadPool { public: ThreadPool(); ~ThreadPool(); // Creates a pool of threads and gets them ready to be used void CreateThreads(int numOfThreads); // Assigns a job to a thread in the pool, but doesn't start the job // Each SubmitJob call will use up one thread of the pool. // This operation can only be undone by calling StartJobs and // then waiting for the jobs to complete. On completion, // new jobs may be submitted. void SubmitJob( void (*workFunc)(void *), void *workData ); // Begins execution of all the jobs in the pool. void StartJobs(); // Waits until all jobs have completed. // The wait will block the caller. // On completion, new jobs may be submitted. void WaitForJobsToComplete(); private: enum typeOfWorkEnum { e_work, e_quit }; class ThreadData { public: bool ready; // thread has been created and is ready for work bool haveWorkToDo; typeOfWorkEnum typeOfWork; // Pointer to the work function each thread has to call. void (*workFunc)(void *); // Pointer to work data void *workData; ThreadData() : ready(false), haveWorkToDo(false) { }; }; struct ThreadArgStruct { ThreadPool *threadPoolInstance; int threadId; }; // Data for each thread ThreadData *m_ThreadData; ThreadPool(ThreadPool const&); // copy ctor hidden ThreadPool& operator=(ThreadPool const&); // assign op. hidden // Static function that provides the function pointer that a thread can call // By including the ThreadPool instance in the void * parameter, // we can use it to access other data and methods in the ThreadPool instance. static void ThreadFuncWrapper(void *arg) { ThreadArgStruct *threadArg = static_cast<ThreadArgStruct *>(arg); threadArg->threadPoolInstance->ThreadFunc(threadArg->threadId); } // The function each thread calls void ThreadFunc( int threadId ); // Called by the thread pool destructor void DestroyThreadPool(); // Total number of threads available // (fixed on creation of thread pool) int m_numOfThreads; int m_NumOfThreadsDoingWork; int m_NumOfThreadsGivenJobs; // List of threads std::vector<tthread::thread *> m_ThreadList; // Condition variable to signal each thread has been created and executing tthread::mutex m_ThreadReady_mutex; tthread::condition_variable m_ThreadReady_condvar; // Condition variable to signal each thread to start work tthread::mutex m_WorkToDo_mutex; tthread::condition_variable m_WorkToDo_condvar; // Condition variable to signal the main thread that // all threads in the pool have completed their work tthread::mutex m_WorkCompleted_mutex; tthread::condition_variable m_WorkCompleted_condvar; }; cpp file: // // ThreadPool.cpp // #include "ThreadPool.h" // This is the thread function for each thread. // All threads remain in this function until // they are asked to quit, which only happens // when terminating the thread pool. 
void ThreadPool::ThreadFunc( int threadId ) { ThreadData *myThreadData = &m_ThreadData[threadId]; std::cout << "Hello world: Thread " << threadId << std::endl; // Signal that this thread is ready m_ThreadReady_mutex.lock(); myThreadData->ready = true; m_ThreadReady_condvar.notify_one(); // notify the main thread m_ThreadReady_mutex.unlock(); while(true) { //tthread::lock_guard<tthread::mutex> guard(m); m_WorkToDo_mutex.lock(); while(!myThreadData->haveWorkToDo) // check for work to do m_WorkToDo_condvar.wait(m_WorkToDo_mutex); // if no work, wait here myThreadData->haveWorkToDo = false; // need to do this before unlocking the mutex m_WorkToDo_mutex.unlock(); // Do the work switch(myThreadData->typeOfWork) { case e_work: std::cout << "Thread " << threadId << ": Woken with work to do\n"; // Do work myThreadData->workFunc(myThreadData->workData); std::cout << "#Thread " << threadId << ": Work is completed\n"; break; case e_quit: std::cout << "Thread " << threadId << ": Asked to quit\n"; return; // ends the thread } // Now to signal the main thread that my work is completed m_WorkCompleted_mutex.lock(); m_NumOfThreadsDoingWork--; // Unsure if this 'if' would make the program more efficient // if(NumOfThreadsDoingWork == 0) m_WorkCompleted_condvar.notify_one(); // notify the main thread m_WorkCompleted_mutex.unlock(); } } ThreadPool::ThreadPool() { m_numOfThreads = 0; m_NumOfThreadsDoingWork = 0; m_NumOfThreadsGivenJobs = 0; } ThreadPool::~ThreadPool() { if(m_numOfThreads) { DestroyThreadPool(); delete [] m_ThreadData; } } void ThreadPool::CreateThreads(int numOfThreads) { // Check a thread pool has already been created if(m_numOfThreads > 0) return; m_NumOfThreadsGivenJobs = 0; m_NumOfThreadsDoingWork = 0; m_numOfThreads = numOfThreads; m_ThreadData = new ThreadData[m_numOfThreads]; ThreadArgStruct threadArg; for(int i=0; i<m_numOfThreads; ++i) { threadArg.threadId = i; threadArg.threadPoolInstance = this; // Creates the thread and save in a list so we can destroy it later m_ThreadList.push_back( new tthread::thread( ThreadFuncWrapper, (void *)&threadArg ) ); // It takes a little time for a thread to get established. // Best wait until it gets established before creating the next thread. 
m_ThreadReady_mutex.lock(); while(!m_ThreadData[i].ready) // Check if thread is ready m_ThreadReady_condvar.wait(m_ThreadReady_mutex); // If not, wait here m_ThreadReady_mutex.unlock(); } } // Adds a job to the batch, but doesn't start the job void ThreadPool::SubmitJob(void (*workFunc)(void *), void *workData) { // Check that the thread pool has been created if(!m_numOfThreads) return; if(m_NumOfThreadsGivenJobs >= m_numOfThreads) return; m_ThreadData[m_NumOfThreadsGivenJobs].workFunc = workFunc; m_ThreadData[m_NumOfThreadsGivenJobs].workData = workData; std::cout << "Submitted job " << m_NumOfThreadsGivenJobs << std::endl; m_NumOfThreadsGivenJobs++; } void ThreadPool::StartJobs() { // Check that the thread pool has been created // and some jobs have been assigned if(!m_numOfThreads || !m_NumOfThreadsGivenJobs) return; // Set 'haveworkToDo' flag for all threads m_WorkToDo_mutex.lock(); for(int i=0; i<m_NumOfThreadsGivenJobs; ++i) m_ThreadData[i].haveWorkToDo = true; m_NumOfThreadsDoingWork = m_NumOfThreadsGivenJobs; // Reset this counter so we can resubmit jobs later m_NumOfThreadsGivenJobs = 0; // Notify all threads they have work to do m_WorkToDo_condvar.notify_all(); m_WorkToDo_mutex.unlock(); } void ThreadPool::WaitForJobsToComplete() { // Check that a thread pool has been created if(!m_numOfThreads) return; m_WorkCompleted_mutex.lock(); while(m_NumOfThreadsDoingWork > 0) // Check if all threads have completed their work m_WorkCompleted_condvar.wait(m_WorkCompleted_mutex); // If not, wait here m_WorkCompleted_mutex.unlock(); } void ThreadPool::DestroyThreadPool() { std::cout << "Ask threads to quit\n"; m_WorkToDo_mutex.lock(); for(int i=0; i<m_numOfThreads; ++i) { m_ThreadData[i].haveWorkToDo = true; m_ThreadData[i].typeOfWork = e_quit; } m_WorkToDo_condvar.notify_all(); m_WorkToDo_mutex.unlock(); // As each thread terminates, catch them here for(int i=0; i<m_numOfThreads; ++i) { tthread::thread *t = m_ThreadList[i]; // Wait for thread to complete t->join(); } m_numOfThreads = 0; } Example of usage: (this calculates pi-squared/6) struct CalculationDataStruct { int inputVal; double outputVal; }; void LongCalculation( void *theSums ) { CalculationDataStruct *sums = (CalculationDataStruct *)theSums; int terms = sums->inputVal; double sum; for(int i=1; i<terms; i++) sum += 1.0/( double(i)*double(i) ); sums->outputVal = sum; } int main(int argc, char** argv) { int numThreads = 10; // Create pool ThreadPool threadPool; threadPool.CreateThreads(numThreads); // Create thread workspace CalculationDataStruct sums[numThreads]; // Set up jobs for(int i=0; i<numThreads; i++) { sums[i].inputVal = 3000*(i+1); threadPool.SubmitJob(LongCalculation, &sums[i]); } // Run the jobs threadPool.StartJobs(); threadPool.WaitForJobsToComplete(); // Print results for(int i=0; i<numThreads; i++) std::cout << "Sum of " << sums[i].inputVal << " terms is " << sums[i].outputVal << std::endl; return 0; } Question: In the ThreadPool::ThreadFunc method, would better performance be obtained if the following if statement if(NumOfThreadsDoingWork == 0) was included? Also, I'd be grateful of criticisms and ways to improve the code. At the same time, I hope the code is of use to others.

    Read the article

  • Access / Excel crossover: Should I attach spreadsheets to records

    - by glinch
    Hi, I currently have an archaic system of client records that I am trying to improve. For each client I have a directory, and in that directory I include a directory for each job. Each job has a spreadsheet that I use to store the client's personal details and to run calculations and costings specific to their needs. In turn I also have Word documents that are linked to their spreadsheet and update automatically. The spreadsheet is also exported as a PDF. I am trying to build a database of customer records in Access, which is straightforward enough. For each new customer I need to be able to add the appropriate spreadsheet to their records, update the spreadsheet with their details, and use the spreadsheet to calculate their costings, etc. I do not want to enter the same information repeatedly, and would like a cohesive system with data being passed between Access and Excel. Should this be easy enough to do with the two packages? Thanks in advance, Noel

    Read the article

  • Are government programming jobs good?

    - by Absolute0
    I am a passionate software developer and greatly enjoy programming. However, I was recently contacted regarding a developer lead position for a government job in NYC with the fire department. The pay is pretty good, and I would assume the position has good job security and stability. But I am hesitant to even go for an interview, as it seems like an exaggerated version of Office Space, with a lot of bureaucracy and mindless paperwork. The description is as follows:

        The Lead Applications Developer, supporting the Programming Group, will be responsible for all phases of the system development life cycle including performing system analysis, requirements definition, database design, preparation of scopes of work, and development of project plans. Supervise programming staff and manage projects involving the design, implementation, maintenance, and enhancement of complex Oracle based user applications using Oracle Development tools. Applications will be deployed using Oracle Application Server utilizing programming languages such as JAVA, JSF, JSP, Oracle ADF, PL/SQL, and XML with J2EE and EJB technology.

    Can anyone with previous government experience share their two cents on this? Thank you.

    Read the article

  • Handling Exceptions for ThreadPoolExecutor

    - by HonorGod
    I have the following code snippet that basically scans through the list of task that needs to be executed and each task is then given to the executor for execution. The JobExecutor intern creates another executor (for doing db stuff...reading and writing data to queue) and completes the task. JobExecutor returns a Future for the tasks submitted. When one of the task fails, I want to gracefully interrupt all the threads and shutdown the executor by catching all the exceptions. What changes do I need to do? public class DataMovingClass { private static final AtomicInteger uniqueId = new AtomicInteger(0); private static final ThreadLocal<Integer> uniqueNumber = new IDGenerator(); ThreadPoolExecutor threadPoolExecutor = null ; private List<Source> sources = new ArrayList<Source>(); private static class IDGenerator extends ThreadLocal<Integer> { @Override public Integer get() { return uniqueId.incrementAndGet(); } } public void init(){ // load sources list } public boolean execute() { boolean succcess = true ; threadPoolExecutor = new ThreadPoolExecutor(10,10, 10, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1024), new ThreadFactory() { public Thread newThread(Runnable r) { Thread t = new Thread(r); t.setName("DataMigration-" + uniqueNumber.get()); return t; }// End method }, new ThreadPoolExecutor.CallerRunsPolicy()); List<Future<Boolean>> result = new ArrayList<Future<Boolean>>(); for (Source source : sources) { result.add(threadPoolExecutor.submit(new JobExecutor(source))); } for (Future<Boolean> jobDone : result) { try { if (!jobDone.get(100000, TimeUnit.SECONDS) && success) { // in case of successful DbWriterClass, we don't need to change // it. success = false; } } catch (Exception ex) { // handle exceptions } } } public class JobExecutor implements Callable<Boolean> { private ThreadPoolExecutor threadPoolExecutor ; Source jobSource ; public SourceJobExecutor(Source source) { this.jobSource = source; threadPoolExecutor = new ThreadPoolExecutor(10,10,10, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1024), new ThreadFactory() { public Thread newThread(Runnable r) { Thread t = new Thread(r); t.setName("Job Executor-" + uniqueNumber.get()); return t; }// End method }, new ThreadPoolExecutor.CallerRunsPolicy()); } public Boolean call() throws Exception { boolean status = true ; System.out.println("Starting Job = " + jobSource.getName()); try { // do the specified task ; }catch (InterruptedException intrEx) { logger.warn("InterruptedException", intrEx); status = false ; } catch(Exception e) { logger.fatal("Exception occurred while executing task "+jobSource.getName(),e); status = false ; } System.out.println("Ending Job = " + jobSource.getName()); return status ; } } }
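
    On the actual question (gracefully interrupting everything once one task fails), a common pattern is to cancel the remaining futures and shut the executor down as soon as one Future reports a failure. The sketch below uses only plain java.util.concurrent types and is not tied to the classes above:

        import java.util.List;
        import java.util.concurrent.ExecutionException;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Future;

        public final class FailFast {
            // Waits on the futures in order; on the first failure, cancels the rest
            // and interrupts anything still running via shutdownNow().
            public static boolean awaitAllOrAbort(List<Future<Boolean>> futures, ExecutorService pool) {
                for (Future<Boolean> f : futures) {
                    try {
                        if (!f.get()) {                  // task finished but reported failure
                            abort(futures, pool);
                            return false;
                        }
                    } catch (ExecutionException e) {     // task threw an exception
                        abort(futures, pool);
                        return false;
                    } catch (InterruptedException e) {   // we were interrupted while waiting
                        Thread.currentThread().interrupt();
                        abort(futures, pool);
                        return false;
                    }
                }
                return true;
            }

            private static void abort(List<Future<Boolean>> futures, ExecutorService pool) {
                for (Future<Boolean> f : futures) {
                    f.cancel(true);                      // interrupt tasks that are still running
                }
                pool.shutdownNow();                      // stop accepting work, interrupt workers
            }
        }

    For the interruption to take effect, the work inside call() has to respond to it, for example by letting InterruptedException propagate or by periodically checking Thread.currentThread().isInterrupted().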

    Read the article

  • Hadoop 0.2: How to read outputs from TextOutputFormat?

    - by S.N
    My reducer class produces output with TextOutputFormat (the default OutputFormat given by Job). I'd like to consume this output after the MapReduce job completes, in order to aggregate it. In addition, I'd like to write out the aggregated information in a form that TextInputFormat can read, so that the output from this process can be consumed by the next iteration of the MapReduce task. Can anyone give me an example of how to write and read with this text format? By the way, the reason I am using the text format, rather than Sequence, is interoperability: the output should be consumable by any software.
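
    TextOutputFormat writes plain key-TAB-value lines into part-* files, so outside of MapReduce they can be read back with ordinary stream APIs. A minimal sketch using the HDFS FileSystem API (the output path is an assumption) could look like this:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class ReadTextOutput {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                FileSystem fs = FileSystem.get(conf);

                // TextOutputFormat produces one part-* file per reducer under the job's output dir.
                for (FileStatus status : fs.listStatus(new Path("/user/sn/output"))) {
                    if (!status.getPath().getName().startsWith("part-")) continue;
                    BufferedReader reader = new BufferedReader(
                            new InputStreamReader(fs.open(status.getPath())));
                    try {
                        String line;
                        while ((line = reader.readLine()) != null) {
                            // Key and value are separated by a tab character by default.
                            String[] kv = line.split("\t", 2);
                            // ... aggregate kv[0] / kv[1] here ...
                        }
                    } finally {
                        reader.close();
                    }
                }
            }
        }

    Writing the aggregated results back out as plain tab-separated lines (for example via fs.create(...)) keeps them readable by TextInputFormat in the next iteration.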

    Read the article

  • Becoming a professional programmer / software engineer

    - by Matt
    This isn't strictly about programming, more about being a programmer, so I'm sorry if its not the right kind of question to ask on this forum (mod, please delete if it isn't) I'm a computer tech in the US Army, and once I'm out I'll have eight years on the job. I'm about to start a degree through an online school (the only way I can get the army to pay for it while I'm still in), and I'm seriously looking at getting a computer science degree. I'm great with computers. I can take one apart and put it back together with my eyes closed. I'm A+ and Network+ certified and I'm getting a couple other CompTIA certs before I get out. I can work Windows as well as anyone on this planet and I'm not terrible with Linux. A job in computers is something I've always wanted. But, aside from being a computer technician, it seems that every job in the field requires programming ability. I like programming as a hobby. I programmed TI BASIC in high school and I'm teaching myself Python, but that's as far as my experience goes. That sort of brings me to my questions: I've always heard that the first language is the most difficult, and once you learn it well then all the others sort of fall into place for you. Is that true? Like, if I spend the next eight months mastering Python, will I pretty much be able to pick up at least fair proficiency in any other OO language within a month of studying it or whatever? How easy is it to burn out? the biggest thing I'm afraid of is just burning out on programming. I can go all day long if I'm programming strictly for my own personal desire, but I can imagine it being really easy to burn out after a few years of programming to deadlines and certain specifications. Especially if its a big project involving a dozen different designers. From what I told you about myself, would I already be qualified to work as a regular technician (geek squad type or maybe running a computer repair shop). Is Python a good base to learn from? I've heard that it makes you hate other languages because they feel more convoluted when learning, but also that its a great beginner language. If you're a professional programmer, did you have any of the same fears? Would you recommend that I stick to computer repair and Python rather than try to get into corporate programming? (just from what you've read in this thread, anyway) Thanks for taking the time to read all this and answer (if you did)

    Read the article

  • Proper way of utilizing gearman with my php application.

    - by luckytaxi
    For now I just want to use Gearman for background processing. For example, I need to email a recipient that they have a private message waiting for them once the sender submits their message into the DB. I assume I can run the worker/client and the server on my primary server, but I have no problem offloading some of the tasks to a different web server. Anyway, my question is how to handle multiple "functions". Let's say I need a job that handles the email portion and a job that handles image manipulation. Can I have multiple functions in one worker? I've followed a couple of examples I found online, but each example only shows one function being initialized. Do I have to start up multiple workers to handle multiple functions?

    Read the article

  • What is the career value in learning ColdFusion?

    - by Jon Cram
    ColdFusion is a language I encounter rather infrequently, however it does turn up from time to time either in job adverts or as .cfm file extensions in URLs. There are possible job opportunities near to where I plan to live for ColdFusion developers. It might be in my interests to have a look at ColdFusion. ColdFusion appears, to me, to be a minority language compared to C#, Java or indeed most popular languages. Don thinks ColdFusion is declining in popularity. Would a ColdFusion position today be more related to the maintenance of legacy code than innovative, creative development, thus less interesting? Is there any long term career value in learning ColdFusion?

    Read the article

  • Will being self-taught limit me?

    - by Isaiah
    I'm 21 and am pretty proficient in HTML/CSS, Python, and JavaScript. I also know my way around Lisp languages and enjoy programming in them. My problem is that I'm entirely self-taught and not quite confident that I could land a job programming, but I really need a job soon, as I've just become a father. I haven't even created a resume yet, because I'm not really sure what to put on it except my lone experience. So I wanted to ask: will being primarily self-taught, with some experience on small projects I've done for a few clients, limit me too much? I mean, I know I need some kind of education, so I've enrolled part time in a community college to work on a degree in computer science, but it's years till then. And if it will limit me a lot, what kind of skills would be good to work on to improve my chances? Thank you

    Read the article

  • notification scheduling question

    - by sims
    Hi Stackers! I'm building an app that needs to send out notifications to users depending on user-definable "notifications". So the notifications are not per event; they are arbitrary. A cron job should query the database and send out emails when it finds an event matching the criteria. This is a scheduling app, so naturally one of the criteria is time. I think I've figured it out, but I'm sure there are better ideas out there, as it seems to be a fairly common thing to do. I think I'll limit the user's choice to "minutes beforehand" for getting notified, as opposed to seconds or hours.

    So I have an eventA and a notificationA. notificationA should be triggered if an event is due within 45 minutes. eventA starts at 17:30, so the user should be notified at 16:45. But the cron job might not run at exactly 00 seconds, so when the difference is not 45 minutes and 0 seconds but, say, 45 minutes and 5 seconds, the notification time is past, the email doesn't get sent, and the user misses the event. Sugar. We should also take into account that the cron job might take a long time to run, so maybe we should only trigger it every 5 minutes. So we need a bigger interval, maybe. So then my guess would be to say:

        if ((eventDueTime - now > notificationTimeValue - interval)
            && (eventDueTime - now < notificationTimeValue + interval))
            sendTheFrikinNotificationAlready();

    It seems kind of risky if there are thousands of notifications to send out. I guess I could make a thread for each notification and then a thread for each event that matches the criteria. That might help. Does that make sense? Any other ideas? Thanks!
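
    A small sketch of this kind of check in Java (the names and the minute-based units are assumptions). It uses a one-sided window, "due within the lead time but not already due at the previous cron run", plus a sent flag, so an imprecisely timed run can neither skip nor repeat a notification:

        import java.util.concurrent.TimeUnit;

        public class NotificationWindowSketch {

            // True if the event falls inside the notification window for this cron run.
            // 'now' and 'eventDueTime' are epoch milliseconds; 'notifyMinutesBefore' is the
            // user's chosen lead time and 'cronIntervalMinutes' is how often the cron job runs.
            static boolean shouldNotify(long now, long eventDueTime,
                                        long notifyMinutesBefore, long cronIntervalMinutes,
                                        boolean alreadySent) {
                long minutesUntilEvent = TimeUnit.MILLISECONDS.toMinutes(eventDueTime - now);
                return !alreadySent
                        && minutesUntilEvent <= notifyMinutesBefore
                        && minutesUntilEvent > notifyMinutesBefore - cronIntervalMinutes;
            }

            public static void main(String[] args) {
                long now = System.currentTimeMillis();
                // Event due in 44 minutes 55 seconds; cron runs every 5 minutes; 45 minutes' notice.
                long eventAt = now + TimeUnit.MINUTES.toMillis(44) + TimeUnit.SECONDS.toMillis(55);
                System.out.println(shouldNotify(now, eventAt, 45, 5, false)); // prints true
            }
        }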

    Read the article

  • Running Awk command on a cluster

    - by alex
    How do you execute a Unix shell command (an awk script, a pipe, etc.) on a cluster in parallel (step 1) and collect the results back at a central node (step 2)?

    - Hadoop seems to be huge overkill with its 600k LOC, and its performance is terrible (it takes minutes just to initialize a job).
    - I don't need shared memory or something like MPI/OpenMP, as I don't need to synchronize or share anything, and I don't need a distributed VM or anything as complex.
    - Google's Sawzall seems to work only with Google's proprietary MapReduce API.
    - Some distributed shell packages I found failed to compile, but there must be a simple way to run a data-centric batch job on a cluster, something as close as possible to the native OS, maybe using Unix RPC calls.
    - I liked rsync's simplicity, but it seems to update remote nodes sequentially, and you can't use it for executing scripts as far as I know.
    - Switching to Plan 9 or some other network-oriented OS looks like another overkill.

    I'm looking for a simple, distributed way to run awk scripts or similar, as close as possible to the data, with minimal initialization overhead, in a nothing-shared, nothing-synchronized fashion. Thanks, Alex

    Read the article
