Search Results

Search found 3070 results on 123 pages for 'cron jobs'.


  • Unable to force Debian to do unattended install... libc6 wants interactive confirm

    - by JD Long
    I'm trying to create a script that forces a Debian Lenny install to install the latest version of CRAN R. During the install it appears libc6 is upgraded, and the install wants interactive confirmation that it's OK to restart three services (mysql, exim4, cron). This process HAS to be unattended, as it runs on Amazon's Elastic MapReduce (EMR) machines, but I'm running out of options. Here's a few things I've tried. This previous question appears to be exactly what I'm looking for, so I set up my install script as follows:

        # set my CRAN repos... yes, I know there's a new convention for where to put these.
        echo "deb http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
        echo "deb-src http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list

        # set the dpkg.cfg options per the previous SuperUser question
        echo "force-confold" | sudo tee -a /etc/dpkg/dpkg.cfg
        echo "force-confdef" | sudo tee -a /etc/dpkg/dpkg.cfg
        export DEBIAN_FRONTEND=noninteractive

        # add key to keyring so it doesn't complain
        gpg --keyserver pgp.mit.edu --recv-key 381BA480
        gpg -a --export 381BA480 > jranke_cran.asc
        sudo apt-key add jranke_cran.asc
        sudo apt-get update

        # install the latest R
        sudo apt-get install --yes --force-yes r-base

    But this script hangs with a request for input. So I tried stopping the services first, using the following script:

        sudo /etc/init.d/mysql stop
        sudo /etc/init.d/exim4 stop
        sudo /etc/init.d/cron stop
        sudo apt-get install --yes --force-yes libc6

    This does not work: the interactive screen comes back, but this time with only cron listed as the service that must be restarted. So is there a way to make libc6 just restart these services with no user input? Or is there a way to stop cron so it does not cause an interactive prompt? Maybe a creative option I've never thought of? Keep in mind that this system is brought up, some Hadoop code is run, and then it's torn down, so I can put up with side effects and bad behavior that we might not want on a production desktop machine or web server.
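
    A sketch of one possible fix, on the assumption that the prompt is libc6's debconf question about restarting services: preseed the answer with debconf-set-selections and pass the conffile options per invocation rather than via dpkg.cfg. The debconf key below is the standard one libc6 uses on Debian; verify it on a test box (e.g. with debconf-show libc6) before relying on it.

        #!/bin/sh
        # Hedged sketch: answer libc6's restart question before it is asked.
        export DEBIAN_FRONTEND=noninteractive
        echo 'libc6 libraries/restart-without-asking boolean true' | sudo debconf-set-selections

        # Pass the conffile options directly to apt-get so they definitely
        # reach dpkg, instead of appending them to /etc/dpkg/dpkg.cfg.
        sudo apt-get -y \
            -o Dpkg::Options::="--force-confdef" \
            -o Dpkg::Options::="--force-confold" \
            install r-base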

    Read the article

  • What exactly is a daemon? (How do I run a root command from an Apache-bound script that runs as the www-data user?)

    - by user224235
    I am trying to run this command from a WSGI script:

        service httpd restart

    The problem is that this command can only be run by root, and Apache uses the www-data user. It has been said the solution is to use a daemon process. I suppose the idea is to write the command to a file that will be executed by a script that runs as root. It's difficult to understand why they would call this a daemon process and try to scare me; perhaps it should have been called a proxy process.

    When I got the idea that this was a proxy process, I thought about adding a line to /var/spool/cron/root so that cron would execute the command for me as root. But of course this means I have to get the system time, add 1 second to it, and write it into that line, and my script demands output instantly.

    So I suppose I need to create a daemon process that works like cron: a bash script that executes the command found in a plain file. But will this daemon process be running a while loop 24/7, every second? Would that not waste resources? It only needs to wake up and check for a command to execute when there is a command to be executed. In PHP and other programming languages, running a while loop when there is nothing to execute could waste server resources. So why should a daemon process constantly be listening? I only want it to listen and execute when it is needed; I do not need a process that is constantly listening.
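
    A common alternative to a hand-rolled daemon, sketched here under the assumption that sudo is installed: grant www-data passwordless rights to exactly one command, then call it synchronously from the WSGI script. The service path is illustrative; adjust it to your distro.

        # /etc/sudoers.d/httpd-restart (edit with visudo) -- a one-line grant:
        #   www-data ALL=(root) NOPASSWD: /sbin/service httpd restart

        import subprocess

        def restart_httpd():
            # Runs as www-data; sudo elevates only this exact command.
            result = subprocess.call(["sudo", "/sbin/service", "httpd", "restart"])
            return result == 0  # True if the restart exited cleanly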

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data is a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Big Data Connectors downloads (here) include:

    - Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0
    - Oracle Loader for Hadoop Release 2.1.0
    - Oracle Data Integrator Companion 11g
    - Oracle R Connector for Hadoop v 2.1

    Oracle Big Data Documentation: The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships.

    - Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB)
    - Integrated Software and Big Data Connectors User's Guide (HTML, PDF)

    Oracle Data Integrator (ODI) Application Adapter for Hadoop: Apache Hadoop is designed to handle and process data that typically comes from non-relational sources, in volumes beyond what relational databases handle. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge; however, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs. Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language for implementing MapReduce jobs. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities:

    - Loading data into Hadoop from the local file system and HDFS
    - Performing validation and transformation of data within Hadoop
    - Loading processed data from Hadoop to an Oracle database for further processing and generating reports

    Oracle Database Loader for Hadoop: Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance.

    Oracle R Connector for Hadoop: Oracle R Connector for Hadoop is a collection of R packages that provide:

    - Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables
    - Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files

    You install and load this package as you would any other R package. Using simple R functions, you can perform tasks such as:

    - Access and transform HDFS data using a Hive-enabled transparency layer
    - Use the R language for writing mappers and reducers
    - Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases
    - Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations

    Oracle SQL Connector for Hadoop Distributed File System: Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats:

    - Data Pump files in HDFS
    - Delimited text files in HDFS
    - Hive tables

    For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS.

    Related Documentation: Cloudera's Distribution Including Apache Hadoop Library (HTML), Oracle R Enterprise (HTML), Oracle NoSQL Database (HTML)

    Recent Blog Posts: Big Data Appliance vs. DIY Price Comparison; Big Data: Architecture Overview; Big Data: Achieve the Impossible in Real-Time; Big Data: Vertical Behavioral Analytics; Big Data: In-Memory MapReduce; Flume and Hive for Log Analytics; Building Workflows in Oozie

    Read the article

  • Threading Overview

    - by ACShorten
    One of the major features of the batch framework is the ability to support multi-threading. The multi-threading support allows a site to increase throughput on an individual batch job by splitting the total workload across multiple individual threads, so that each thread has fine-grained control over a segment of the total data volume at any time. The idea behind the threading is the notion that "many hands make light work": each thread takes a segment of data in parallel and operates on that smaller set. The object identifier allocation algorithm built into the product randomly assigns keys to help ensure an even distribution of records across the threads and to minimize resource and lock contention. The best way to visualize the concept of threading is a "pie" analogy: imagine the total workset for a batch job is a pie. If you split that pie into equal-sized segments, each segment represents an individual thread.

    The concept of threading has advantages and disadvantages:

    - Smaller elapsed runtimes - Jobs that are multi-threaded finish earlier than jobs that are single-threaded. With smaller amounts of work to do, threaded jobs finish earlier. Note: The elapsed runtime of the threads is rarely proportional to the number of threads executed. Even though contention is minimized, some contention does exist for resources, which can adversely affect runtime.
    - Threads can be managed individually - Each thread can be started individually and can also be restarted individually in case of failure. If you need to rerun thread X, then that is the only thread that needs to be resubmitted.
    - Threading can be somewhat dynamic - The number of threads run on any instance can be varied, as the thread number and thread limit are parameters passed to the job at runtime. They can also be configured using the configuration files outlined in this document and the relevant manuals. Note: Threading is not dynamic after the job has been submitted.
    - Failure risk due to data issues with threading is reduced - As mentioned earlier, individual threads can be restarted in case of failure. This limits the risk to the total job if there is a data issue with a particular thread or a group of threads.
    - Number of threads is not infinite - As with any resource, there is a theoretical limit. While the thread limit can be up to 1000 threads, the number of threads you can physically execute will be limited by the CPU and IO resources available to the job at execution time.

    Theoretically, with the object identifiers evenly spread across the threads, the elapsed runtimes of the threads should all be the same: when executing in multiple threads, all the threads should in theory finish at the same time. Whilst this is possible, individual threads may take longer than others for the following reasons:

    - Workloads within the threads are not always the same - Whilst each thread operates on roughly the same number of objects, the amount of processing for each object is not always the same. For example, an account may have a more complex rate that requires more processing, or a meter may have a complex configuration to process. If a thread has a higher proportion of objects with complex processing, it will take longer than a thread with simple processing. The amount of processing is dependent on the configuration of the individual data for the job.
    - Data may be skewed - Even though the object identifier generation algorithm attempts to spread the object identifiers across threads, some jobs use additional factors to select records for processing. If any of those factors exhibit data skew, then certain threads may finish later. For example, if more accounts are allocated to a particular part of a schedule, then threads in that schedule may finish later than other threads.

    Threading is important to the success of individual jobs. For more guidelines and techniques for optimizing threading, refer to Multi-Threading Guidelines in the Batch Best Practices for Oracle Utilities Application Framework based products whitepaper (Doc Id: 836362.1), available from My Oracle Support.
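
    Purely as a conceptual illustration (this is not the framework's actual API), a thread number/thread limit pair can partition a keyspace the way the pie analogy describes:

        // Illustrative only: shows how a (threadNumber, threadLimit) pair carves
        // an id range into equal "pie slices"; the framework does this internally.
        public class ThreadSlice {
            public static boolean belongsTo(long objectId, int threadNumber, int threadLimit) {
                // Ids spread evenly across threadLimit slices; thread numbers are 1-based.
                return (objectId % threadLimit) == (threadNumber - 1);
            }

            public static void main(String[] args) {
                int threadLimit = 4; // job submitted with 4 threads
                for (long id = 100; id < 108; id++) {
                    for (int t = 1; t <= threadLimit; t++) {
                        if (belongsTo(id, t, threadLimit)) {
                            System.out.println("object " + id + " -> thread " + t);
                        }
                    }
                }
            }
        }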

    Read the article

  • iPhone Xcode - different levels of drill-down in tab bar controller - nav controller - table view

    - by Frames84
    I have this type of data, in categories:

        today news - news item
        jobs news - news item
        general news - sub category news - news item

    So I have followed the very good tutorial 'Building an iPhone App Combining Tab Bar, Navigation and Tab' (http://www.youtube.com/watch?v=LBnPfAtswgw) and all is good with the first two, 'todays news' and 'jobs news', but I can't figure out the best method for implementing the sub category table view. Do I use the same table view but reload it with the sub categories, then somehow work out when one is clicked that it's a sub category, or is there a better method? This is how I'm set up in MainWindow.xib:

        - Tab Bar Controller
        - - Navigation Controller (today)
        - - - Table View (list of today news)
        - - Navigation Controller (jobs)
        - - - Table View (list of jobs news)
        - - Navigation Controller (general)
        - - - Table View (general sub cat) // how to implement this Table View
        - - - - Table View (list of sub cat news)

    Thanks for your time.
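
    One hedged approach (class and property names below are illustrative, not from the tutorial): reuse a single table view controller class and, on selection, push a fresh instance configured with either a sub-category's children or a news item:

        // Sketch only: NewsListViewController, SubCategory and showNewsItem: are
        // hypothetical names; the navigation calls are standard UIKit.
        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            id item = [self.items objectAtIndex:indexPath.row];
            if ([item isKindOfClass:[SubCategory class]]) {
                // Same controller class, loaded with the sub-category's children.
                NewsListViewController *child = [[NewsListViewController alloc] initWithStyle:UITableViewStylePlain];
                child.items = [item children]; // sub-category news items
                [self.navigationController pushViewController:child animated:YES];
                [child release]; // pre-ARC, as in the tutorial's era
            } else {
                // Leaf node: show the news item detail view instead.
                [self showNewsItem:item];
            }
        }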

    Read the article

  • Get-WMIObject fails when run AsJob

    - by codepoke
    It's a simple question, but it's stumped me.

        $cred = Get-Credential
        $jobs = @()
        $jobs += Get-WmiObject `
            -Authentication 6 `
            -ComputerName 'serverName' `
            -Query 'Select * From IISWebServerSetting' `
            -Namespace 'root/microsoftiisv2' `
            -EnableAllPrivileges `
            -Credential $cred `
            -Impersonation 4 `
            -AsJob

        $joblist = Wait-Job -Job $jobs -Timeout 60

        foreach ($job in $jobs) {
            if ($job.State -eq "Completed") {
                $app = Receive-Job -Job $job
                $app
            }
            else {
                ("Job not completed: " + $job.Name + "@" + $job.State + ". Reason:" + $job.ChildJobs[0].JobStateInfo.Reason)
                Remove-Job -Job $job -Force
            }
        }

    The query succeeds when run directly and fails when run -AsJob: Reason:System.UnauthorizedAccessException: Access is denied. I've jiggered with -Impersonation, -Credentials, -Authority, and -EnableAllPrivileges to no useful effect. It appears I'm overlooking something fundamental. Why is my PowerShell prompt allowed to connect to the remote server, but my child process denied?

    Read the article

  • Using will_paginate with AJAX live search with jQuery in Rails

    - by Mark Richman
    I am using will_paginate to successfully page through records. I am also using AJAX live search via jQuery to update my results div. No problem so far. The issue I have is when trying to paginate through those live search results: I simply get "Page is loading..." with no div update. Am I missing something fundamental?

        # index.html.erb
        <form id="searchform" accept-charset="utf-8" method="get" action="/search">
          Search: <input id="search" name="search" type="text" autocomplete="off" title="Search location, company, description..." />
          <%= image_tag("spinner.gif", :id => "spinner", :style =>"display: none;" ) %>
        </form>

        # JobsController#search
        def search
          if params[:search].nil?
            @jobs = Job.paginate :page => params[:page], :order => "created_at desc"
          elsif params[:search] and request.xhr?
            @jobs = Job.search params[:search], params[:page]
          end
          render :partial => "jobs", :layout => false, :locals => { :jobs => @jobs }
        end

        # Job#search
        def self.search(search, page)
          logger.debug "Job.paginate #{search}, #{page}"
          paginate :per_page => @@per_page, :page => page,
                   :conditions => ["description LIKE ? or title LIKE ? or company LIKE ?",
                                   "%#{search}%", "%#{search}%", "%#{search}%"],
                   :order => 'created_at DESC'
        end

        # search.js
        $(document).ready(function(){
          $("#search").keyup(function() {
            $("#spinner").show();                  // show the spinner
            var form = $(this).parents("form");    // grab the form wrapping the search bar
            var url = form.attr("action");         // grab the URL from the form's action value
            var formData = form.serialize();       // grab the data in the form
            $.get(url, formData, function(html) {  // AJAX get; the trailing function runs on success
              $("#spinner").hide();                // hide the spinner
              $("#jobs").html(html);               // replace the results div with the returned partial
            });
          });
        });
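
    One hedged fix, assuming will_paginate renders its links inside a div with the default "pagination" class and preserves the search param in each href: intercept the clicks and reissue them as the same AJAX GET. jQuery's .live() is used because the links are re-rendered with every search; on jQuery 1.7+ you would use .on() with delegation instead.

        // Sketch: hijack will_paginate's links so page changes go through AJAX too.
        $(".pagination a").live("click", function() {
            $("#spinner").show();
            $.get(this.href, function(html) { // href already carries ?search=...&page=N
                $("#spinner").hide();
                $("#jobs").html(html);        // swap in the new page of results
            });
            return false;                     // suppress the normal full-page load
        });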

    Read the article

  • Why does Celery work in Python shell, but not in my Django views? (import problem)

    - by TIMEX
    I installed Celery (latest stable version). I have a directory called /home/myuser/fable/jobs. Inside this directory, I have a file called tasks.py:

        from celery.decorators import task
        from celery.task import Task

        class Submitter(Task):
            def run(self, post, **kwargs):
                return "Yes, it works!!!!!!"

    Inside this directory, I also have a file called celeryconfig.py:

        BROKER_HOST = "localhost"
        BROKER_PORT = 5672
        BROKER_USER = "abc"
        BROKER_PASSWORD = "xyz"
        BROKER_VHOST = "fablemq"
        CELERY_RESULT_BACKEND = "amqp"
        CELERY_IMPORTS = ("tasks", )

    In my /etc/profile, I have these set as my PYTHONPATH:

        PYTHONPATH=/home/myuser/fable:/home/myuser/fable/jobs

    So I run my Celery worker using the console ($ celeryd --loglevel=INFO), and I try it out. I open the Python console and import the tasks. Then, I run the Submitter:

        >>> import fable.jobs.tasks as tasks
        >>> s = tasks.Submitter()
        >>> s.delay("abc")
        <AsyncResult: d70d9732-fb07-4cca-82be-d7912124a987>

    Everything works, as you can see in my console:

        [2011-01-09 17:30:05,766: INFO/MainProcess] Task tasks.Submitter[d70d9732-fb07-4cca-82be-d7912124a987] succeeded in 0.0398268699646s:

    But when I go into my Django's views.py and run the exact 3 lines of code as above, I get this:

        [2011-01-09 17:25:20,298: ERROR/MainProcess] Unknown task ignored: "Task of kind 'fable.jobs.tasks.Submitter' is not registered, please make sure it's imported.": {'retries': 0, 'task': 'fable.jobs.tasks.Submitter', 'args': ('abc',), 'expires': None, 'eta': None, 'kwargs': {}, 'id': 'eb5c65b4-f352-45c6-96f1-05d3a5329d53'}
        Traceback (most recent call last):
          File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/worker/listener.py", line 321, in receive_message
            eventer=self.event_dispatcher)
          File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/worker/job.py", line 299, in from_message
            eta=eta, expires=expires)
          File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/worker/job.py", line 243, in __init__
            self.task = tasks[self.task_name]
          File "/home/myuser/mysite-env/lib/python2.6/site-packages/celery/registry.py", line 63, in __getitem__
            raise self.NotRegistered(str(exc))
        NotRegistered: "Task of kind 'fable.jobs.tasks.Submitter' is not registered, please make sure it's imported."

    It's weird, because the celeryd client does show that it's registered when I launch it:

        [2011-01-09 17:38:27,446: WARNING/MainProcess] Configuration ->
            . broker -> amqp://GOGOme@localhost:5672/fablemq
            . queues ->
                . celery -> exchange:celery (direct) binding:celery
            . concurrency -> 1
            . loader -> celery.loaders.default.Loader
            . logfile -> [stderr]@INFO
            . events -> OFF
            . beat -> OFF
            . tasks ->
                . tasks.Decayer
                . tasks.Submitter

    Can someone help?
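
    A hedged reading of the error: the worker registers the task as tasks.Submitter (CELERY_IMPORTS pulls in the bare tasks module via the extended PYTHONPATH), while Django imports the same class as fable.jobs.tasks.Submitter, so the two names never match in the registry. One sketch of a fix is to pin the task name so every import path registers it identically:

        # Sketch: give the task an explicit, stable name so it matches however
        # the module is imported ("tasks" by the worker, "fable.jobs.tasks" by Django).
        from celery.task import Task

        class Submitter(Task):
            name = "tasks.Submitter"  # must match what the worker prints at startup

            def run(self, post, **kwargs):
                return "Yes, it works!!!!!!"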

    Read the article

  • How can I get a distinct list of elements in a hierarchical query?

    - by RenderIn
    I have a database table, with people identified by a name, a job and a city. I have a second table that contains a hierarchical representation of every job in the company in every city. Suppose I have 3 people in the people table [name(PK), title, city]:

        Jim, Salesman, Houston
        Jane, Associate Marketer, Chicago
        Bill, Cashier, New York

    And I have thousands of job type/location combinations in the job table, a sample of which follows. You can see the hierarchical relationship, since parent_title is a foreign key to title [title, city, pay, parent_title]:

        Salesman, Houston, $50000, CEO
        Cashier, Houston, $25000
        CEO, USA, $1000000
        Associate Marketer, Chicago, $75000
        Senior Marketer, Chicago, $125000
        .....

    The problem I'm having is that my people table has a composite key, so I don't know how to structure the START WITH part of my query so that it starts with each of the three jobs in the cities I specified. I can execute three separate queries to get what I want, but this doesn't scale well, e.g.:

        select * from jobs
        start with city = (select city from people where name = 'Bill')
               and title = (select title from people where name = 'Bill')
        connect by prior parent_title = title
        UNION
        select * from jobs
        start with city = (select city from people where name = 'Jim')
               and title = (select title from people where name = 'Jim')
        connect by prior parent_title = title
        UNION
        select * from jobs
        start with city = (select city from people where name = 'Jane')
               and title = (select title from people where name = 'Jane')
        connect by prior parent_title = title

    How else can I get a distinct list (or I could wrap it with a distinct if not possible) of all the jobs which are above the three people I specified?
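
    A hedged single-query sketch for Oracle: START WITH accepts an arbitrary condition, including a multi-column IN over a subquery, so all three starting points can be seeded at once and DISTINCT applied on top. Verify the multi-column IN against your Oracle version.

        -- Sketch: seed START WITH from the people table instead of three UNIONs.
        SELECT DISTINCT j.title, j.city, j.pay
        FROM jobs j
        START WITH (j.city, j.title) IN (SELECT p.city, p.title
                                         FROM people p
                                         WHERE p.name IN ('Bill', 'Jim', 'Jane'))
        CONNECT BY PRIOR j.parent_title = j.title;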

    Read the article

  • Tuple struct constructor complains about private fields

    - by Grubermensch
    I am working on a basic shell interpreter to familiarize myself with Rust. While working on the table for storing suspended jobs in the shell, I have gotten stuck at the following compiler error message:

        tsh.rs:8:18: 8:31 error: cannot invoke tuple struct constructor with private fields
        tsh.rs:8     let mut jobs = job::JobsList(vec![]);
                                    ^~~~~~~~~~~~~

    It's unclear to me what is being seen as private here. As you can see below, both of the structs are tagged with pub in my module file. So, what's the secret sauce?

    tsh.rs:

        use std::io;
        mod job;

        fn main() {
            // Initialize jobs list
            let mut jobs = job::JobsList(vec![]);

            loop {
                /*** Shell runtime loop ***/
            }
        }

    job.rs:

        use std::fmt;

        pub struct Job {
            jid: int,
            pid: int,
            cmd: String
        }

        impl fmt::Show for Job {
            /*** Formatter ***/
        }

        pub struct JobsList(Vec<Job>);

        impl fmt::Show for JobsList {
            /*** Formatter ***/
        }
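
    A hedged note on the likely cause: pub on a tuple struct does not make its fields public, and a tuple struct's constructor is only callable where all of its fields are visible. Marking the field pub should let tsh.rs invoke it (this matches the 2014-era Rust of the post; the visibility rule is the same in modern Rust):

        // job.rs -- sketch: expose the tuple struct's field so its constructor
        // can be invoked from outside the `job` module.
        pub struct JobsList(pub Vec<Job>);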

    Read the article

  • Custom rowsetClass does not return values

    - by Wesley
    Hi, I'm quite new to Zend and its database classes. I'm having problems mapping a Zend_Db_Table_Row_Abstract to my rows. The problem is that whenever I try to map it to a class (Job) that extends the Zend_Db_Table_Row_Abstract class, the database data is not retrievable anymore. I'm not getting any errors; trying to get data simply returns null. Here is my code so far:

    Jobs:

        class Jobs extends Zend_Db_Table_Abstract
        {
            protected $_name = 'jobs';
            protected $_rowsetClass = "Job";

            public function getActiveJobs()
            {
                $select = $this->select()->where('jobs.jobDateOpen < UNIX_TIMESTAMP()')->limit(15,0);
                $rows = $this->fetchAll($select);
                return $rows;
            }
        }

    Job:

        class Job extends Zend_Db_Table_Row_Abstract
        {
            public function getCompanyName()
            {
                //Gets the companyName
            }
        }

    Controller:

        $oJobs = new Jobs();
        $aActiveJobs = $oJobs->getActiveJobs();
        foreach ($aActiveJobs as $value) {
            var_dump($value->jobTitle);
        }

    When I remove the "protected $_rowsetClass = "Job";" line, so that the table row is not mapped to my own class, I get all the jobTitles perfectly. What am I doing wrong here? Thanks in advance, Wesley
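
    A hedged guess at the culprit: Job extends Zend_Db_Table_Row_Abstract, which makes it a row class, but it is being registered as the rowset class. Zend_Db_Table configures the two separately, so the fix would be:

        // Sketch: register Job as the *row* class; $_rowsetClass expects a
        // Zend_Db_Table_Rowset_Abstract subclass, not a row class.
        class Jobs extends Zend_Db_Table_Abstract
        {
            protected $_name = 'jobs';
            protected $_rowClass = 'Job';   // was $_rowsetClass
        }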

    Read the article

  • MVC Application Design

    - by Paul Brown
    Hello. I am about to create my first proper application in ASP.NET MVC 3. It is basically a jobs site with 3 levels:

    1) Users - No registration, and can view all jobs posted on the website
    2) Posters - Need to register and log in to post adverts
    3) Admin - Need to register and log in to post adverts and review postings before they go live

    Would you suggest I use the same Jobs controller for the three levels I mention above, with a LIST action to show jobs to Users and CREATE & EDIT actions for Posters and Admin? Thanks, Paul
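
    One hedged sketch of the single-controller approach, using ASP.NET MVC's built-in role-based authorization (the role names are illustrative):

        // Sketch: one JobsController, access narrowed per action by role.
        public class JobsController : Controller
        {
            // Anonymous users: browse all live jobs.
            public ActionResult List()
            {
                return View();
            }

            // Posters and admins: submit adverts.
            [Authorize(Roles = "Poster, Admin")]
            public ActionResult Create()
            {
                return View();
            }

            // Admins only: review postings before they go live.
            [Authorize(Roles = "Admin")]
            public ActionResult Review(int id)
            {
                return View();
            }
        }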

    Read the article

  • Change write-host output color based on foreach if elseif outcome in Powershell

    - by Emo
    I'm trying to change the color of write-host output based on the LastRunOutcome property of SQL Server jobs in PowerShell, as in: if a job was successful, the output of LastRunOutcome is "Success" in green; if it failed, then "Failed" in red. I have the script working to get the desired job status; I just don't know how to change the colors. Here's what I have so far:

        # Check for failed SQL jobs on multiple servers
        [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null

        foreach ($svr in get-content "C:\serverlist2.txt")
        {
            $a = get-date
            $BegDate = (Get-Date $a.AddDays(-1) -f d) + " 12:00:00 AM"
            $BegDateTrans = [system.datetime]$BegDate
            write-host $svr
            $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" "$svr"
            $srv.jobserver.jobs |
                where-object {$_.lastrundate -ge $BegDateTrans -and $_.Name -notlike "????????-????-????-????-????????????"} |
                format-table name,lastrunoutcome,lastrundate -autosize
            foreach ($_.lastrunoutcome in $srv.jobserver.jobs)
            {
                if ($_.lastrunoutcome = 0)
                {
                    -forgroundcolor red
                }
                else {}
            }
        }

    This seems to be the closest I've gotten, but it's giving me an error of ""LastRunOutcome" is a ReadOnly property." Any help would be greatly appreciated! Thanks! Emo
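
    A hedged sketch of the color logic. The error comes from foreach ($_.lastrunoutcome in ...), which tries to assign to a read-only property; also, = is assignment in PowerShell where -eq is comparison, and the parameter is spelled -ForegroundColor. Replacing the inner loop with something like the following should work, assuming LastRunOutcome renders as Succeeded/Failed on SMO job objects:

        # Sketch: iterate the jobs themselves and pick a color per outcome.
        foreach ($job in $srv.JobServer.Jobs) {
            if ($job.LastRunOutcome -eq "Failed") {
                Write-Host $job.Name $job.LastRunOutcome $job.LastRunDate -ForegroundColor Red
            }
            else {
                Write-Host $job.Name $job.LastRunOutcome $job.LastRunDate -ForegroundColor Green
            }
        }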

    Read the article

  • MySQL Gurus: How to pull a complex grid of data from a MySQL database with one query?

    - by iopener
    Hopefully this is less complex than I think. I have one table of companies, and another table of jobs, and a third table that contains a single entry for each employee in each job from each company. NOTE: Some companies won't have employees in some jobs, and some companies will have more than one employee in some jobs. The company table has a companyid and companyname field, the job table has a jobid and jobtitle field, and the employee table has employeeid, companyid, jobid and employeename fields. I want to build a table like this:

              +-----------+-----------+-----------+
              | Company A | Company B | Company C |
        ------+-----------+-----------+-----------+
        Job A | Emp 1     | Emp 2     |           |
        ------+-----------+-----------+-----------+
        Job B | Emp 3     |           | Emp 4     |
              |           |           | Emp 5     |
        ------+-----------+-----------+-----------+
        Job C |           | Emp 6     |           |
              |           | Emp 7     |           |
              |           | Emp 8     |           |
        ------+-----------+-----------+-----------+

    I had previously been looping through a result set of jobs, and for each job, looping through a result set of each company, and for each company, looping through each employee and printing it in a table (gross, but performance was not supposed to be a consideration). The app has grown in popularity, and now we have 100 companies and hundreds of jobs, and the server is crapping out (all the id fields are indexed). Any suggestions on how to write a single query to get this data? I don't need the company names or job titles (obviously), but I do need some way to identify where each row from the result should be printed. I'm imagining a result set that just contains a long list of joined employees, and I could write a loop to use the companyid and employeeid values to tell me when to create a new cell or table row. This works as long as there aren't ZERO employees; I would need a NULL employee name for that, I think? Am I completely on the wrong track? Thanks in advance for any ideas!
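
    A hedged sketch of a single query: cross join jobs with companies so every (job, company) cell exists, then left join employees so empty cells come back with a NULL employeename, ordered so a loop can walk the grid cell by cell:

        -- Sketch: one row per (job, company, employee-or-NULL), in grid order.
        SELECT j.jobid,
               c.companyid,
               e.employeename          -- NULL marks an empty cell
        FROM jobs j
        CROSS JOIN companies c
        LEFT JOIN employees e
               ON e.jobid = j.jobid
              AND e.companyid = c.companyid
        ORDER BY j.jobid, c.companyid, e.employeename;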

    Read the article

  • Linq to SQL gives NotSupportedException when using local variables

    - by zwanz0r
    It appears to me that it matters whether you use a variable to temporarily store an IQueryable or not. See the simplified example below. This works:

        List<string> jobNames = new List<string> { "ICT" };

        var ictPeops = from p in dataContext.Persons
                       where (from j in dataContext.Jobs
                              where jobNames.Contains(j.Name)
                              select j.ID).Contains(p.JobID)
                       select p;

    But when I use a variable to temporarily store the subquery, I get an exception:

        List<string> jobNames = new List<string> { "ICT" };

        var jobs = from j in dataContext.Jobs
                   where jobNames.Contains(j.Name)
                   select j.ID;

        var ictPeops = from p in dataContext.Persons
                       where jobs.Contains(p.JobID)
                       select p;

        "System.NotSupportedException: Queries with local collections are not supported"

    I don't see what the problem is. Isn't this logic supposed to work in LINQ?
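
    A hedged workaround, assuming integer job IDs and that you only need the matching IDs: materialize the inner query first, so LINQ to SQL sees a plain list of values (translated to an IN (...) clause) rather than a composed query that captures a local collection:

        // Sketch: evaluate the job-ID subquery up front, then reuse the result.
        List<string> jobNames = new List<string> { "ICT" };

        List<int> jobIds = (from j in dataContext.Jobs
                            where jobNames.Contains(j.Name)
                            select j.ID).ToList();    // runs one query now

        var ictPeops = from p in dataContext.Persons
                       where jobIds.Contains(p.JobID) // becomes WHERE JobID IN (...)
                       select p;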

    Read the article

  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: We have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a client, which is a Windows Forms based .NET app. Typically the service has 50-100 inbound calls/minute to add or query jobs that are stored in a table on SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs and starts processing them. A job has 7 states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked. Another responsibility of this service is to update all clients on every state change of every job. This means almost 200+ callbacks to clients per second.

    Question: This whole implementation is done using WCF duplex bindings and works perfectly fine on a small number of parallel jobs. The problem arises when we scale it up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!

    Read the article

  • CakePHP: How can I change this find call to include all records that do not exist in the associated

    - by Stephen
    I have a few tables with the following relationships: Company hasMany Jobs, Employees, Trucks, and Users. I've got all my foreign keys set up properly, along with the tables' models, controllers, and views. Originally, the jobs table had a boolean field called "assigned". The following find operation (from the JobsController) successfully returns all employees, all trucks, and any jobs that are not assigned and fall on a certain day for a single company (without returning users, by utilizing the containable behavior):

        $this->set('resources', $this->Job->Company->find('first', array(
            'conditions' => array(
                'Company.id' => $company_id
            ),
            'contain' => array(
                'Employee',
                'Truck',
                'Job' => array(
                    'conditions' => array(
                        'Job.assigned' => false,
                        'Job.pickup_date' => date('Y-m-d', strtotime('Today'))
                    )
                )
            )
        )));

    Now, since writing this code, I decided to do a lot more with the job assignments. So I've created a new model "Assignment" that belongsTo Truck and belongsTo Job. I've added hasMany Assignments to both the Truck model and the Jobs model. I have both foreign keys in the assignments table, along with some other assignment fields. Now, I'm trying to get the same information above, only instead of checking the assigned field from the job table, I want to check the assignments table to ensure that the job does not exist there. I can no longer use the containable behavior if I'm going to use the "joins" feature of the find method, due to mysql errors (according to the cookbook). But the following query returns all jobs, even if they fall on different days:

        $this->set('resources', $this->Job->Company->find('first', array(
            'joins' => array(
                array(
                    'table' => 'employees',
                    'alias' => 'Employee',
                    'type' => 'LEFT',
                    'conditions' => array('Company.id = Employee.company_id')
                ),
                array(
                    'table' => 'trucks',
                    'alias' => 'Truck',
                    'type' => 'LEFT',
                    'conditions' => array('Company.id = Truck.company_id')
                ),
                array(
                    'table' => 'jobs',
                    'alias' => 'Job',
                    'type' => 'LEFT',
                    'conditions' => array('Company.id = Job.company_id')
                ),
                array(
                    'table' => 'assignments',
                    'alias' => 'Assignment',
                    'type' => 'LEFT',
                    'conditions' => array('Job.id = Assignment.job_id')
                )
            ),
            'conditions' => array(
                'Job.pickup_date' => $day,
                'Company.id' => $company_id,
                'Assignment.job_id IS NULL'
            )
        )));
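
    A hedged alternative, querying from the Job model itself so the date and "not yet assigned" conditions filter the actual job rows; the join syntax is standard CakePHP, and the field names follow the question. Employees and trucks could then be fetched separately.

        // Sketch: fetch unassigned jobs for the day directly off the Job model,
        // using a LEFT JOIN plus IS NULL anti-join against assignments.
        $jobs = $this->Job->find('all', array(
            'joins' => array(
                array(
                    'table' => 'assignments',
                    'alias' => 'Assignment',
                    'type' => 'LEFT',
                    'conditions' => array('Job.id = Assignment.job_id')
                )
            ),
            'conditions' => array(
                'Job.company_id' => $company_id,
                'Job.pickup_date' => $day,
                'Assignment.job_id IS NULL' // keep only jobs with no assignment row
            )
        ));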

    Read the article

  • Multiprocessing Bomb

    - by iKarampa
    I was working through the following example from Doug Hellmann's tutorial on multiprocessing:

        import multiprocessing

        def worker():
            """worker function"""
            print 'Worker'
            return

        if __name__ == '__main__':
            jobs = []
            for i in range(5):
                p = multiprocessing.Process(target=worker)
                jobs.append(p)
                p.start()

    When I tried to run it outside the if statement:

        import multiprocessing

        def worker():
            """worker function"""
            print 'Worker'

        jobs = []
        for i in range(5):
            p = multiprocessing.Process(target=worker)
            jobs.append(p)
            p.start()

    It started spawning processes non-stop, without any way to terminate it. Why would that happen? Why did it not generate 5 processes and exit? Why do I need the if statement?
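
    A hedged explanation: on platforms that spawn rather than fork (Windows is the classic case), each child process re-imports the main module in order to reconstruct the worker function. Without the guard, that re-import re-executes the spawning loop itself, so every child spawns five more children, recursively. The guard keeps the loop out of the child's import:

        import multiprocessing

        def worker():
            print 'Worker'

        if __name__ == '__main__':
            # Only the original process gets here; children re-import this module
            # (not as __main__), define worker(), and skip this block entirely.
            jobs = [multiprocessing.Process(target=worker) for i in range(5)]
            for p in jobs:
                p.start()
            for p in jobs:
                p.join()  # wait for the five workers instead of exiting immediately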

    Read the article

  • How to tell process id within Python

    - by R S
    Hey, I am working with a cluster system over Linux (www.mosix.org) that allows me to run jobs and have the system run them on different computers. Jobs are run like so:

        mosrun ls &

    This will naturally create the process and run it in the background, returning the process id, like so:

        [1] 29199

    Later it will return. I am writing a Python infrastructure that would run jobs and control them. For that I want to run jobs using the mosrun program as above, and save the process ID of the spawned process (29199 in this case). This naturally cannot be done using os.system or commands.getoutput, as the printed ID is not what the process prints to output... Any clues?

    Edit: Since the Python script is only meant to initially run the script, the scripts need to run longer than the Python shell. I guess that means the mosrun process cannot be the script's child process. Any suggestions? Thanks
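
    A hedged sketch with the standard library: subprocess.Popen launches the job without waiting and exposes the child's PID directly, and the child keeps running after the Python process exits (it is simply reparented):

        import subprocess

        # Sketch: launch via mosrun and record the spawned process's PID.
        proc = subprocess.Popen(["mosrun", "ls"])
        print proc.pid  # e.g. 29199 -- the mosrun process, which outlives this script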

    Read the article

  • Load balancing and scheduling algorithms.

    - by Lukas Šalkauskas
    Hello there, so here is my problem: I have several servers with different configurations. I have different calculations (jobs); I can predict how long approximately each job will take to be calculated. Also, I have priorities. My question is how to keep all machines loaded 99-100% and schedule the jobs in the best way. Each machine can do several calculations at a time. Jobs are pushed to the machine. The central machine knows the current load of each machine. Also, I would like to assign some kind of machine learning here, because I will know statistics of each job (started, finished, CPU load etc.). How can I distribute jobs (calculations) in the best possible way, keeping in mind the priorities? Any suggestions, ideas, or algorithms? FYI: My platform is .NET.
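
    As a hedged starting point, a greedy variant of the classic longest-processing-time rule fits this setup when runtimes are predictable: take jobs by priority, longest first, and always hand the next job to the least-loaded machine. A minimal C# sketch with illustrative types:

        using System.Collections.Generic;
        using System.Linq;

        // Illustrative sketch: greedy LPT scheduling with priorities.
        class Job
        {
            public string Name;
            public int Priority;            // higher = more urgent
            public double PredictedSeconds; // your runtime prediction
        }

        class Machine
        {
            public string Name;
            public double CommittedSeconds; // predicted work already assigned
            public List<Job> Queue = new List<Job>();
        }

        static class Scheduler
        {
            // High priority first, longest first, each to the least-loaded machine.
            public static void Assign(IEnumerable<Job> jobs, List<Machine> machines)
            {
                foreach (Job job in jobs.OrderByDescending(j => j.Priority)
                                        .ThenByDescending(j => j.PredictedSeconds))
                {
                    Machine target = machines.OrderBy(m => m.CommittedSeconds).First();
                    target.Queue.Add(job);
                    target.CommittedSeconds += job.PredictedSeconds;
                }
            }
        }

    Feeding actual runtimes back into PredictedSeconds is where the machine-learning idea would slot in.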

    Read the article

  • Define keys in temporary table creation

    - by imperium2335
    How do I define the keys for a temporary table that is being created from a SELECT statement? I have:

        CREATE temporary TABLE _temp_unique_parts_trading engine=memory AS (
            SELECT parts_trading.enquiryref, sellingcurrency, jobs.id AS jobID
            FROM parts_trading, jobs
            WHERE jobs.enquiryref = parts_trading.enquiryref
            GROUP BY parts_trading.enquiryref
        );

    But where do I define the keys?
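
    A hedged sketch: MySQL accepts index definitions in the column list of CREATE TABLE ... AS SELECT, before the AS, so the keys can be declared right there (the index names and key choices below are illustrative):

        -- Sketch: declare keys ahead of the SELECT that populates the table.
        CREATE TEMPORARY TABLE _temp_unique_parts_trading (
            PRIMARY KEY (enquiryref),   -- pick keys that match your lookups
            INDEX idx_jobid (jobID)
        ) ENGINE=MEMORY AS (
            SELECT parts_trading.enquiryref, sellingcurrency, jobs.id AS jobID
            FROM parts_trading, jobs
            WHERE jobs.enquiryref = parts_trading.enquiryref
            GROUP BY parts_trading.enquiryref
        );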

    Read the article

  • Multiple left joins, how to output in PHP

    - by Dan
    I have 3 tables I need to join. The contracts table is the main table; the jobs and companies tables are extra info that can be associated to the contracts table. So, since I want all entries from my contracts table, and the jobs and companies data only if it exists, I wrote the query like this:

        $sql = "SELECT *
                FROM contracts
                LEFT JOIN jobs ON contracts.job_id = jobs.id
                LEFT JOIN companies ON contracts.company_id = companies.id
                ORDER BY contracts.end_date";

    Now how would I output this in PHP? I tried this but kept getting an undefined error "Notice: Undefined index: contracts.id":

        $sql_result = mysql_query($sql,$connection) or die ("Fail.");
        if(mysql_num_rows($sql_result) > 0){
            while($row = mysql_fetch_array($sql_result)) {
                $contract_id = stripslashes($row['contracts.id']);
                $job_number = stripslashes($row['jobs.job_number']);
                $company_name = stripslashes($row['companies.name']);
        ?>
                <tr id="<?=$contract_id?>">
                    <td><?=$job_number?></td>
                    <td><?=$company_name?></td>
                </tr>
        <?
            }
        }else{
            echo "No records found";
        }

    Any help is appreciated.
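
    A hedged sketch of the usual fix: mysql_fetch_array keys carry bare column names with no table prefix (and with SELECT * same-named columns collide), so alias the columns you need and read those aliases:

        // Sketch: alias explicitly so each value has a unique, predictable key.
        $sql = "SELECT contracts.id AS contract_id,
                       jobs.job_number AS job_number,
                       companies.name AS company_name
                FROM contracts
                LEFT JOIN jobs ON contracts.job_id = jobs.id
                LEFT JOIN companies ON contracts.company_id = companies.id
                ORDER BY contracts.end_date";

        $sql_result = mysql_query($sql, $connection) or die("Fail.");
        while ($row = mysql_fetch_array($sql_result)) {
            $contract_id  = $row['contract_id'];
            $job_number   = $row['job_number'];   // NULL when no job matched
            $company_name = $row['company_name']; // NULL when no company matched
            // ... output the table row as before ...
        }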

    Read the article
