Search Results

Search found 8299 results on 332 pages for 'job hierarchy'.

Page 19/332 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Task Scheduler: Logon as Batch Job Rights

    - by Brohan
    I'm trying to set up a scheduled task which will run under the Network Administrator's account on a specified computer, whether that account is logged in or not. According to Task Scheduler, I need the 'Log on as batch job' right. When I try to change this setting in the Local Security Policy window, the option to add the Administrator account to the group is greyed out. Currently, only LOCAL_SERVICE may log on as a batch job, and attempting to add the administrator to this group hasn't worked. How do I grant this permission so that my tasks run whether I'm logged in or not?
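
    A greyed-out entry in Local Security Policy often means the right is being enforced from somewhere else (for example a domain GPO), so that is worth ruling out first. If the local policy is editable, one way to grant the right from an elevated command prompt is secedit; the following is only a sketch, and the file paths and account name are placeholders:

        rem Export the current local security policy (paths are placeholders)
        secedit /export /cfg C:\temp\secpol.inf

        rem Edit C:\temp\secpol.inf: in the [Privilege Rights] section, append the
        rem account to the SeBatchLogonRight line, e.g.
        rem   SeBatchLogonRight = *S-1-5-19,MYDOMAIN\Administrator

        rem Re-apply only the user-rights area
        secedit /configure /db C:\temp\secedit.sdb /cfg C:\temp\secpol.inf /areas USER_RIGHTS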

    Read the article

  • "Bad response to Storage command" when scheduling job with Bacula

    - by Joril
    I have a Bacula setup with 9 clients, and it's working happily. Today I had to add another client, so I went and copied+adapted the existing configuration files from another client, but when I schedule a job for the new client, I get these errors:

        20-Mar 17:50 tools-dir JobId 39: Start Backup JobId 39, Job=BackupPresenze2.2012-03-20_17.50.49_04
        20-Mar 17:50 tools-dir JobId 39: Using Device "FileStorage"
        20-Mar 17:50 presenze2-fd JobId 39: Fatal error: Failed to connect to Storage daemon: bacula.mylan.local:9103
        20-Mar 17:50 tools-dir JobId 39: Fatal error: Bad response to Storage command: wanted 2000 OK storage , got 2902 Bad storage

    From the client I can telnet to bacula.mylan.local:9103 just fine, and jobs for other clients work successfully... What could I check? (Server and client run Ubuntu 10.04, if it's relevant)
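
    Since the client can reach bacula.mylan.local:9103, the usual suspects are a mismatch between the Storage resource the Director hands to the new client's job and what the SD actually serves, or a concurrency limit being hit when the extra job is added. A sketch of the resources worth comparing (names and passwords below are placeholders, not taken from the question):

        # bacula-dir.conf -- Storage resource referenced by the new client's Job/Pool
        Storage {
          Name = File
          Address = bacula.mylan.local     # must resolve from the client, not just the server
          SDPort = 9103
          Password = "sd-password"         # placeholder; must match bacula-sd.conf
          Device = FileStorage
          Media Type = File
          Maximum Concurrent Jobs = 10     # raise if several jobs share one device
        }

        # bacula-sd.conf -- must agree with the above
        Director {
          Name = tools-dir
          Password = "sd-password"         # same placeholder password
        }
        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /backup         # placeholder path
          LabelMedia = yes
          Random Access = yes
          AutomaticMount = yes
        }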

    Read the article

  • CentOS cron job running a PHP file

    - by user50946
    I have a PHP file (called test.php) set to run at the 5th minute of every hour. Whenever I run the file manually (by visiting its path in the web browser) it works fine, but when the cron job tries to run it I get an error message. My cron job is:

        #### Delete Records
        5 * * * * /var/www/html/phpsysinfo/cronUpdateLeadBucketOnEnergycAlliance.php

    My PHP file is (path: /var/www/html/phpsysinfo/phpfile):

        <?php
        require("dbconnect.php");
        $sql = mysql_query("DELETE FROM list where status <> 'LEAD'") or die(mysql_error());
        ?>

    And the error that I get is:

        /var/www/html/phpsysinfo/phpFile.php: line 1: ?php: No such file or directory
        /var/www/html/phpsysinfo/phpFile.php: line 2: syntax error near unexpected token `"dbconnect.php"'
        /var/www/html/phpsysinfo/phpFile.php: line 2: `require("dbconnect.php");

    Thanks
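
    Those errors are the shell, not PHP, interpreting the file: cron executes the path directly, so without an interpreter line the script is fed to bash. A hedged sketch of the two usual fixes, reusing the question's paths (the /usr/bin/php location is an assumption; check it with "which php"):

        # Option 1: call the PHP CLI explicitly in the crontab entry
        5 * * * * /usr/bin/php /var/www/html/phpsysinfo/cronUpdateLeadBucketOnEnergycAlliance.php

        # Option 2: add a shebang as the first line of the (executable) script
        #!/usr/bin/php
        <?php
        require("dbconnect.php");
        $sql = mysql_query("DELETE FROM list where status <> 'LEAD'") or die(mysql_error());
        ?>

    Note that require("dbconnect.php") uses a relative path; under cron the working directory will not be /var/www/html/phpsysinfo, so an absolute path (or a cd in the crontab entry) may also be needed.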

    Read the article

  • Job queueing in Toast Titanium 10?

    - by moonslug
    I have a bunch of .MP4 video files I'm burning to DVD-Video using Toast Titanium 10 on my MacBook Pro. Right now, I'm doing them one at a time. Because my computer is several years old, encoding video for a single DVD takes approximately six hours. I've discovered that it appears I can encode the video directly to a .toast format; however, I have yet to figure out if I can burn these directly to DVD. Also, I have quite a bit of video left to burn, and even that method would require me to intervene manually to start a new encoding or burn job every six hours. Would it be possible to somehow queue up multiple DVD-Video encoding jobs at once, and have the computer work through them automatically? The actual writing to DVD disc doesn't take nearly as long, and if all my video were already encoded, the job would go a lot quicker. Maybe this can be accomplished with a different piece of software?

    Read the article

  • MapReduce job is hung after 1 of 5 reducers completed in a single-node environment

    - by Marboni
    I have only one Data Node in my dev environment on EC2. I ran a heavy MR job and after 6 hours noticed that 100% of the mappers and 20% of the reducers had finished (1 reducer shows 100% completion, the other ones 0%). It looks like the job is hung between two reducer runs, and I don't see any errors in the log files. What could it be? P.S. Last logs of the successfully finished reducer:

        2012-11-09 11:29:21,576 INFO org.apache.hadoop.mapred.Task: Task:attempt_201211090523_0004_r_000000_0 is done. And is in the process of commiting
        2012-11-09 11:29:22,692 INFO org.apache.hadoop.mapred.Task: Task attempt_201211090523_0004_r_000000_0 is allowed to commit now
        2012-11-09 11:29:22,719 INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_201211090523_0004_r_000000_0' to /data/output/1352457275873/20121109-053433-common
        2012-11-09 11:29:22,721 INFO org.apache.hadoop.mapred.Task: Task 'attempt_201211090523_0004_r_000000_0' done.
        2012-11-09 11:29:22,725 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1

    Read the article

  • Cron job failing to back up a Postgres database

    - by user705142
    I'm unsure what's going on here: I've got a backup script which runs fine under root. It produces a 300kb database dump in the proper directory. When running it as a cron job with exactly the same command however, an empty gzip file appears with nothing in it. The cron log shows no error, just that the command has been run. This is the script:

        #! /bin/bash
        DIR="/opt/backup"
        YMD=$(date "+%Y-%m-%d")
        su -c "pg_dump -U postgres mydatabasename | gzip -6 > "$DIR/database_backup.$YMD.gz" " postgres

        # delete backup files older than 60 days
        OLD=$(find $DIR -type d -mtime +60)
        if [ -n "$OLD" ] ; then
          echo deleting old backup files: $OLD
          echo $OLD | xargs rm -rfv
        fi

    And the cron job:

        01 10 * * * root sh /opt/daily_backup_script.sh

    It produces a database_backup file, just an empty one. Anyone know what's going on here?
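
    When a script behaves differently under cron, the cause is usually the environment: cron's PATH is minimal, and here the dump also runs through su with nested double quotes, so it is hard to see which shell expands what. A hedged rework of the dump line (the pg_dump path is an assumption; check where it actually lives with "which pg_dump"):

        #!/bin/bash
        # Sketch: absolute paths, and a single quoted su -c argument so the dump runs
        # as postgres while the redirection happens in this (root) script.
        DIR="/opt/backup"
        YMD=$(date "+%Y-%m-%d")

        su - postgres -c "/usr/bin/pg_dump -U postgres mydatabasename" 2>>"$DIR/backup.err" | gzip -6 > "$DIR/database_backup.$YMD.gz"

    Capturing stderr to a file makes the next failure visible, and running the script once with "env -i /bin/bash /opt/daily_backup_script.sh" roughly reproduces cron's empty environment for testing.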

    Read the article

  • How do I debug an upstart job?

    - by Cerales
    I have the following job in /etc/init/collector:

        start on runlevel [2345]
        stop on runlevel [!2345]
        expect daemon
        exec /usr/bin/twistd -y /path/to/my/tac/file

    When I start the job with sudo service collector start, it hangs. If I ctrl-c and run initctl list, I see this:

        collector start/killed, process 616

    I can't see an instance of the twistd daemon in ps, and the HTTP server it's supposed to be providing does not exist. I even tried this without 'expect daemon' and with a simple call to a one-line bash script using a script stanza, and it still doesn't work. I think I'm doing something very wrong. What could it be?
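
    A common source of exactly this hang is an expect stanza that doesn't match the number of times the daemon forks, which leaves upstart tracking a PID that no longer exists. One sketch that sidesteps fork counting entirely is to keep twistd in the foreground with its --nodaemon/-n flag and drop expect altogether (the .tac path is the question's placeholder):

        # /etc/init/collector.conf -- minimal sketch, no fork counting needed
        description "collector twistd service"

        start on runlevel [2345]
        stop on runlevel [!2345]

        respawn
        exec /usr/bin/twistd -n -y /path/to/my/tac/file

    If it still dies, upstart's messages in /var/log/syslog plus a twistd --logfile are usually enough to see why.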

    Read the article

  • Single Table Per Class Hierarchy with an abstract superclass using Hibernate Annotations

    - by Andy Hull
    I have a simple class hierarchy, similar to the following:

        @Entity
        @Table(name="animal")
        @Inheritance(strategy=InheritanceType.SINGLE_TABLE)
        @DiscriminatorColumn(name="animal_type", discriminatorType=DiscriminatorType.STRING)
        public abstract class Animal { }

        @Entity
        @DiscriminatorValue("cat")
        public class Cat extends Animal { }

        @Entity
        @DiscriminatorValue("dog")
        public class Dog extends Animal { }

    When I query "from Animal" I get this exception: "org.hibernate.InstantiationException: Cannot instantiate abstract class or interface: Animal". If I make Animal concrete, and add a dummy discriminator... such as @DiscriminatorValue("animal")... my query returns my cats and dogs as instances of Animals. I remember this being trivial with HBM based mappings but I think I'm missing something when using annotations. Can anyone help? Thanks!
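
    That exception usually appears when Hibernate only knows about the superclass, so a polymorphic query has no concrete subclass to instantiate. A hedged guess worth checking: are Cat and Dog actually registered as annotated classes? A minimal Hibernate 3 style bootstrap sketch (the configuration details are assumptions, not from the question):

        // Sketch: every class in the hierarchy is registered, not just Animal.
        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.AnnotationConfiguration;

        public class HibernateBootstrap {
            public static SessionFactory buildSessionFactory() {
                return new AnnotationConfiguration()
                        .addAnnotatedClass(Animal.class)
                        .addAnnotatedClass(Cat.class)
                        .addAnnotatedClass(Dog.class)
                        .configure()            // reads hibernate.cfg.xml, assumed present
                        .buildSessionFactory();
            }
        }

    The equivalent when configuring purely in hibernate.cfg.xml is one <mapping class="..."/> element per subclass.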

    Read the article

  • Chomsky hierarchy and programming languages

    - by dader51
    Hi, I'm trying to learn some aspects of the CH (Chomsky hierarchy) which are related to PLs (programming languages), and I still have to read the Dragon Book. I've read that most PLs can be parsed as a CFG (context-free grammar). In terms of computational power, that equals a non-deterministic pushdown automaton. Am I right? If that's true, then how can a CFG hold a UG (unrestricted grammar, which is Turing complete)? I'm asking because, even if PLs are CFGs, they are actually used to describe TMs (Turing machines), and thus UGs. I think that's because there are at least two different levels of computation: the first, the parsing of a CFG, focuses on the syntax related to the structure (representation?) of the language, while the other focuses on the semantics (meaning, interpretation of the data itself?) related to the capabilities of the PL, which is Turing complete. Again, are these assumptions right? Thanks a lot.

    Read the article

  • Generating a Call Hierarchy for dynamically invoked methods

    - by Maxim Veksler
    Hello, today's world of dynamic invocation, reflection and runtime injection just doesn't play well with traditional tools such as ctags, doxygen and CDOC. I am searching for a method call hierarchy visualization tool that can display both static and dynamic method invocations. It should be easy to use, light during execution and provide helpful detailed information about the recorded runtime session. Now I guess Callgrind could be considered a valid solution for the C family. What tool / technique could you suggest to create a call graph for both static and dynamic method invocation for JVM-based bytecode? The intended end result is a graphical display (preferably interactive) which can show the path from main() to each method that was invoked. During research for this post I stumbled upon javashot; it seems that this is the kind of approach I'm aiming at. I would prefer something integrated into a profiler or the like, which can then be used from within my IDE (Eclipse, IntelliJ, NetBeans and the like). Thank you, Maxim.

    Read the article

  • How to design this class hierarchy?

    - by devoured elysium
    I have defined an Event class, and the following classes all inherit from Event: AEvent, BEvent, CEvent and DEvent. Now, with the info I gather from all these Event classes, I will make a chart. With AEvent and BEvent I will generate points for that chart, while with CEvent and DEvent I will paint certain regions of the chart. How should I signal this in my class hierarchy? Should I make AEvent and BEvent inherit from PointEvent, and CEvent and DEvent inherit from RegionEvent, with both RegionEvent and PointEvent inheriting from Event? Should I add an Enum field to Event with 2 values, Point and Region, and have each child class set its value? Should I use some kind of pattern here? Which one? Thanks.
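
    For reference, a minimal sketch of the first option (intermediate abstract classes), written here in Java since the question doesn't name a language; the accessor names are invented for illustration, and BEvent/DEvent would follow the same pattern:

        abstract class Event { }

        // Events that contribute points to the chart.
        abstract class PointEvent extends Event {
            abstract double x();
            abstract double y();
        }

        // Events that paint a region of the chart.
        abstract class RegionEvent extends Event {
            abstract double from();
            abstract double to();
        }

        final class AEvent extends PointEvent {
            private final double px, py;
            AEvent(double px, double py) { this.px = px; this.py = py; }
            double x() { return px; }
            double y() { return py; }
        }

        final class CEvent extends RegionEvent {
            private final double start, end;
            CEvent(double start, double end) { this.start = start; this.end = end; }
            double from() { return start; }
            double to() { return end; }
        }

    The chart code can then branch on the static type (instanceof PointEvent / RegionEvent) instead of inspecting an enum field, which keeps the point/region distinction in the type system.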

    Read the article

  • Storing website hierarchy in SQL Server 2008

    - by Mika Kolari
    I want to store a website page hierarchy in a table. What I would like to achieve, efficiently, is to:

    1) resolve the (last valid) item by path (e.g. "/blogs/programming/tags/asp.net,sql-server", "/blogs/programming/hello-world")
    2) get ancestor items for a breadcrumb
    3) edit an item without updating the whole tree of children, grandchildren etc.

    Because of the 3rd point I thought the table could be like:

        ITEM
        id  type       slug         title             parentId
        1   area       blogs        Blogs
        2   blog       programming  Programming blog  1
        3   tagsearch  tags                           2
        4   post       hello-world  Hello World!      2

    Could I use Sql Server's hierarchyid type somehow (especially for point 1, where "/blogs/programming/tags" is the last valid item)? Tree depth would usually be around 3-4. What would be the best way to achieve all this?
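
    hierarchyid makes the breadcrumb query (point 2) straightforward, and depth 3-4 trees are well within its comfort zone; resolving the last valid item for a path (point 1) would still be done slug by slug, one level at a time. A hedged sketch, assuming a Node hierarchyid column is added to the Item table above:

        -- Sketch only: assumes an extra column [Node] hierarchyid on the Item table.
        -- Breadcrumb: item 4 and all of its ancestors, nearest the root first.
        DECLARE @node hierarchyid;
        SELECT @node = Node FROM Item WHERE id = 4;

        SELECT i.id, i.slug, i.title
        FROM Item AS i
        WHERE @node.IsDescendantOf(i.Node) = 1
        ORDER BY i.Node.GetLevel();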

    Read the article

  • Specialization hierarchy in a domain-model

    - by devoured elysium
    I'm trying to make the domain model of a management system. I have the following kinds of persons in this system: employee, manager and top manager. I decided to define a User, from which employee, manager and top manager will specialize. What I don't know is what kind of specialization hierarchy I should choose. I thought of two ways (shown as two diagrams in the original question, not reproduced here). Which might be preferable and why? As a long-time coder, every time I try to do a domain model, I have to fight against the urge to think about how I'm going to code this. From what I've understood, I should not think about those matters in the domain model, only about object relationships. I don't have to think of code duplication or any of those kinds of details here, so I can't really pick one of the options over the other. Thanks

    Read the article

  • What to do when you're the interviewer and you don't like your job?

    - by emcb
    I'm in a sorta strange predicament, and I could use some advice. When I was interviewing for my current job, the job description I was given seemed pretty darn nice to me. Without going into the details, the job hasn't quite turned out the way it was advertised. The company is great and takes care of its employees, but for someone who cares about the code they write and the work they do, it's a bad environment - effectively, we operate between 0.5 and 1.0 on the Joel test, and due to political issues we're not going to move beyond that any time soon. Bitter? Maybe. OK...so I'm in the market for a new job. But that's not where my dilemma is. The problem that I see coming is that I will be participating in interviewing some candidates for a position on my team, and I'm not sure what to do. I've heard through the grapevine that we have some really solid, promising, fresh-out-of-college prospects coming in to interview, and I honestly dread the thought of somebody having their first experience of engineering in this department. So I'm wondering: what should I do if/when the interviewee asks me

    - "Do you like your job?" (no)
    - "What kind of projects would I be working on?" (mostly static HTML/CSS changes)
    - Anything else that would elicit a negative answer if told truthfully

    Do I tell the truth, to give the candidate a real picture of the job? What if this scares them away, and what if it gets blamed on me? Do I fib or lie, saying we work on exciting projects with lots of flexibility, like the pitch my boss will give when the reality is quite different? Should I feel any kind of moral responsibility to let a promising young developer know that this isn't the job for them, or should I shut up and be loyal 100% to the company? Any approaches or advice is appreciated. I hope I don't come across as overly dramatic - I honestly struggle with this question.

    Read the article

  • Polymorphic functions with parameters from a class hierarchy

    - by myahya
    Let's say I have the following class hierarchy in C++:

        class Base;
        class Derived1 : public Base;
        class Derived2 : public Base;

        class ParamType;
        class DerivedParamType1 : public ParamType;
        class DerivedParamType2 : public ParamType;

    And I want a polymorphic function, func(ParamType), defined in Base, to take a parameter of type DerivedParamType1 for Derived1 and a parameter of type DerivedParamType2 for Derived2. How would this be done without pointers, if possible?

    Read the article

  • CTE to build a list of departments and managers (hierarchical)

    - by Milky Joe
    I need to generate a list of users that are managers, or managers of managers, for company departments. I have two tables; one details the departments and one contains the manager hierarchy (simplified): CREATE TABLE [dbo].[Manager]( [ManagerId] [int], [ParentManagerId] [int]) CREATE TABLE [dbo].[Department]( [DepartmentId] [int], [ManagerId] [int]) Basically, I'm trying to build a CTE that will give me a list of DepartmentIds, together with all ManagerIds that are in the manager hierarchy for that department. So... Say Manager 1 is the Manager for Department 1, and Manager 2 is Manager 1's Manager, and Manager 3 is Manager 2's Manager, I'd like to see: DepartmentId, ManagerId 1, 1 1, 2 1, 3 Basically, managers are able to deal with all of their sub-manager's departments. Building the CTE to return the Manager hierarchy was fairly simple, but I'm struggling to inject the Departments in there: WITH DepartmentManagers AS ( SELECT ManagerId, ParentManagerId, 0 AS Depth From Manager UNION ALL SELECT Manager.ManagerId, Manager.ParentManagerId, DepartmentManagers.Depth + 1 AS Depth FROM Manager INNER JOIN DepartmentManagers ON DepartmentManagers.ManagerId = Manager.ParentManagerId ) Can anyone help?

    Read the article

  • Types in Union or Concat cannot be constructed with hierarchy

    - by user927777
    I am trying to run a query very similar to the following:

        (from bs in DataContext.TblBookShelf
         join b in DataContext.Book on bs.BookID equals b.BookID
         where bs.BookShelfID == bookShelfID
         select new BookItem
         {
             Categories = String.Join("<br/>", b.BookCategories.Select(x => x.Name).DefaultIfEmpty().ToArray()),
             Name = b.Name,
             ISBN = b.ISBN,
             BookType = "Shelf"
         }).Union(from bs in DataContext.TblBookShelf
                  join bi in DataContext.TblBookInventory on bs.BookID equals bi.BookID
                  select new BookItem
                  {
                      Categories = String.Join("<br/>", bi.BookCategories.Select(x => x.Name).DefaultIfEmpty().ToArray()),
                      Name = bi.Name,
                      ISBN = bi.ISBN,
                      BookType = "Inventory"
                  });

    I am receiving "Types in Union or Concat cannot be constructed with hierarchy" when the statement executes. I need to be able to get a list of categories to display with each book. If anyone could shed some light on a possible solution, it would be greatly appreciated.

    Read the article

  • Quartz .Net Job calling WCF service

    - by mattcole
    Hi, what's the best way for me to call a WCF service from within a Quartz .Net job? Is the easiest way to write a separate exe that spins up a WCF proxy and have that exe called from within the job? That seems like it would work but is a bit convoluted. It'd be nicer if I could somehow have the proxy injected into the job. Thanks, Matt

    Read the article

  • Hadoop streaming job: stuck

    - by Algorist
    Hi, I am running a Hadoop streaming job and it got stuck for no apparent reason. I am not sure how to cancel the stuck task so that Hadoop schedules another attempt for the same job. I tried killing the whole job, but that doesn't solve it either. Does anyone know how to do this? Thank you, Bala
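
    For reference, the stock "hadoop job" CLI of that era can kill or fail a single task attempt so the framework reschedules just that attempt, without touching the rest of the job; a sketch with placeholder ids (attempt ids can be read off the JobTracker web UI):

        # list running jobs to find the job id
        hadoop job -list

        # kill one attempt (killed attempts are not counted against the retry limit)
        hadoop job -kill-task attempt_201211090523_0004_r_000001_0

        # or fail the attempt (this one does count against the retry limit)
        hadoop job -fail-task attempt_201211090523_0004_r_000001_0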

    Read the article

  • Minimum-Waste Print Job Grouping Algorithm?

    - by Matt Mc
    I work at a publishing house and I am setting up one of our presses for "ganging", in other words, printing multiple jobs simultaneously. Given that different print jobs can have different quantities, and anywhere from 1 to 20 jobs might need to be considered at a time, the problem is to determine which jobs to group together to minimize waste (waste coming from over-printing on the smaller-quantity jobs in a given set). Given the following stable data:

    - All jobs are equal in terms of spatial size -- placement on paper doesn't come into consideration.
    - There are three "lanes", meaning that three jobs can be printed simultaneously.
    - Ideally, each lane has one job. Part of the problem is minimizing how many lanes each job is run on. If necessary, one job could be run on two lanes, with a second job on the third lane.
    - The "grouping" waste from a given set of jobs (say their quantities are x, y and z) is the highest quantity minus each of the two lower ones. So if x is the highest, the grouping waste would be (x - y) + (x - z). Otherwise stated, waste is produced by printing jobs Y and Z (in excess of their quantities) up to the quantity of X.
    - The grouping waste is a qualifier for a given set, meaning it cannot exceed a certain quantity or the job is simply printed alone.

    So the question is: how to determine which sets of jobs are grouped together, out of any given number of jobs, based on the qualifiers of 1) three similar quantities OR 2) two quantities where one is approximately double the other, AND with the aim of minimal total grouping waste across the various sets.

    (Edit) Quantity information: typical job quantities can be from 150 to 350 on foreign languages, or 500 to 1000 on English print runs. This data can be used to set up some scenarios for an algorithm. For example, say you had 5 jobs: 1000, 500, 500, 450, 250. By looking at it, I can see a couple of answers. Obviously (1000/500/500) is not efficient, as you'll have a grouping waste of 1000. (500/500/450) is better, as you'll have a waste of 50, but then you run (1000) and (250) alone. But you could also run (1000/500) with 1000 on two lanes, (500/250) with 500 on two lanes, and then (450) alone. In terms of trade-offs for lane minimization vs. wastage, we could say that any grouping waste over 200 is excessive. (End Edit)

    ...Needless to say, quite a problem. (For me.) I am a moderately skilled programmer but I do not have much familiarity with algorithms and I am not fully studied in the mathematics of the area. I'm in the process of writing a sort of brute-force program that simply tries all options, neglecting any option tree that seems to have excessive grouping waste. However, I can't help but hope there's an easier and more efficient method. I've looked at various websites trying to find out more about algorithms in general and have been slogging my way through the symbology, but it's slow going. Unfortunately, Wikipedia's articles on the subject are very cross-dependent and it's difficult to find an "in". The only thing I've been able to really find would seem to be a definition of the rough type of algorithm I need: "Exclusive Distance Clustering", one-dimensionally speaking. I did look at what seems to be the popularly referred-to algorithm on this site, the Bin Packing one, but I was unable to see exactly how it would work with my problem. Any help is appreciated. :)

    Read the article

  • Optimal Sharing of heavy computation job using Snow and/or multicore

    - by James
    Hi, I have the following problem. First, my environment: I have two 24-CPU servers to work with and one big job (resampling a large dataset) to share between them. I've set up multicore and a (socket) Snow cluster on each. As a high-level interface I'm using foreach. What is the optimal way to share the job? Should I set up a Snow cluster using CPUs from both machines and split the job that way (i.e. use doSNOW for the foreach loop)? Or should I use the two servers separately and use multicore on each server (i.e. split the job into two chunks, run one on each server and then stitch the results back together)? Basically, what is an easy way to: 1. keep communication between servers down (since this is probably the slowest bit), and 2. ensure that the random numbers generated on the servers are not highly correlated?

    Read the article

  • How to get my first programming job.

    - by itsbunnies
    What is the best way to go about getting your very first programming job? I am currently going to college but have a few years before I graduate. I want to get out of my current "factory" job and into a computer related field as soon as possible. I found quite a few positions that I qualify for, but believe my pathetic resume has hurt me. My resume has a "skills" section where I list everything I am familiar with, but my employment section contains my first job (Super Wash, best car wash around!) and my current job (Assembling stuff that makes up various fans and motors). How do I get a company to give me a chance?

    Read the article

  • Condor job using a DAG with some jobs needing to run on the same host

    - by gurney alex
    I have a computation task which is split into several individual program executions, with dependencies. I'm using Condor 7 as the task scheduler (with the Vanilla Universe, due to constraints on the programs that are beyond my reach, so no checkpointing is involved), so a DAG looks like a natural solution. However some of the programs need to run on the same host. I could not find a reference on how to do this in the Condor manuals. Example DAG file:

        JOB A A.condor
        JOB B B.condor
        JOB C C.condor
        JOB D D.condor
        PARENT A CHILD B C
        PARENT B C CHILD D

    I need to express that B and D need to be run on the same computer node, without breaking the parallel execution of B and C. Thanks for your help.
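
    If it is acceptable to pick the node up front, the simplest sketch is to pin both submit files to one machine with a requirements expression (the hostname is a placeholder). Keeping the choice dynamic, i.e. running D wherever B happened to land, needs something more involved, such as a DAGMan POST script on B that rewrites D's submit file with the machine B ran on:

        # added to both B.condor and D.condor (hostname is a placeholder)
        Requirements = (Machine == "node01.example.com")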

    Read the article
