Search Results

Search found 8172 results on 327 pages for 'job interview'.

Page 70/327 | < Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >

  • Releasing software/Using Continuous Integration - What do most companies seem to use?

    - by Sagar
    I've set up our continuous integration system, and it has been working for about a year now. We have finally reached a point where we want to do releases using it. Before our CI system, the release process was: (Develop) -> Ready for release -> Create a branch -> (Build -> Fix bugs as QA finds them) Loop -> Final build -> Tag (Develop) -> Ready for release -> (build -> fix bugs) Loop -> Tag Our CI setup: 1 server for development (DEV) 1 server for qa/release (QA) The second one has integrated into CI perfectly. I create a branch when the software is ready for release, and the branch never changes thereafter, which means the build is reproducible without having to change the CI job. Any future development takes place on HEAD, and even maintenance releases get a completely new branch and a completely new job, which remains on the CI system forever, and then some. The first method is harder to adapt. If the branch changes, the build is not reproducible unless I use the tag to build [jobs on the CI server use the branch for QA/RELEASE, and HEAD for development builds]. However, if I use the tag to build, I have to create a new CI job to build from the tag (losing the changelog on the server), or change the existing job (losing the original job configuration). I know this sounds complicated, and if required, I will rewrite/edit to explain the situation better. However, my question: [if at all] what process does your company use to release software with a continuous integration system? Is it even done through the CI system, or manually?
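
    One pattern worth comparing against - purely a hypothetical sketch, with a made-up repository URL - is a single permanent, parameterized release job that receives the tag name as a build parameter. The job configuration and its changelog survive across releases, while each build stays reproducible because it exports exactly one tag:

        #!/bin/sh
        # release-build.sh -- called by one parameterized CI job.
        # RELEASE_TAG is supplied per build (e.g. RELEASE_TAG=v2.3.1), so no
        # new job is created per release and the job's history is preserved.
        set -e
        : "${RELEASE_TAG:?usage: RELEASE_TAG=<tag> release-build.sh}"

        svn export "http://svn.example.com/repo/tags/${RELEASE_TAG}" build-src
        cd build-src
        make clean all test    # substitute the project's real build entry point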

    Read the article

  • Enabling Attachments in Oracle Forms

    - by Manoj Madhusoodanan
    In this blog I will demonstrate how we can enable attachments in Oracle Forms. The customer requirement is to capture a job description, which will be a long text. Oracle Apps has a feature called attachments; through some easy setup we can enable attachments in forms, and we will address the customer requirement through this feature. Step 1: Identify the form name. The first thing you have to do is identify the form on which to enable attachments. In this scenario the form is PERWSDJT (the Job form in HRMS). Step 2: Check whether the Document Entity exists. If it exists, do nothing; otherwise create a new one. Step 3: Create Document Categories. Here I will create a category Job Description with Default Data Type Long Text and Application Human Resources. Once it is done I will assign this to the Create Job form. Step 4: Query the Attachment Functions for PERWSDJT. Click on Categories and add Job Description. Make sure the Enabled check box is checked. Step 5: Turn off the HR: Use Standard Attachments profile option. Output: Go to Work Structures > Job > Description

    Read the article

  • SEO: best way to deal with short lifetime URLs?

    - by Mike Norgate
    I am currently in the process of redesigning a job advert site and am trying to put a lot more effort into my SEO. My question is how I should deal with the URLs that point to job adverts when the advert expires. The options I have thought of so far are: Return a 404 error and redirect to a 404 page. Will it have an effect on ranking if there are a lot of URLs that return 404s after only being up for a few weeks? Redirect to the job listing page - When the user requests a URL for an advert that has expired, just redirect to the main job listing page. Show the advert but tell the user it has closed - Show the advert page but with a notification that the advert has closed. The issue I see with this is that the user will visit the page, see it's closed and then leave the site again, which would not be good for rankings.
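
    A fourth option, not in the list above: HTTP has a status code made for deliberate removal - 410 Gone - which tells crawlers the URL existed and was intentionally retired, rather than being broken. A minimal nginx sketch (the advert path and upstream name are hypothetical; the map block belongs in the http context):

        # Answer expired adverts with 410 Gone so search engines de-index
        # them quickly; live adverts fall through to the application.
        map $uri $advert_expired {
            default                       0;
            /jobs/1234-senior-developer   1;   # added when the advert expires
        }

        server {
            listen 80;
            location /jobs/ {
                if ($advert_expired) { return 410; }
                proxy_pass http://backend;     # hypothetical upstream
            }
        }

    The 410 page itself can still link to the main job listing, which combines this with the second option.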

    Read the article

  • Cron doesn't execute one of the scheduled jobs

    - by user288633
    I'm using a Lubuntu desktop, distribution Ubuntu 13.10, i686. This is my problem: one job in the list scheduled by cron has no effect, even though its execution is traced in /var/log/syslog. This is the relevant log line: Jun 4 09:06:01 kiosk CRON[14189]: (root) CMD (/usr/bin/xinput set-prop 12 --type=float "Coordinate Transformation Matrix" 0 -1 1 1 0 0 0 0 1 >> /tmp/mybackup.log) This job should rotate the touchscreen mapping. I tried different solutions: I wrapped the command in crontab with bash -c "...", I set "export DISPLAY=:0.0" before the command ("for graphics-related jobs in a Unix environment we need to set the DISPLAY first")... and many others! I know there are lots of details that affect cron execution (path, environment variables, special characters and so on) and I have no more ideas by now :( Could someone suggest an idea? Where can I find the problem? Thanks in advance!
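
    For what it's worth, jobs started by cron run with no X session, so DISPLAY (and usually XAUTHORITY) must be supplied on the job line itself - a hedged sketch, assuming the desktop session belongs to user kiosk on display :0. The device id 12 can also change between boots, so addressing the device by name is safer ("eGalax Touch" below is a placeholder; list the real names with xinput list):

        # m h dom mon dow  command   (a crontab entry must stay on one line)
        @reboot env DISPLAY=:0 XAUTHORITY=/home/kiosk/.Xauthority /usr/bin/xinput set-prop "eGalax Touch" --type=float "Coordinate Transformation Matrix" 0 -1 1 1 0 0 0 0 1 >> /tmp/mybackup.log 2>&1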

    Read the article

  • The Real Value Of Certification

    - by Brandye Barrington
    I read a quote recently by Rich Hein of CIO.com: "Certifications are, like most things in life: The more you put into them, the more you will get out." This is what we tell candidates all the time. The real value in obtaining a certification is the time spent preparing for the exam. All the hours spent reading books, practicing in hands-on environments, asking questions and searching for answers are valuable. They are valuable preparation for the exam, but also valuable preparation for your future job role and for your career. If your goal is just to pass an exam, you've missed a very important part of the value of certification. We receive so many questions through different forms of social media on whether or not certification will help candidates get jobs or get better jobs. Surveys conducted by us and by independent entities all point to the job and salary benefits of certification. However, a key part of that equation is whether a candidate can actually perform successfully in a job role. If the preparation time was used to practice, learn and master new skills rather than to memorize a brain dump, the candidate will probably perform successfully in their job role, and job opportunities and higher salary will likely follow. Candidates who do not show that initiative will likely not reap the full benefits of certification. Keep this in mind as you approach your next certification exam. You are preparing for a career, not an exam. This may help you to be more appreciative of the long hours spent studying!

    Read the article

  • Did I choose proper career path? [closed]

    - by Liston Catch
    I am a C# junior. My company has its own enterprise document-flow system; my job, along with 10 other programmers, is to write modules/add-ons for it. I am totally bored by this job. I don't like Microsoft's technology stack (don't hate me here, it's just subjective), it's plain boring, enterprise is boring (subjective again, everyone's tastes differ), days at this work last long and I am tired of it. In short - I don't like my job. In my spare time I do PHP development and I totally like it. I also do web design, so I am a LAMP kind of guy who loves his Ubuntu and does design as well. I know that most programmers don't do design themselves, so a person is usually either all about design or all about coding, but I enjoy both and do both. I often get interesting site orders; I love to make whole websites with all the design; I love the feeling of a site's completeness; I enjoy talking with customers. I like that PHP is simple and its skill cap is lower than that of Java, meaning I can become an expert in it after some years. But C# (and J2EE too) pay more, and I am doing really well in C#. But I don't like it. I could go for J2EE; the platform itself seems more fun to me than .NET, but EE development is still boring to me. It also seems better paid, and it's easier to find a job (since PHP is too common because of its easiness. But if you are an expert in something it doesn't matter, right? Just a higher skill cap.) Question: I want to go on with freelance. I want to have an opportunity to start my own web startup. Actually I already have a browser game, written by myself; it earns me around $500 per month, which I am really proud of since I am only 21 and still a noob at coding. I want to find a part-time PHP job, 3 days per week, so I can get some stable income, work in a team and learn from them; the social factor matters, as do insurance and diversity. I also want my total income (freelance + part-time job + maybe my own startup) to be not too much less than what I would earn in the EE development sector - at most 25% lower, but not more. Is this all possible if I stick with web development (LAMP + HTML/CSS/JS/jQuery/AJAX)? Or is it easier to reach my goals with EE development?

    Read the article

  • How easy is it to change languages/frameworks professionally? [closed]

    - by user924731
    Forgive me for asking a career-related question - I know that they can be frowned upon here, but I think this one is general enough to be useful to many people. My question is: how easy/difficult is it to get a job using language/framework B when your current job uses language/framework A? e.g. If you use C#/ASP.NET in your current job, how difficult would it be to get a job using Python/Django, or PHP/Zend, or whatever (the specifics of the example don't matter). Relatedly, if you work in client-side scripting, but perhaps work on server-side projects in your own time, how difficult would it be to switch to server-side professionally? So, to sum up: does the choice of languages/frameworks you use at work tend to box you in professionally?

    Read the article

  • Example map-reduce oozie program not working on CDH 4.5

    - by user2002748
    I have been using Hadoop (CDH 4.5) on my Mac for some time now, and run map-reduce jobs regularly. I installed Oozie recently (again, CDH 4.5) following the instructions at: http://archive.cloudera.com/cdh4/cdh/4/oozie-3.3.2-cdh4.5.0/DG_QuickStart.html, and tried to run the sample programs provided. However, it always fails with the following error. It looks like the workflow is not being run at all. The Console URL field in the job info is also empty. Could someone please help with this? The relevant snippet of the Oozie job log follows. 2014-06-10 17:27:18,414 INFO ActionStartXCommand:539 - USER[userXXX] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-140610172702069-oozie-usrX-W] ACTION[0000000-140610172702069-oozie-usrX-W@:start:] Start action [0000000-140610172702069-oozie-usrX-W@:start:] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10] 2014-06-10 17:27:18,417 WARN ActionStartXCommand:542 - USER[userXXX] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-140610172702069-oozie-usrX-W] ACTION[0000000-140610172702069-oozie-usrX-W@:start:] [***0000000-140610172702069-oozie-usrX-W@:start:***]Action status=DONE 2014-06-10 17:27:18,417 WARN ActionStartXCommand:542 - USER[userXXX] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-140610172702069-oozie-usrX-W] ACTION[0000000-140610172702069-oozie-usrX-W@:start:] [***0000000-140610172702069-oozie-usrX-W@:start:***]Action updated in DB! 2014-06-10 17:27:18,576 INFO ActionStartXCommand:539 - USER[userXXX] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-140610172702069-oozie-usrX-W] ACTION[0000000-140610172702069-oozie-usrX-W@mr-node] Start action [0000000-140610172702069-oozie-usrX-W@mr-node] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10] 2014-06-10 17:27:19,188 WARN MapReduceActionExecutor:542 - USER[userXXX] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-140610172702069-oozie-usrX-W] ACTION[0000000-140610172702069-oozie-usrX-W@mr-node] credentials is null for the action 2014-06-10 17:27:19,423 WARN ActionStartXCommand:542 - USER[userXXX] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-140610172702069-oozie-usrX-W] ACTION[0000000-140610172702069-oozie-usrX-W@mr-node] Error starting action [mr-node]. 
ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: Unknown rpc kind RPC_WRITABLE] org.apache.oozie.action.ActionExecutorException: JA009: Unknown rpc kind RPC_WRITABLE at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:418) at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:392) at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:773) at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:927) at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:211) at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:59) at org.apache.oozie.command.XCommand.call(XCommand.java:277) at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:326) at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:255) at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unknown rpc kind RPC_WRITABLE at org.apache.hadoop.ipc.Client.call(Client.java:1238) at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:225) at org.apache.hadoop.mapred.$Proxy30.getDelegationToken(Unknown Source) at org.apache.hadoop.mapred.JobClient.getDelegationToken(JobClient.java:2125) at org.apache.oozie.service.HadoopAccessorService.createJobClient(HadoopAccessorService.java:372) at org.apache.oozie.action.hadoop.JavaActionExecutor.createJobClient(JavaActionExecutor.java:970) at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:723) ... 10 more 2014-06-10 17:27:19,426 INFO ActionStartXCommand:539 - USER[userXXX] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-140610172702069-oozie-usrX-W] ACTION[0000000-140610172702069-oozie-usrX-W@mr-node] Next Retry, Attempt Number [1] in [60,000] milliseconds 2014-06-10 17:28:19,468 INFO ActionStartXCommand:539 - USER[userXXX] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-140610172702069-oozie-usrX-W] ACTION[0000000-140610172702069-oozie-usrX-W@mr-node] Start action [0000000-140610172702069-oozie-usrX-W@mr-node] with user-retry state : userRetryCount [0], userRetryMax [0], userRetryInterval [10]
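
    The JA009 "Unknown rpc kind RPC_WRITABLE" message usually means the jobTracker property points at an RPC endpoint speaking a different MapReduce generation - for instance an MRv1-style address while the cluster runs YARN. A hedged job.properties sketch for a YARN-based CDH4 install (host and ports are the CDH defaults and may differ locally):

        # job.properties for the Oozie map-reduce example (values are assumptions)
        nameNode=hdfs://localhost:8020
        # With MRv2/YARN, jobTracker must point at the ResourceManager (port 8032),
        # not the MRv1 JobTracker port -- mixing the two produces JA009 errors.
        jobTracker=localhost:8032
        queueName=default
        examplesRoot=examples
        oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce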

    Read the article

  • Rails development environment Resque.enqueue does not create jobs

    - by anton evangelatov
    I am having the same problem as Rails custom environment Resque.enqueue does not create jobs, but the solution there doesn't work for me. I'm using Resque for a couple of asynchronous jobs. It works just fine in the staging environment, but for some reason it stopped working in the development environment. For example, if I run the following: $ rails c development > Resque.enqueue(MyLovelyJob, 1) Nothing is enqueued. I check Resque using resque-web. If I run it on staging, it works just fine. $ rails c staging > Resque.enqueue(MyLovelyJob, 1) I have tried to compare the two environments, and they seem to use absolutely the same configuration (database.yml, config/environment, etc.), but development is still not working. If I do > Resque.enqueue(UpdateInstancesData, 2) > => true > Resque.info > => { > :pending => 0, > :processed => 0, > :queues => 0, > :workers => 1, > :working => 0, > :failed => 0, > :servers => [ > [0] "redis://127.0.0.1:6379/0" > ], > :environment => "development" > } Any suggestions where to look in order to debug this? I am running the application via foreman. My Procfile looks like: faye: rackup faye.ru -s thin -E production worker1: bundle exec rake resque:work QUEUE=* VERBOSE=1 worker2: bundle exec rake resque:work QUEUE=* VERBOSE=1 clock: bundle exec rake resque:scheduler VERBOSE=1 web: bundle exec rails s For staging, as mentioned, everything works and the log from foreman is: 17:03:42 clock.1 | 2013-06-26 17:03:42 Reloading Schedule 17:03:42 clock.1 | 2013-06-26 17:03:42 Loading Schedule 17:03:42 clock.1 | 2013-06-26 17:03:42 Scheduling logging_test 17:03:42 clock.1 | 2013-06-26 17:03:42 Schedules Loaded 17:03:43 worker2.1 | *** Starting worker ttttt-mbp.local:69573:* 17:03:43 worker2.1 | *** Registered signals 17:03:43 worker2.1 | *** Running before_first_fork hooks 17:03:43 worker1.1 | *** Starting worker ttttt-mbp.local:69572:* 17:03:43 worker1.1 | *** Registered signals 17:03:43 worker2.1 | *** Checking another_queue 17:03:43 worker2.1 | *** Checking anotherqueue 17:03:43 worker2.1 | *** Checking statused 17:03:43 worker2.1 | *** Found job on statused 17:03:43 worker2.1 | *** got: (Job{statused} | LoggingTest | ["57e89a1c1b24ce6866bcf5d0e1c07f01", {}]) 17:06:30 clock.1 | 2013-06-26 17:06:30 queueing LoggingTest (logging_test) 17:06:33 worker1.1 | *** Checking another_queue 17:06:33 worker2.1 | *** Checking another_queue 17:06:33 worker1.1 | *** Checking anotherqueue 17:06:33 worker2.1 | *** Checking anotherqueue 17:06:33 worker1.1 | *** Found job on anotherqueue 17:06:33 worker1.1 | *** got: (Job{anotherqueue} | LoggingTest | ["0d976869a945766e0cfeca83e7349305", {}]) 17:06:33 worker1.1 | *** resque-1.24.1: Processing anotherqueue since 1372259193 [LoggingTest] 17:06:33 worker1.1 | *** Running before_fork hooks with [(Job{anotherqueue} | LoggingTest | ["0d976869a945766e0cfeca83e7349305", {}])] 17:06:33 worker1.1 | *** resque-1.24.1: Forked 69955 at 1372259193 17:06:33 worker2.1 | *** resque-1.24.1: Forked 69956 at 1372259193 17:06:33 worker1.1 | *** Running after_fork hooks with [(Job{anotherqueue} | LoggingTest | ["0d976869a945766e0cfeca83e7349305", {}])] 17:06:33 worker1.1 | JOB :: LoggingTest 17:06:33 worker1.1 | 55555 17:06:33 worker1.1 | *** done: (Job{anotherqueue} | LoggingTest | ["0d976869a945766e0cfeca83e7349305", {}]) whereas for development it doesn't seem to enqueue and then find the job. If there is a job already in the queue (pending, left over from the staging environment), the workers from development don't process it. 
17:01:23 clock.1 | 2013-06-26 17:01:23 Reloading Schedule 17:01:23 clock.1 | 2013-06-26 17:01:23 Loading Schedule 17:01:23 clock.1 | 2013-06-26 17:01:23 Scheduling logging_test 17:01:23 clock.1 | 2013-06-26 17:01:23 Scheduling update_instances_data 17:01:23 clock.1 | 2013-06-26 17:01:23 Schedules Loaded 17:03:10 clock.1 | 2013-06-26 17:03:10 queueing LoggingTest (logging_test) 17:03:14 worker1.1 | *** Checking another_queue 17:03:14 worker2.1 | *** Checking another_queue 17:03:14 worker1.1 | *** Checking anotherqueue 17:03:14 worker2.1 | *** Checking anotherqueue 17:03:14 worker1.1 | *** Checking statused 17:03:14 worker2.1 | *** Checking statused
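
    One thing worth ruling out - not from the original post, just a common culprit: each Rails environment may be talking to a different Redis instance, database or namespace, in which case the job really is enqueued, just not where resque-web and the workers are looking. A quick console check, run in both rails c development and rails c staging (the queue name is a stand-in):

        require 'resque'

        puts Resque.redis.inspect            # host/port/db this environment uses
        puts Resque.redis.namespace.inspect  # usually :resque unless overridden
        puts Resque.queues.inspect           # queues this Redis currently knows
        puts Resque.size(:my_lovely_queue)   # pending jobs in one queue (name assumed)

    If the two environments print different connections or namespaces, the Resque initializer (or a REDIS_URL-style variable picked up by foreman) is the place to look.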

    Read the article

  • MySQL Gurus: How to pull a complex grid of data from MySQL database with one query?

    - by iopener
    Hopefully this is less complex than I think. I have one table of companies, another table of jobs, and a third table that contains a single entry for each employee in each job at each company. NOTE: Some companies won't have employees in some jobs, and some companies will have more than one employee in some jobs. The company table has companyid and companyname fields, the job table has jobid and jobtitle fields, and the employee table has employeeid, companyid, jobid and employeename fields. I want to build a table like this:

              +-----------+-----------+-----------+
              | Company A | Company B | Company C |
        ------+-----------+-----------+-----------+
        Job A | Emp 1     | Emp 2     |           |
        ------+-----------+-----------+-----------+
        Job B | Emp 3     |           | Emp 4     |
              |           |           | Emp 5     |
        ------+-----------+-----------+-----------+
        Job C |           | Emp 6     |           |
              |           | Emp 7     |           |
              |           | Emp 8     |           |
        ------+-----------+-----------+-----------+

    I had previously been looping through a result set of jobs, and for each job, looping through a result set of each company, and for each company, looping through each employee and printing it in a table (gross, but performance was not supposed to be a consideration). The app has grown in popularity, and now we have 100 companies and hundreds of jobs, and the server is crapping out (all the id fields are indexed). Any suggestions on how to write a single query to get this data? I don't need the company names or job titles (obviously), but I do need some way to identify where each row from the result should be printed. I'm imagining a result set that just contains a long list of joined employees, and I could write a loop to use the companyid and employeeid values to tell me when to create a new cell or table row. This works as long as there aren't ZERO employees; I would need a NULL employee name for that, I think? Am I completely on the wrong track? Thanks in advance for any ideas!
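
    For a fixed set of companies (a SQL pivot always needs its column list fixed at query-writing time, or generated by the app from the company table), one query can produce exactly one row per job with the employees packed into each cell - a sketch against the field names in the question:

        -- One row per job, one column per company; GROUP_CONCAT ignores NULLs,
        -- and the LEFT JOIN keeps jobs that have no employees at all.
        SELECT j.jobtitle,
               GROUP_CONCAT(CASE WHEN e.companyid = 1 THEN e.employeename END
                            SEPARATOR ', ') AS company_a,
               GROUP_CONCAT(CASE WHEN e.companyid = 2 THEN e.employeename END
                            SEPARATOR ', ') AS company_b,
               GROUP_CONCAT(CASE WHEN e.companyid = 3 THEN e.employeename END
                            SEPARATOR ', ') AS company_c
        FROM job j
        LEFT JOIN employee e ON e.jobid = j.jobid
        GROUP BY j.jobid, j.jobtitle
        ORDER BY j.jobtitle;

    With 100 companies the application would generate the repeated GROUP_CONCAT lines from a query over the company table, but the database still does all the grouping in one pass.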

    Read the article

  • Sorted queue with dropping out elements

    - by ffriend
    I have a list of jobs and a queue of workers waiting for these jobs. All the jobs are the same, but the workers are different and sorted by their ability to perform the job. That is, the first person does this job best of all, the second does it just a little bit worse, and so on. A job is always assigned to the person with the highest skills among those who are free at that moment. When a person is assigned a job, he drops out of the queue for some time. But when he is done, he gets back to his position. So, for example, at some moment in time the worker queue looks like: [x, x, .83, x, .7, .63, .55, .54, .48, ...] where the x's stand for missing workers and the numbers show the skill levels of the remaining workers. When there's a new job, it is assigned to the 3rd worker as the one with the highest skill among the available workers. So the next moment the queue looks like: [x, x, x, x, .7, .63, .55, .54, .48, ...] Let's say that at this moment worker #2 finishes his job and gets back into the list: [x, .91, x, x, .7, .63, .55, .54, .48, ...] I hope the process is completely clear now. My question is what algorithm and data structure to use to implement quick search and deletion of a worker and insertion back into his position. For the moment the best approach I can see is to use a Fibonacci heap, which has amortized O(log n) for deleting the minimal element (assigning a job and deleting a worker from the queue) and O(1) for inserting him back, which is pretty good. But is there an even better algorithm / data structure that takes into account the fact that the elements are already sorted and only drop out of the queue from time to time?
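
    For comparison - and assuming skill scores are distinct, as the example implies - a plain balanced tree already does everything needed here with worst-case O(log n) takes and O(log n) returns, and no amortization caveats. A minimal Java sketch (class and method names are mine):

        import java.util.Map;
        import java.util.TreeMap;

        // Free workers keyed by skill; the best free worker is the maximum key.
        // Use a composite key to break ties if skills are not unique.
        class WorkerPool {
            private final TreeMap<Double, String> free = new TreeMap<>();

            void addWorker(double skill, String name) { free.put(skill, name); }

            // Assign a job: remove and return the most skilled free worker.
            String assignJob() {
                Map.Entry<Double, String> best = free.pollLastEntry();
                return best == null ? null : best.getValue();
            }

            // Worker finished: he re-enters at his fixed skill position.
            void workerDone(double skill, String name) { free.put(skill, name); }
        }

    A Fibonacci heap only pays off when inserts vastly outnumber deletes; here every assignment is eventually matched by a re-insert, so the simpler structure costs the same asymptotically.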

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. There's the backup server.. A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried: Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons Setting Prefer Mounted Volumes = no in the Job Definition Setting multiple devices in the Autochanger resource. Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution.. 
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }

    Read the article

  • Mysql server won't start - no logs

    - by Owen
    After a restart, mysql won't start. sudo service mysql start gives start: Job failed to start and the logs are empty, so I have no idea where to start. I'm pretty sure permissions problems are taken care of. Edit: All disks have at least 1G of space and sh -x /etc/init.d/mysql start gives me: + set -e + basename /etc/init.d/mysql + INITSCRIPT=mysql + JOB=mysql + [ mysql = upstart-job ] + [ -z start ] + COMMAND=start + shift + [ -z ] + ECHO=echo + echo Rather than invoking init scripts through /etc/init.d, use the service(8) Rather than invoking init scripts through /etc/init.d, use the service(8) + echo utility, e.g. service mysql start utility, e.g. service mysql start + echo + echo Since the script you are attempting to invoke has been converted to an Since the script you are attempting to invoke has been converted to an + echo Upstart job, you may also use the start(8) utility, e.g. start mysql Upstart job, you may also use the start(8) utility, e.g. start mysql + grep -q start/ + status mysql + [ -z ] + [ start = stop ] + [ -n ] + start mysql start: Rejected send message, 1 matched rules; type="method_call", sender=":1.105" (uid=1000 pid=3208 comm="start mysql ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")

    Read the article

  • Symantec Backup Exec 12 Tape Alert.

    - by Adam
    Every day, I run 5 backups using 6 tapes. Each day, when I run the inventory, I get a Tape Alert error. This occurs every day, on the same job. The error is: Job 'Inventory Daily ********' has reported Multiple Tape Alerts on server '******' Please refer to job log *****.xml for more information. When I look at the job log, the Utility Job Information says: The device has reported the following TapeAlert diagnostic information Information - The library has been manually turned offline and is unavailable for use. Robotic library for device: PV132T 500 Warning - Library security has been compromised. Robotic library for device: PV132T 500. Critical - The library has detected an inconsistency in its inventory. 1. Redo the library inventory to correct the inconsistency. 2. Restart the operation. Check the application's user manual or hardware user manual for specific instructions on redoing the library inventory. Robotic library for device: PV132T 500. When I run the same inventory a second time, the job completes successfully. I am using Symantec Backup Exec 12 running on Windows Server 2008, with a Dell PowerVault 132T 500 tape library. If anyone can help me resolve this problem, it would be very much appreciated.

    Read the article

  • Scheduling RichCopy Jobs

    - by Ian B
    Has anyone used the timer feature of RichCopy? I have a job that works fine when I start it manually. However, when I schedule the job and click run, the app appears to be waiting for the scheduled time to elapse yet never fires. Interestingly enough, when I stop the job, the copy starts. Does anyone have any experience with using the RichCopy timer? IanB

    Read the article

  • Dynamically building and updating Histograms with JFreeChart

    - by job
    I've got a stream of incoming data that I would like to plot using a simple histogram. I don't know the range of values, or the proper resolution or bin width to use for the histogram. SimpleHistogramDataset provides some of this functionality, but I don't want to have to deal with catching exceptions in order to add new bins if a new value isn't covered. In addition, it doesn't easily allow me to rebuild the histogram using a different bin width (perhaps integer multiples of some initial set width). Is there an easy way to accomplish this with JFreeChart or some alternate charting library, or am I going to have to write my own class here?
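
    The bookkeeping is small enough to roll by hand: key counts by the bin index floor(value / width), so bins appear on demand and no range needs declaring up front, and rebinning to an integer multiple of the width is a cheap re-aggregation. A sketch of that bookkeeping (not JFreeChart API - its output would be fed into whatever dataset you chart):

        import java.util.HashMap;
        import java.util.Map;

        // Incremental histogram: bins are created on first use, so the
        // value range need not be known in advance.
        class DynamicHistogram {
            private final double binWidth;
            private final Map<Long, Long> counts = new HashMap<>();

            DynamicHistogram(double binWidth) { this.binWidth = binWidth; }

            void add(double value) {
                long bin = (long) Math.floor(value / binWidth);
                counts.merge(bin, 1L, Long::sum);
            }

            // Re-aggregate into bins 'factor' times wider (factor >= 1).
            DynamicHistogram rebin(int factor) {
                DynamicHistogram h = new DynamicHistogram(binWidth * factor);
                counts.forEach((bin, c) ->
                    h.counts.merge(Math.floorDiv(bin, (long) factor), c, Long::sum));
                return h;
            }
        }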

    Read the article

  • Output of gcc -fdump-tree-original

    - by Job
    If I dump the code generated by GCC for a virtual destructor (with -fdump-tree-original), I get something like this: ;; Function virtual Foo::~Foo() (null) ;; enabled by -tree-original { <<cleanup_point <<< Unknown tree: expr_stmt (void) (((struct Foo *) this)->_vptr.Foo = &_ZTV3Foo + 8) >>> >>; } <D.20148>:; if ((bool) (__in_chrg & 1)) { <<cleanup_point <<< Unknown tree: expr_stmt operator delete ((void *) this) >>> >>; } My question is: where is the code after "<D.20148>:;" located? It is outside of the destructor, so when is this code executed?

    Read the article

  • "EXC_BAD_ACCESS: Unable to restore previously selected frame" Error, Array size?

    - by Job
    Hi there, I have an algorithm for creating the sieve of Eratosthenes and pulling primes from it. It lets you enter a max value for the sieve, and the algorithm gives you the primes below that value and stores them in a C-style array. Problem: Everything works fine with values up to 500,000; however, when I enter a larger value while running, it gives me the following error message in Xcode: Program received signal: "EXC_BAD_ACCESS". warning: Unable to restore previously selected frame. Data Formatters temporarily unavailable, will re-try after a 'continue'. (Not safe to call dlopen at this time.) My first idea was that I didn't use large enough variables, but as I am using 'unsigned long long int', this should not be the problem. Also, the debugger points me to a place in my code where an element of the array gets assigned a value. Therefore I wonder: is there a maximum size for an array? If yes: should I use NSArray instead? If no: what is causing this error, based on this information? EDIT: This is what the code looks like (it's not complete, for it fails at the last line posted). I'm using garbage collection. /*--------------------------SET UP--------------------------*/ unsigned long long int upperLimit = 550000; // unsigned long long int sieve[upperLimit]; unsigned long long int primes[upperLimit]; unsigned long long int indexCEX; unsigned long long int primesCounter = 0; // Fill sieve with 2 to upperLimit for(unsigned long long int indexA = 0; indexA < upperLimit-1; ++indexA) { sieve[indexA] = indexA+2; } unsigned long long int prime = 2; /*-------------------------CHECK & FIND----------------------------*/ while(!((prime*prime) > upperLimit)) { //check off all multiples of prime for(unsigned long long int indexB = prime-2; indexB < upperLimit-1; ++indexB) { // Multiple of prime = 0 if(sieve[indexB] != 0) { if(sieve[indexB] % prime == 0) { sieve[indexB] = 0; } } } /*---------------- Search for next prime ---------------*/ // index of current prime + 1 unsigned long long int indexC = prime - 1; while(sieve[indexC] == 0) { ++indexC; } prime = sieve[indexC]; // Store prime in primes[] primes[primesCounter] = prime; // This is where the code fails if upperLimit > 500000 ++primesCounter; indexCEX = indexC + 1; } As you may or may not see, I am -very much- a beginner. Any other suggestions are welcome of course :)
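
    The likely culprit is not an array limit but the stack: two local arrays of 550,000 unsigned long long each come to roughly 8.8 MB, which overruns a typical default stack (about 8 MB for the main thread on Mac OS X, much less on secondary threads) - hence EXC_BAD_ACCESS with no obviously bad index. Moving the arrays to the heap sidesteps the limit; a minimal sketch of just that change:

        #include <stdlib.h>

        void runSieve(void) {
            unsigned long long upperLimit = 550000ULL;

            /* calloc zero-initializes and allocates from the heap, so the
               size is bounded by available memory, not by stack size. */
            unsigned long long *sieve  = calloc(upperLimit, sizeof *sieve);
            unsigned long long *primes = calloc(upperLimit, sizeof *primes);
            if (sieve == NULL || primes == NULL)
                return;                     /* allocation failed */

            /* ... the same sieve logic as above, unchanged, indexing
               sieve[] and primes[] ... */

            free(sieve);
            free(primes);
        }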

    Read the article

  • Rename SSRS Subscription GUIDs

    - by Registered User
    When you create a job schedule in SSRS 2008, the server creates a SQL Server Agent job with a GUID name. Is there a script that can be run to rename the shared job schedule in the ReportServer database to another name? I would like to do this to make it easier to organize subscriptions in the Job Activity Monitor since I have a very large number of subscriptions mixed in with other jobs on my database server.
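
    Renaming the Agent jobs themselves is risky - Reporting Services looks each job up by that GUID name (it is the ScheduleID) - but the monitor can be made readable by mapping the GUIDs to their reports instead. A hedged sketch, assuming the default ReportServer database name and the SSRS 2008 schema:

        -- Map each GUID-named SQL Agent job to the report it fires.
        SELECT j.name          AS AgentJobGuid,
               c.[Path]        AS ReportPath,
               s.[Description] AS SubscriptionDescription
        FROM msdb.dbo.sysjobs AS j
        JOIN ReportServer.dbo.ReportSchedule AS rs
             ON j.name = CONVERT(nvarchar(36), rs.ScheduleID)
        JOIN ReportServer.dbo.Subscriptions AS s
             ON s.SubscriptionID = rs.SubscriptionID
        JOIN ReportServer.dbo.[Catalog] AS c
             ON c.ItemID = s.Report_OID
        ORDER BY c.[Path];

    Keeping a query like this beside the Job Activity Monitor organizes the subscriptions without touching the jobs SSRS owns.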

    Read the article

  • How do I add an order column to a careers table?

    - by Mahran Elneel
    This is the table I want to create - and how do I assign a job to the first position? Columns: job_id (dynamic), Job Title (text), Job Description (text), Order (a combo box to choose which job it comes after, or the first position). On the website I created this table, but I cannot choose which job is shown first on my website.
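
    One way to model "first position / after job X" - a sketch, with MySQL assumed and column names taken from the question - is an integer ordering column that the insert shifts:

        CREATE TABLE careers (
            job_id          INT AUTO_INCREMENT PRIMARY KEY,
            job_title       TEXT,
            job_description TEXT,
            display_order   INT NOT NULL
        );

        -- Put a new job in the first position: push the rest down, insert at 1.
        UPDATE careers SET display_order = display_order + 1;
        INSERT INTO careers (job_title, job_description, display_order)
        VALUES ('First Job', 'Shown at the top of the site', 1);

        -- The site then lists jobs in the chosen order:
        SELECT job_id, job_title FROM careers ORDER BY display_order;

    Inserting "after job X" works the same way: shift only the rows whose display_order is greater than X's, then insert at X's order + 1.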

    Read the article

  • what's wrong with my producer-consumer queue design?

    - by toasteroven
    I'm starting with the C# code example here. I'm trying to adapt it for a couple of reasons: 1) in my scenario, all tasks will be put in the queue up-front before consumers start, and 2) I wanted to abstract the worker into a separate class instead of having raw Thread members within the WorkQueue class. My queue doesn't seem to dispose of itself though, it just hangs, and when I break in Visual Studio it's stuck on the _th.Join() line for WorkerThread #1. Also, is there a better way to organize this? Something about exposing the WaitOne() and Join() methods seems wrong, but I couldn't think of an appropriate way to let the WorkerThread interact with the queue. Also, an aside - if I call q.Start(#) at the top of the using block, only some of the threads ever kick in (e.g. threads 1, 2, and 8 process every task). Why is this? Is it a race condition of some sort, or am I doing something wrong? using System; using System.Collections.Generic; using System.Text; using System.Messaging; using System.Threading; using System.Linq; namespace QueueTest { class Program { static void Main(string[] args) { using (WorkQueue q = new WorkQueue()) { q.Finished += new Action(delegate { Console.WriteLine("All jobs finished"); }); Random r = new Random(); foreach (int i in Enumerable.Range(1, 10)) q.Enqueue(r.Next(100, 500)); Console.WriteLine("All jobs queued"); q.Start(8); } } } class WorkQueue : IDisposable { private Queue<int> _jobs = new Queue<int>(); private int _job_count; private EventWaitHandle _wh = new AutoResetEvent(false); private object _lock = new object(); private List<WorkerThread> _th; public event Action Finished; public WorkQueue() { } public void Start(int num_threads) { _job_count = _jobs.Count; _th = new List<WorkerThread>(num_threads); foreach (int i in Enumerable.Range(1, num_threads)) { _th.Add(new WorkerThread(i, this)); _th[_th.Count - 1].JobFinished += new Action<int>(WorkQueue_JobFinished); } } void WorkQueue_JobFinished(int obj) { lock (_lock) { _job_count--; if (_job_count == 0 && Finished != null) Finished(); } } public void Enqueue(int job) { lock (_lock) _jobs.Enqueue(job); _wh.Set(); } public void Dispose() { Enqueue(Int32.MinValue); _th.ForEach(th => th.Join()); _wh.Close(); } public int GetNextJob() { lock (_lock) { if (_jobs.Count > 0) return _jobs.Dequeue(); else return Int32.MinValue; } } public void WaitOne() { _wh.WaitOne(); } } class WorkerThread { private Thread _th; private WorkQueue _q; private int _i; public event Action<int> JobFinished; public WorkerThread(int i, WorkQueue q) { _i = i; _q = q; _th = new Thread(DoWork); _th.Start(); } public void Join() { _th.Join(); } private void DoWork() { while (true) { int job = _q.GetNextJob(); if (job != Int32.MinValue) { Console.WriteLine("Thread {0} Got job {1}", _i, job); Thread.Sleep(job * 10); // in reality would do actual work here if (JobFinished != null) JobFinished(job); } else { Console.WriteLine("Thread {0} no job available", _i); _q.WaitOne(); } } } } }

    Read the article

  • SQLAuthority News – Pinal Dave: Blogger, MVP and now Interviewee by Michael J Swart

    - by pinaldave
    Michael J. Swart is a very unique person. I have often exchanged emails with him and have also used a couple of his scripts in my presentations (with his permission). Every time I give a spatial database presentation, I always start with his script where he has drawn the wonderful image of Botticelli's Birth of Venus. I often think he is more of a creative artist than an IT professional. However, if you read his blog posts and articles, they are top notch, and each article is as creative as his caricatures. He is wonderful, inspiring, creative and, most importantly, very humble. He recently interviewed me and asked some very interesting questions. To answer them, I had to share some interesting aspects of my life which I had never shared in any interview before. He got me to share the following interesting facts. Pinal Dave Caricatures Read my Interview Here are a few of the questions I answered on his blog: How I met my wife? Best moments of my life? How to pronounce my last name? Who inspired me? English as a Third Language. I am also thankful to Michael for drawing my caricature. I really liked it and I am very glad that he took the time to do so. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, Readers Question, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • A whole site for reviewing SQL Server MVP Deep Dives

    - by Rob Farley
    This book just keeps amazing me. Not only as I read through some chapters for the first time, and others for the second and third times, but also as I read reviews of it written by other people. The guys over at http://sqlperspectives.wordpress.com are a prime example. They've been going through each chapter, each writing a review on it, and often getting a guest blogger to write something as well - and they're clearly getting a lot of stuff out of this brilliant book. Back when I first heard about them doing this, I had offered to be involved, and recently did an interview with them about my chapters (chapter seven and chapter forty). That interview can be found at http://sqlperspectives.wordpress.com/2010/03/20/interview-with-rob-farley/ - and covers how I got into databases, and how I think the database roles in the IT industry are changing. If you don't have a copy of SQL Server MVP Deep Dives yet, why not get a copy from http://www.sqlservermvpdeepdives.com (or persuade your local bookstore to get some copies in), and read through chapters with these guys? Treat it like a book club, discussing each chapter with others (guest blogging perhaps?), and you'll probably end up getting even more out of it. Remember that the proceeds of the book go to charity (instead of the authors - we get nothing), so you don't need to consider that you're splashing out on a treat for yourself. Think of the kids helped by War Child instead.

    Read the article

< Previous Page | 66 67 68 69 70 71 72 73 74 75 76 77  | Next Page >