Search Results

Search found 7239 results on 290 pages for 'job satisfaction'.

Page 58/290 | < Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >

  • ARCserve creates additional jobs

    - by wullxz
    We're running CA ARCserve Backup r12.5 (Build 5854) - Small Business Server Edition on our Small Business Server 2008. There is a daily job which saves the system drive, Exchange and 3 databases to our backup storage. Every time it runs, this daily job seems to create these additional jobs, and I don't know why. Can anybody tell me why this is happening? (I'm sorry the screenshot is in German... "Ergänzungsjob" means something like "supplementary job" or "follow-up job".)

    Read the article

  • Why does restoring a SQL DB from backup keep referring to the backup's original location?

    - by rm
    I did a full DB backup to C:\Backups\MyDb.bak, then set up a job to back the DB up incrementally every day to that same location. Later I moved the backup files to X:\Backups\ and changed the target directory in the backup job. Now when I try to restore the database from backup, it keeps referring to C:\Backups\MyDb.bak, but that file is no longer there. How do I fix this without having to move the backup files back to C:\Backups?

    Read the article

  • Sarg report error

    - by amyassin
    I have a proxy server that runs Ubuntu Server 11.10 with Squid 2.7.STABLE9. I installed sarg (version 2.3.1, Sep-18-2010) with the ordinary apt-get install and added a cron job to generate a report for the current day every 5 minutes (overwriting the one from 5 minutes earlier):
    */5 * * * * /root/proxy_report.sh
    The content of /root/proxy_report.sh is:
    #!/bin/bash
    /usr/bin/sarg -nd `date +"%d/%m/%Y"` > /dev/null 2>&1
    I added another cron job to generate a full report every hour at :32 (so it doesn't collide with the 5-minute job):
    */32 * * * * /root/proxy_report_full.sh
    The content of /root/proxy_report_full.sh is:
    #!/bin/bash
    /usr/bin/sarg -n > /dev/null 2>&1
    I also added a small script to /etc/rc.local, run at startup, that removes yesterday's full report (the full report ending yesterday, which won't be overwritten by today's new full report):
    /usr/bin/rm_yesterday.sh &>> /var/log/rm_yesterday
    where /usr/bin/rm_yesterday.sh is:
    #!/bin/bash
    find /var/www/sarg/ | grep `date -d Apr1 +"%Y%b%d"`-* | grep -v `date +"%Y%b%d"` | xargs rm -rf
    (* Apr1 is the starting date of the proxy... ** I've placed it in /usr/bin to be mounted early at startup...)
    That arrangement went fine for about a month and a half, except for one time when I noticed some errors and reports weren't generated; I fixed that by adding an offset (the two minutes in the :32 of the second cron job). However, it has now stopped generating reports altogether. Trying to generate one manually gives the following error:
    root@proxy-server:~# sarg -n
    SARG: getword_atoll loop detected after 3 bytes.
    SARG: Line="154 192.168.10.40 TCP_MISS/200 39 CONNECT www.google.com"
    SARG: Record="154 192.168.10.40 TCP_MISS/200 39 CONNECT www.google.com"
    SARG: searching for 'x2f'
    SARG: getword backtrace:
    SARG: 1:sarg() [0x8050a4a]
    SARG: 2:sarg() [0x8050c8b]
    SARG: 3:sarg() [0x804fc2e]
    SARG: 4:/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x129113]
    SARG: 5:sarg() [0x80501c9]
    SARG: Maybe you have a broken date in your /var/log/squid/access.log file
    When I looked in the /var/log/squid/ folder, I noticed that it contains some rotated logs:
    root@proxy-server:~# ls /var/log/squid/
    access.log  access.log.1  cache.log  cache.log.1  store.log  store.log.1
    So maybe sarg installed logrotate with it? Or does it come with standard Ubuntu? I don't remember installing it manually. The question is: what could have gone wrong? Does it have something to do with rotating the log? How can I trace the error and start generating reports again?

    Read the article

  • A way to rename images keeping the first 10 characters of the image name

    - by Chris
    Hi there, I have a very big job to do. I have about 930 pictures with names like "5210841 Tuinset Senator.jpg", "5210898 Traptrede Premium.jpg", etc. I'm looking for a way to rename these pictures without losing the number part, so the first one would become 5210841.jpg and the second 5210898.jpg. Can you think of a program which can do this job? It's for a Windows platform.
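
    A small script can do this kind of rename. Below is a minimal Python sketch of the idea (the folder path is hypothetical, and it assumes every file name starts with its number followed by the descriptive text):

    import os
    import re

    # Hypothetical folder holding the ~930 pictures; adjust to the real path.
    folder = r"C:\pictures"

    for name in os.listdir(folder):
        base, ext = os.path.splitext(name)
        if ext.lower() != ".jpg":
            continue
        # Keep only the leading digits, e.g. "5210841 Tuinset Senator" -> "5210841".
        match = re.match(r"\d+", base)
        if match:
            new_name = match.group(0) + ext
            if new_name != name:
                os.rename(os.path.join(folder, name),
                          os.path.join(folder, new_name))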

    Read the article

  • Bacula Volume Retention and Automatic Recycling

    - by Kyle Brandt
    Will a Bacula volume be recycled if (1) no more volumes are appendable, (2) the date of the last job on it is past the volume retention period, and (3) jobs on that volume are not past the job and file retention periods? This is the way the manual reads to me, but I saw a post saying all retention periods have to be up before a volume is recycled (which makes less sense to me). Does anyone know for sure from experience?

    Read the article

  • Localhost has just stopped working (using xampp)

    - by Joe Taylor
    I installed XAMPP to use for local development of a Drupal site. It's been working fine out of the box until now. The main XAMPP localhost welcome menu loads, but my subdirectory (localhost/drupal) doesn't - it just spins in the browser for ages and nothing happens, just a blank screen. I've tried the edit people suggest in the hosts file, but that hasn't worked, and I'm getting no errors so I'm not sure what to do. Does anyone have any ideas what might be wrong? PS: I'm running Windows 7. Edit - log files:
    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
    [Tue Nov 05 20:52:07.242454 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
    [Tue Nov 05 20:52:07.331459 2013] [core:warn] [pid 8432:tid 260] AH00098: pid file C:/xampp/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
    [Tue Nov 05 20:52:07.820487 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
    [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00455: Apache/2.4.4 (Win32) OpenSSL/0.9.8y PHP/5.4.16 configured -- resuming normal operations
    [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00456: Server built: Feb 23 2013 13:07:34
    [Tue Nov 05 20:52:07.898492 2013] [core:notice] [pid 8432:tid 260] AH00094: Command line: 'c:\xampp\apache\bin\httpd.exe -d C:/xampp/apache'
    [Tue Nov 05 20:52:07.905492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00418: Parent: Created child process 7588
    [Tue Nov 05 20:52:08.882548 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
    [Tue Nov 05 20:52:09.467582 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
    [Tue Nov 05 20:52:09.534585 2013] [mpm_winnt:notice] [pid 7588:tid 272] AH00354: Child: Starting 150 worker threads.
    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
    Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41

    Read the article

  • Problem installing Canon MF5880dn

    - by Paul
    Just got a Canon MF5880dn and cannot print to it from SUSE 11.1.
    A MacBook prints to it without issue.
    ping 192.168.1.103 - no problem.
    CUPS sees it as Canon MF5880/MF5840 PCL at URI socket://192.168.1.103:9100.
    A CUPS test print appears to submit and complete the job, but there is no action from the printer.
    YaST also seems to install the printer correctly.
    CQue2 also seems to install the printer correctly.
    All attempts to print yield the same result: SUSE indicates the job processed correctly and completely, but no printing happens.
    The firewall is off.
    http://192.168.1.103 in Firefox gives me the printer config menus correctly.
    What have I failed to do?

    Read the article

  • Does anybody know a replacement for TorrentSpy?

    - by Ither
    Hi, I've used TorrentSpy since 2004 and it does a pretty good job, but it is very slow with torrent files that contain hundreds or thousands of files. Does anybody know a better tool for the same job on XP? If you're not familiar with it, TorrentSpy shows the data contained in a torrent file in a readable way: tracker URLs, number of files, their sizes, number of complete and incomplete downloaders, etc. Edit: What I want is something that can be used from Explorer by right-clicking on the torrent file, like MediaInfo does with media files.

    Read the article

  • sudo with -u via ssh -t in a crontab

    - by DJK_devel
    I'm trying to create a cron job that uses ssh to log in to a remote server and run a script as a different user. I try:
    * * * * * source $HOME/.keychain/$HOST-sh && sudo -u $USER $PATH/$SCRIPT
    but this doesn't work because there is no -t option specified for ssh. The cron job needs to source the keychain file in order to work without a password, but I'm not sure where to include the -t option for ssh in this instance.

    Read the article

  • Is there a way to replicate very large file shares in real-time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a Samba file share. Here's how the folder structure looks:
    \server\Version.1
    \server\Version.2
    \server\Version.3
    ...
    \server\Version.24
    The contents of each new folder usually don't change very much compared to the last one, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out about it. It's actually been used for years and is so incredibly simple that anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files), and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant! Now, to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible -- think hot spare, disaster recovery, etc. My first thought was rsync. Total failure: rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently. The best thing I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression" -- after reading the background information on how it works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh! One approach I'm looking at is this -- say it's 5 AM and the cron job finishes: (1) the new Version.5 folder arrives on the local server; (2) SSH to the remote server and copy Version.4 to Version.5; (3) run rsync on the local server pushing changes to the remote server -- rsync finally knows to do a differential copy between Version.4 and Version.5. Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?
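
    For steps 2 and 3 of the approach above, rsync's --link-dest option can do both in one pass: it builds the new remote folder as hard links to the previous version's files and only transfers what actually changed. A minimal Python sketch of that idea (host name and paths are hypothetical; it assumes rsync over SSH is already set up between the two Linux servers):

    import subprocess

    # Hypothetical values -- adjust to the real share layout and remote host.
    local_root = "/srv/share"
    remote = "backup.example.com:/srv/share"
    new_version = "Version.5"
    previous_version = "Version.4"

    # --link-dest makes unchanged files hard links to the previous remote copy,
    # so only the differences between Version.4 and Version.5 cross the WAN.
    subprocess.check_call([
        "rsync", "-a", "--delete",
        "--link-dest=../" + previous_version,
        "%s/%s/" % (local_root, new_version),
        "%s/%s/" % (remote, new_version),
    ])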

    Read the article

  • When RAID 10 is SLOWER than RAID 1, why?

    - by Paul
    We have a Dell 2950 with PERC and 14 external SAS 15K 73GB drives. An Oracle database job takes 3 hours to run with the drives set as hardware RAID 10 (striped across 7 mirrored pairs). The same job with the drives in RAID 1 takes only 1 hour. OS is Win 2008 R2 I think. Before we change the RAID level (with considerable downtime) on the production box, does anyone know why we're seeing this odd result, and if there's a better way to fix it?

    Read the article

  • Getting into a technology which requires experience when you have no experience

    - by dotnetdev
    It seems that SharePoint is a technology which is very hard to get into. All the jobs in this tech require experience working with it (e.g. 2 years' development experience in MOSS). If I want to get into it but have no job that uses the tech, how can I get the experience needed to land a job that requires it? Job ads state you need "2 years professional experience in MOSS 2007", but if you have never done it, you won't get the job. The only possible way is to do this at home and not in a team, but if you keep working in the meantime, that will negate this (it's not like teamworking is tech-specific). Many people think that if you decide to build a project at home you're just going to play about aimlessly rather than work to specs (whereas in my current situation it's vice versa), but if you're dedicated, like me, you would write them - just not with the same presentation. Would employers treat experience gained at home as professional experience? BizTalk is another prime example of this. Thanks

    Read the article

  • Parsing XML feed into Ruby object using nokogiri?

    - by Galen King
    Hi all, I am pretty green with coding in Ruby but am trying to pull an XML feed into a Ruby object as follows (ignore the ugly code please):
    <% doc = Nokogiri::XML(open("http://api.workflowmax.com/job.api/current?apiKey=#{@feed.service.api_key}&accountKey=#{@feed.service.account_key}")) %>
    <% doc.xpath('//Jobs/Job').each do |node| %>
      <h2><%= node['name'].text %></h2>
      <p><%= node['description'].text %></p>
    <% end %>
    Basically I want to iterate through each Job and output the name, description etc. What am I missing? Many thanks, Galen

    Read the article

  • How to create a Task Scheduler App.

    - by Mike
    I have been tasked with (ha) creating an application that will allow users to schedule a command-line app we have, with a parameter. The command-line app takes an XML file and "runs it". So, bottom line, I either need to create a Windows service or learn how to interact with the Task Scheduler service already running on the box (version 1, XP/2003). At first I thought it would be easy: have a service run, and when a job is submitted, calculate the time between now and the run time and set up a timer to wait that amount of time. This is better than checking every minute whether it's time to run. Where I hit a wall is that I realized I don't know how to communicate with a running Windows service - except maybe by creating a file with the details and having the service load it via a file watcher and modify the schedule. So the underlying questions are: how can I execute this pseudocode from a client - serviceThatIsRunning.Add(Job) - or interact with the Task Scheduler, or create .job files, using C# 3.5?

    Read the article

  • Get http status in Qt WebKit

    - by RR
    Hello all, what I would like to implement is: (1) use Qt's WebView (part of QtWebKit) to access some page; (2) show a specified HTML page if I get an HTTP 4xx or 5xx status (e.g. HTTP 404, 500); (3) also show a specified page when the network connection is unavailable. For now, I have only done job 1... For job 2, how do I get the HTTP status from the WebView? For job 3, I'm looking at the QUrl APIs now. Does anyone have ideas or experience with this?

    Read the article

  • python __getattr__ help

    - by Stefanos Tux Zacharakis
    Reading a book, I came across this code:
    # module person.py
    class Person:
        def __init__(self, name, job=None, pay=0):
            self.name = name
            self.job = job
            self.pay = pay
        def lastName(self):
            return self.name.split()[-1]
        def giveRaise(self, percent):
            self.pay = int(self.pay * (1 + percent))
        def __str__(self):
            return "[Person: %s, %s]" % (self.name, self.pay)

    class Manager():
        def __init__(self, name, pay):
            self.person = Person(name, "mgr", pay)
        def giveRaise(self, percent, bonus=.10):
            self.person.giveRaise(percent + bonus)
        def __getattr__(self, attr):
            return getattr(self.person, attr)
        def __str__(self):
            return str(self.person)
    It does what I want it to do, but I do not understand the __getattr__ function in the Manager class. I know that it delegates all other attribute lookups to the Person class, but I do not understand how it works. For example, why from the Person class, when I do not explicitly tell it to? (The person module is different from the Person class.) Any help is highly appreciated :)
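
    For what it's worth: __getattr__ is only invoked for names that normal attribute lookup cannot find on the instance or its class, and the return getattr(self.person, attr) line is what explicitly forwards those lookups to the embedded Person object - so the delegation does come from code you wrote, not from Python magic. A small standalone sketch of the same pattern (class names here are made up for illustration):

    class Engine:
        def start(self):
            return "engine started"

    class Car:
        def __init__(self):
            self.engine = Engine()

        def __getattr__(self, attr):
            # Called only when 'attr' is not found on Car itself;
            # forward the lookup to the wrapped Engine instance.
            print("delegating %r" % attr)
            return getattr(self.engine, attr)

    car = Car()
    print(car.start())   # Car has no 'start', so __getattr__ hands it to Engine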

    Read the article

  • Why do software engineers hate writing documentation?

    - by Stewart Johnson
    I ask because I quite enjoy it! I'm talking about design documentation and implementation notes (NOT user manuals), which are non-existent in most of the codebases I've been handed. I can understand why a developer wouldn't want to write requirements (that's the analyst's job) or the user documentation (that's a technical writer's job), but I don't get why developers hate writing design docs. I don't think I would feel as if I'd finished the job if I only wrote the code and walked away -- mainly because when I've been introduced to code-only situations I've seen how hard it is to figure out what's been done and what the software does. I would hate for people to suffer the same situation when inheriting my code. What makes you loathe writing supporting documentation for your code?

    Read the article

  • How to filter Backbone.js Collection and Rerender App View?

    - by Jeremy H.
    This is a total Backbone.js noob question. I am working off of the ToDo Backbone.js example, trying to build out a fairly simple single-app interface. While the todo project is more about user input, this app is more about filtering the data based on the user options (click events). I am completely new to Backbone.js and Mongoose and have been unable to find a good example of what I am trying to do. I have been able to get my API to pull the data from the MongoDB collection and drop it into the Backbone.js collection, which renders it in the app. What I cannot for the life of me figure out is how to filter that data and re-render the app view. I am trying to filter by the "type" field in the document. Here is my script (I am totally aware of some major refactoring needed; I am just rapid prototyping a concept):
    $(function() {
      window.Job = Backbone.Model.extend({
        idAttribute: "_id",
        defaults: function() {
          return {
            attachments: false
          }
        }
      });

      window.JobsList = Backbone.Collection.extend({
        model: Job,
        url: '/api/jobs',
        leads: function() {
          return this.filter(function(job){ return job.get('type') == "Lead"; });
        }
      });

      window.Jobs = new JobsList;

      window.JobView = Backbone.View.extend({
        tagName: "div",
        className: "item",
        template: _.template($('#item-template').html()),
        initialize: function() {
          this.model.bind('change', this.render, this);
          this.model.bind('destroy', this.remove, this);
        },
        render: function() {
          $(this.el).html(this.template(this.model.toJSON()));
          this.setText();
          return this;
        },
        setText: function() {
          var month = new Array();
          month[0] = "Jan";
          month[1] = "Feb";
          month[2] = "Mar";
          month[3] = "Apr";
          month[4] = "May";
          month[5] = "Jun";
          month[6] = "Jul";
          month[7] = "Aug";
          month[8] = "Sep";
          month[9] = "Oct";
          month[10] = "Nov";
          month[11] = "Dec";
          var title = this.model.get('title');
          var description = this.model.get('description');
          var datemonth = this.model.get('datem');
          var dateday = this.model.get('dated');
          var jobtype = this.model.get('type');
          var jobstatus = this.model.get('status');
          var amount = this.model.get('amount');
          var paymentstatus = this.model.get('paymentstatus')
          var type = this.$('.status .jobtype');
          var status = this.$('.status .jobstatus');
          this.$('.title a').text(title);
          this.$('.description').text(description);
          this.$('.date .month').text(month[datemonth]);
          this.$('.date .day').text(dateday);
          type.text(jobtype);
          status.text(jobstatus);
          if(amount > 0) this.$('.paymentamount').text(amount)
          if(paymentstatus) this.$('.paymentstatus').text(paymentstatus)
          if(jobstatus === 'New') {
            status.addClass('new');
          } else if (jobstatus === 'Past Due') {
            status.addClass('pastdue')
          };
          if(jobtype === 'Lead') {
            type.addClass('lead');
          } else if (jobtype === '') {
            type.addClass('');
          };
        },
        remove: function() {
          $(this.el).remove();
        },
        clear: function() {
          this.model.destroy();
        }
      });

      window.AppView = Backbone.View.extend({
        el: $("#main"),
        events: {
          "click #leads .highlight" : "filterLeads"
        },
        initialize: function() {
          Jobs.bind('add', this.addOne, this);
          Jobs.bind('reset', this.addAll, this);
          Jobs.bind('all', this.render, this);
          Jobs.fetch();
        },
        addOne: function(job) {
          var view = new JobView({model: job});
          this.$("#activitystream").append(view.render().el);
        },
        addAll: function() {
          Jobs.each(this.addOne);
        },
        filterLeads: function() {
          // left here, this event fires but i need to figure out how to filter the activity list.
        }
      });

      window.App = new AppView;
    });

    Read the article

  • Decode JSON with mochijson2 in Erlang

    - by Jon Romero
    I have a var that has some JSON data:
    A = <<"{\"job\": {\"id\": \"1\"}}">>.
    Using mochijson2, I decode the data:
    Struct = mochijson2:decode(A).
    And now I have this:
    {struct,[{<<"job">>,{struct,[{<<"id">>,<<"1">>}]}}]}
    I am trying to read (for example) "job" or "id". I tried using struct.get_value but it doesn't seem to work. Any ideas?

    Read the article

  • How to sort list items by Date field value (Apex, Salesforce)

    - by Channa
    With Salesforce's Apex, is there any way to sort list items by a Date field value? Please refer to the TODO section of the following code. Thanks.
    /**
     * Select list of jobOccurrences belonging to a particular list of jobs
     */
    private List<Job_Occurrence__c> getJobsByJobCode(List<Job__c> jobList) {
        // Select relevant Job Occurrence objects
        List<Job_Occurrence__c> jobOccuList = new List<Job_Occurrence__c>();
        for (Job__c job : jobList) {
            Job_Occurrence__c jobOccurrence = [SELECT Id, Job__c, Schedule_Start_Date__c
                                               FROM Job_Occurrence__c
                                               WHERE Job__c =: job.Id];
            if ((jobOccurrence != null) && (jobOccurrence.Id != null)) {
                jobOccuList.add(jobOccurrence);
            }
        }
        if ((jobOccuList != null) && (jobOccuList.size() > 0)) {
            // TODO
            // How do I sort 'jobOccuList' by the Date field 'Schedule_Start_Date__c',
            // so I can select the items according to the sequence of the latest jobOccurrence?
            return jobOccuList;
        } else {
            throw new RecordNotFoundException('Could not find any jobOccurrence for given list of jobs');
        }
    }

    Read the article

  • Python Daemon Subprocess not working at boot

    - by Adam Richardson
    I am attempting to write a Python daemon that will launch at boot. The goal of the script is to receive a job from our Gearman load-balancing server and complete it. I am using the python-daemon module from PyPI (http://pypi.python.org/pypi/python-daemon/). The job it completes is converting images from ORF (Olympus raw image format) to JPEG; an outside program is used for this, ufraw in this case. The problem comes in when I start the daemon at boot: if I launch it from the shell it runs perfectly and completes the work, but when it starts at boot it is unable to launch the subprocess command.
    commandString = '/usr/bin/ufraw-batch --interpolation=four-color --wb=camera --compression=100 --output="' + outfile + '" --out-type=jpg --overwrite "' + infile + '"'
    args = shlex.split(commandString)
    process = subprocess.Popen(args).wait()
    I am not sure what I am doing wrong. Thanks for any help.
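
    When a script works from a shell but not as a boot-time daemon, the usual suspects are the stripped-down environment (PATH, HOME) and working directory the daemon runs with. A minimal Python sketch of invoking ufraw-batch with an explicit argument list, environment, and captured stderr (file names and the log path are placeholders, not part of the original script):

    import subprocess

    infile = "/path/to/input.orf"    # placeholder
    outfile = "/path/to/output.jpg"  # placeholder

    # Build the argument list directly instead of quoting a shell string,
    # and pass an explicit, minimal environment so the daemon does not
    # depend on whatever (nearly empty) environment it inherits at boot.
    args = [
        "/usr/bin/ufraw-batch",
        "--interpolation=four-color",
        "--wb=camera",
        "--compression=100",
        "--output=" + outfile,
        "--out-type=jpg",
        "--overwrite",
        infile,
    ]
    env = {"PATH": "/usr/bin:/bin", "HOME": "/tmp"}
    process = subprocess.Popen(args, env=env, cwd="/tmp",
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = process.communicate()
    if process.returncode != 0:
        # Log stderr somewhere visible; at boot there is no terminal to print to.
        open("/var/log/ufraw-daemon.log", "ab").write(err)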

    Read the article

  • How to loop over nodes with xmlfeed using scrapy python

    - by Kour ipm
    Hi, I am working with Scrapy and trying XML feeds for the first time. Below is my code:
    class TestxmlItemSpider(XMLFeedSpider):
        name = "TestxmlItem"
        allowed_domains = {"http://www.nasinteractive.com"}
        start_urls = [
            "http://www.nasinteractive.com/jobexport/advance/hcantexasexport.xml"
        ]
        iterator = 'iternodes'
        itertag = 'job'

        def parse_node(self, response, node):
            title = node.select('title/text()').extract()
            job_code = node.select('job-code/text()').extract()
            detail_url = node.select('detail-url/text()').extract()
            category = node.select('job-category/text()').extract()
            print title,";;;;;;;;;;;;;;;;;;;;;"
            print job_code,";;;;;;;;;;;;;;;;;;;;;"
            item = TestxmlItem()
            item['title'] = node.select('title/text()').extract()
            .......
            return item
    The result:
    File "/usr/lib/python2.7/site-packages/Scrapy-0.14.3-py2.7.egg/scrapy/item.py", line 56, in __setitem__
        (self.__class__.__name__, key))
    exceptions.KeyError: 'TestxmlItem does not support field: title'
    There are 200+ items in total, so I need to loop over them and assign each node's text to an item, but here all the results are displayed at once when I print. How can I loop over the nodes when scraping XML files with XMLFeedSpider?
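
    The KeyError usually means the TestxmlItem class never declared a title field; parse_node is already called once per <job> node, so no extra loop is needed. A minimal sketch of an item definition matching the fields used above (the field names are assumptions taken from the spider code, not from the original items.py):

    from scrapy.item import Item, Field

    class TestxmlItem(Item):
        # Every key assigned in parse_node must be declared here,
        # otherwise Scrapy raises "does not support field: ...".
        title = Field()
        job_code = Field()
        detail_url = Field()
        category = Field()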

    Read the article

  • Showing all rows for keys with more than one row

    - by Leif Neland
    Table kal:
    id   integer primary key
    init char(4), indexed
    job  char(4)
    id | init | job
    ---+------+------
     1 | aa   | job1
     2 | aa   | job2
     3 | bb   | job1
     4 | cc   | job3
     5 | cc   | job5
    I want to show all rows where init has more than one row:
    id | init | job
    ---+------+------
     1 | aa   | job1
     2 | aa   | job2
     4 | cc   | job3
     5 | cc   | job5
    I tried:
    select * from kal where init in (select init from kal group by init having count(init) > 2);
    Actually, the table has 60000 rows and the real query was count(init) < 40, but it takes a humongous amount of time; phpMyAdmin and my patience run out. Both
    select init from kal group by init having count(init) > 2
    and
    select * from kal where init in ('aa','bb','cc')
    run in "no time", less than 0.02 seconds. I've tried different subqueries, but all take "infinite" time, more than a few minutes; I've actually never let them finish. Leif
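
    Older MySQL versions handle IN (subquery) poorly; rewriting it as a join against the grouped result is a common workaround. A small self-contained Python/sqlite3 sketch of the equivalent join (sqlite is used only to make the example runnable; the same SQL shape applies to MySQL, and the HAVING threshold follows the stated goal of "more than one row"):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kal (id INTEGER PRIMARY KEY, init TEXT, job TEXT)")
    conn.execute("CREATE INDEX kal_init ON kal (init)")
    conn.executemany("INSERT INTO kal VALUES (?, ?, ?)",
                     [(1, 'aa', 'job1'), (2, 'aa', 'job2'), (3, 'bb', 'job1'),
                      (4, 'cc', 'job3'), (5, 'cc', 'job5')])

    # Join against the grouped derived table instead of using IN (subquery).
    rows = conn.execute("""
        SELECT k.id, k.init, k.job
        FROM kal AS k
        JOIN (SELECT init FROM kal GROUP BY init HAVING COUNT(*) > 1) AS dup
          ON dup.init = k.init
        ORDER BY k.id
    """).fetchall()

    for row in rows:
        print(row)   # (1, 'aa', 'job1'), (2, 'aa', 'job2'), (4, 'cc', 'job3'), (5, 'cc', 'job5')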

    Read the article

< Previous Page | 54 55 56 57 58 59 60 61 62 63 64 65  | Next Page >