Search Results

Search found 8299 results on 332 pages for 'job hierarchy'.


  • Automate ripping TV show DVDs

    - by skarface
    RipIt & Handbrake do a really good job of ripping and compressing. For normal "single main feature" DVDs I have a good workflow, and for the most part handbrake does a good job of figuring out what the main title is. The process for ripping a DVD that has multiple episodes of something kinda sucks. Has anyone made any progress on automating (or at least simplifying) the process of getting show-s01e01.avi from a DVD?
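
    Not a full answer, but if scripting is on the table: HandBrake's command-line build (HandBrakeCLI) rips exactly one title per invocation, so a thin wrapper that loops over the episode titles covers the tedious part. A minimal Python sketch, where the device path, the episode title numbers (which you'd read off a scan such as HandBrakeCLI -i /dev/dvd -t 0), the show name, and the preset are all assumptions to adjust per disc:

        import subprocess

        SOURCE = "/dev/dvd"            # assumed DVD device (a ripped VIDEO_TS folder works too)
        SHOW, SEASON = "show", 1       # hypothetical naming inputs
        EPISODE_TITLES = [2, 3, 4, 5]  # DVD title numbers holding the episodes

        for episode, title in enumerate(EPISODE_TITLES, start=1):
            output = "%s-s%02de%02d.mp4" % (SHOW, SEASON, episode)
            # HandBrakeCLI encodes one title per run, so a multi-episode
            # disc is just a loop over its titles.
            subprocess.run(
                ["HandBrakeCLI", "-i", SOURCE, "-t", str(title),
                 "-o", output, "--preset", "Normal"],
                check=True)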

    Read the article

  • Security considerations on Importing Bulk Data by Using BULK INSERT or OPENROWSET(BULK...)

    - by Ice
    I don't fully understand the following article: http://msdn.microsoft.com/en-us/library/ms175915(SQL.90).aspx

    "In contrast, if a SQL Server user logs on by using Windows Authentication, the user can read only those files that can be accessed by the user account, regardless of the security profile of the SQL Server process."

    What if I define a SQL Agent job to perform this bulk insert: is it the owner of the job that determines the security context?

    Read the article

  • What is the effect of Ctrl+Z on a Unix/Linux application?

    - by Kumar Alok
    I am curious and confused about what exactly the behaviour of Ctrl+Z is. I know that if a process is running in the foreground and we press Ctrl+Z, it goes to the background. But what exactly happens? Does it keep doing its job, or does it get suspended, stopped at the point where it was? Can someone please explain? And if it gets stopped at that point, what is the meaning of a background job? Regards, Kumar Alok
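
    For anyone who wants to watch what happens: pressing Ctrl+Z makes the terminal send SIGTSTP, which stops the process entirely (it does no work while stopped) until SIGCONT resumes it; bg and fg both send SIGCONT, differing only in whether the job gets the terminal back. A small Python sketch that replays that sequence against a child process:

        import os
        import signal
        import subprocess
        import sys
        import time

        # Child process that prints a counter once per second.
        child = subprocess.Popen([
            sys.executable, "-c",
            "import itertools, time\n"
            "for i in itertools.count(1): print(i, flush=True); time.sleep(1)",
        ])

        time.sleep(3)                       # counter ticks: the child is running
        os.kill(child.pid, signal.SIGTSTP)  # what Ctrl+Z sends: the child stops
        time.sleep(3)                       # no ticks in this window: no work is done
        os.kill(child.pid, signal.SIGCONT)  # what bg/fg send: ticking resumes
        time.sleep(3)
        child.kill()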

    Read the article

  • ARCserve creates additional jobs

    - by wullxz
    We're running CA ARCserve Backup r12.5 (Build 5854) - Small Business Server Edition on our Small Business Server 2008. There is a daily job which backs up the system drive, Exchange and 3 databases to our backup storage. It seems like this daily job creates these additional jobs every time it runs, and I don't know why. Can anybody tell me why this is happening? (I'm sorry this screenshot is in German... "Ergänzungsjob" means something like "supplement job" or "addition job")

    Read the article

  • Why, when trying to restore a SQL DB from backup, does it keep referring to the backup's original location?

    - by rm
    I did a full DB backup to C:\Backups\MyDb.bak. Then I set up a job to incrementally back the DB up every day to that same location. Then I moved the backup files to X:\Backups\, changing the target directory in the backup job. Now that I try to restore the database from backup, it keeps trying to refer to C:\Backups\MyDb.bak, but that file is no longer there. How do I fix the issue without having to move the backup files back to C:\Backups?

    Read the article

  • Sarg report error

    - by amyassin
    I have a proxy server running Ubuntu Server 11.10 and Squid 2.7.STABLE9. I installed sarg (version 2.3.1 Sep-18-2010) via the ordinary apt-get install to generate reports, and added a cron job to generate a report of the day every 5 minutes (each run overwrites the one from 5 minutes earlier):

        */5 * * * * /root/proxy_report.sh

    where /root/proxy_report.sh contains:

        #!/bin/bash
        /usr/bin/sarg -nd `date +"%d/%m/%Y"` > /dev/null 2>&1

    I added another cron job to generate a full report every hour at :32 (so as not to collide with the 5-minute job):

        */32 * * * * /root/proxy_report_full.sh

    where /root/proxy_report_full.sh contains:

        #!/bin/bash
        /usr/bin/sarg -n > /dev/null 2>&1

    I also added a small script to /etc/rc.local, run at startup, to remove yesterday's full report (the full report that ends yesterday and won't be overwritten by today's):

        /usr/bin/rm_yesterday.sh &>> /var/log/rm_yesterday

    where /usr/bin/rm_yesterday.sh contains:

        #!/bin/bash
        find /var/www/sarg/ | grep `date -d Apr1 +"%Y%b%d"`-* | grep -v `date +"%Y%b%d"` | xargs rm -rf *

    (Apr1 is the starting date of the proxy; I placed the script in /usr/bin so it is mounted early at startup.)

    That arrangement went OK for about a month and a half, except for one time I noticed some errors and reports weren't generated; I fixed that by adding an offset (the two minutes in the :32 of the second cron job). Recently, however, it stopped generating reports altogether. Trying to generate one manually gives the following error:

        root@proxy-server:~# sarg -n
        SARG: getword_atoll loop detected after 3 bytes.
        SARG: Line="154 192.168.10.40 TCP_MISS/200 39 CONNECT www.google.com"
        SARG: Record="154 192.168.10.40 TCP_MISS/200 39 CONNECT www.google.com"
        SARG: searching for 'x2f'
        SARG: getword backtrace:
        SARG: 1:sarg() [0x8050a4a]
        SARG: 2:sarg() [0x8050c8b]
        SARG: 3:sarg() [0x804fc2e]
        SARG: 4:/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf3) [0x129113]
        SARG: 5:sarg() [0x80501c9]
        SARG: Maybe you have a broken date in your /var/log/squid/access.log file

    When I looked in the /var/log/squid/ folder, I noticed that it contains some rotated logs:

        root@proxy-server:~# ls /var/log/squid/
        access.log  access.log.1  cache.log  cache.log.1  store.log  store.log.1

    So maybe sarg installed logrotate with it? Or does it come with standard Ubuntu? I don't remember installing it manually. The question is: what could've gone wrong? Does it have something to do with rotating the log? How can I trace the error and start generating reports again?

    Read the article

  • A way to rename images, keeping the first 10 characters of the name

    - by Chris
    Hi there, I have a very big job to do. I have about 930 pictures which are named like: 5210841 Tuinset Senator.jpg, 5210898 Traptrede Premium.jpg, etc. I'm looking for a way to rename these pictures without losing the number part. So the first one would be named 5210841.jpg, and the second 5210898.jpg. Can you guys think of a program which can do this job? It's for a Windows platform.
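
    If no off-the-shelf renamer turns up, a few lines of Python (which runs fine on Windows) can do the same job. A minimal sketch, assuming every file is named "<number> <description>.jpg" with the number ending at the first space; the folder path is a placeholder:

        import os

        FOLDER = r"C:\pictures"   # hypothetical folder holding the 930 images

        for name in os.listdir(FOLDER):
            base, ext = os.path.splitext(name)
            if ext.lower() != ".jpg" or " " not in base:
                continue                      # skip anything not matching the pattern
            number = base.split(" ", 1)[0]    # keep only the leading number
            os.rename(os.path.join(FOLDER, name),
                      os.path.join(FOLDER, number + ext))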

    Read the article

  • Bacula Volume Retention and Automatic Recycling

    - by Kyle Brandt
    Will a Bacula volume be recycled if:

    - no more volumes are appendable,
    - the date of the last job on it is past the volume retention period, and
    - jobs on that volume are not past the job and file retention periods?

    This is the way the manual reads to me, but I saw a post saying all retention periods have to be up before a volume is recycled (which does make less sense to me). Anyone know for sure from experience?

    Read the article

  • Localhost has just stopped working (using xampp)

    - by Joe Taylor
    I installed XAMPP to use for local development of a Drupal site. It's been working fine out of the box until now. The main XAMPP localhost welcome menu loads, however my subdirectory (localhost/drupal) doesn't. It just spins in the browser for ages and nothing happens. Just a blank screen. I've tried the edit people suggest in the hosts file, but that hasn't worked, and I'm getting no errors, so I'm not sure what to do. Anyone have any ideas what might be wrong? PS I'm running Windows 7

    edit: Log files:

        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
        [Tue Nov 05 20:52:07.242454 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
        [Tue Nov 05 20:52:07.331459 2013] [core:warn] [pid 8432:tid 260] AH00098: pid file C:/xampp/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
        [Tue Nov 05 20:52:07.820487 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
        [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00455: Apache/2.4.4 (Win32) OpenSSL/0.9.8y PHP/5.4.16 configured -- resuming normal operations
        [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00456: Server built: Feb 23 2013 13:07:34
        [Tue Nov 05 20:52:07.898492 2013] [core:notice] [pid 8432:tid 260] AH00094: Command line: 'c:\xampp\apache\bin\httpd.exe -d C:/xampp/apache'
        [Tue Nov 05 20:52:07.905492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00418: Parent: Created child process 7588
        [Tue Nov 05 20:52:08.882548 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
        [Tue Nov 05 20:52:09.467582 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name
        [Tue Nov 05 20:52:09.534585 2013] [mpm_winnt:notice] [pid 7588:tid 272] AH00354: Child: Starting 150 worker threads.
        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
        Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41

    Read the article

  • Problem installing Canon MF5880dn

    - by Paul
    Just got a Canon MF5880dn and cannot print to it from Suse 11.1:

    - MacBook prints w/o issue
    - ping 192.168.1.103: no problem
    - CUPS sees it as Canon MF5880/MF5840 PCL at URI socket://192.168.1.103:9100
    - CUPS test print appears to submit and complete the job, but no action from the printer
    - YaST also seems to install the printer correctly
    - CQue2 also seems to install the printer correctly
    - All attempts to print yield the same result: Suse indicates the job processed correctly and completely, but no printing happens
    - The firewall is off
    - http://192.168.1.103 in FF gives me the printer config menus correctly

    What have I failed to do?

    Read the article

  • Does anybody know a replacement for TorrentSpy?

    - by Ither
    Hi, I've used TorrentSpy since 2004 and it does a pretty good job, but it is very slow with torrent files that have hundreds or thousands of files. Does anybody know a better tool for the same job on XP? If you don't know it, TorrentSpy shows the data contained in a torrent file in a readable way: tracker URLs, number of files, their sizes, number of complete and incomplete downloaders, etc. Edit: What I want is something that can be used from Explorer by right-clicking on the torrent file, like MediaInfo does with media files.

    Read the article

  • sudo with -u via ssh -t in a crontab

    - by DJK_devel
    I'm trying to create a cron job that uses ssh to log in to a remote server and run a script as a different user. I try:

        * * * * * source $HOME/.keychain/$HOST-sh && sudo -u $USER $PATH/$SCRIPT

    but this doesn't work because there is no -t option specified for ssh. The cron job needs to source the keychain file in order to work without a password, but I'm not sure where to include the -t option for ssh in this instance.

    Read the article

  • Is there a way to replicate very large file shares in real time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a Samba file share. Here's how the folder structure looks:

        \\server\Version.1
        \\server\Version.2
        \\server\Version.3
        ...
        \\server\Version.24

    The contents of each new folder compared to the last one usually don't change very much, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out about it. It's actually been used for years and is so incredibly simple that anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant!

    Now to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real time as possible. Think hot spare, disaster recovery, etc.

    My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently. The best thing I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression". After reading the background information on how it works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh!

    One approach I'm looking at (sketched below): say it's 5AM and the cron job finishes:

    1. New Version.5 folder arrives on the local server
    2. SSH to the remote server and copy Version.4 to Version.5
    3. Run rsync on the local server, pushing changes to the remote server. Rsync finally knows to do a differential copy between Version.4 and Version.5

    Is there a smarter way to replicate Samba shares as close to real time as possible? Anything out there that does "Remote Differential Compression" on Linux?
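
    For what it's worth, the three-step approach above is easy to script, and step 2 can be a hard-link copy (cp -al) so the seed costs no extra disk space or transfer on the remote. A minimal Python sketch, assuming the same /data snapshot layout on both sides, a hypothetical host name, and passwordless SSH; the shape of the idea, not a tested solution:

        import subprocess

        REMOTE = "backup.example.com"   # hypothetical remote host (passwordless SSH assumed)
        BASE = "/data"                  # hypothetical directory holding the Version.N snapshots

        def replicate(prev, new):
            """Seed the new remote snapshot from the previous one,
            then let rsync push only what actually changed."""
            seed = "%s/Version.%d" % (BASE, prev)
            dest = "%s/Version.%d" % (BASE, new)
            # Step 2, with cp -al: hard-link the previous snapshot on the
            # remote, so unchanged files cost no space or transfer.
            subprocess.run(["ssh", REMOTE, "cp", "-al", seed, dest], check=True)
            # Step 3: rsync compares the new local snapshot against the
            # seeded remote copy and sends only the differences over the WAN.
            subprocess.run(["rsync", "-a", "--delete",
                            dest + "/", "%s:%s/" % (REMOTE, dest)],
                           check=True)

        replicate(prev=4, new=5)        # e.g. right after the 5AM run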

    Read the article

  • When RAID 10 is SLOWER than RAID 1, why?

    - by Paul
    We have a Dell 2950 with PERC and 14 external SAS 15K 73GB drives. An Oracle database job takes 3 hours to run with the drives set as hardware RAID 10 (striped across 7 mirrored pairs). The same job with the drives in RAID 1 takes only 1 hour. OS is Win 2008 R2 I think. Before we change the RAID level (with considerable downtime) on the production box, does anyone know why we're seeing this odd result, and if there's a better way to fix it?

    Read the article

  • Popover with embedded navigation controller doesn't respect size on back nav

    - by quixoto
    I have a UIPopoverController hosting a UINavigationController, which contains a small hierarchy of view controllers. I followed the docs and, for each view controller, set the view's popover-context size like so (with a different size for each controller):

        [self setContentSizeForViewInPopover:CGSizeMake(320, 500)];

    This works as expected as I navigate forward in the hierarchy: the popover automatically animates size changes to correspond to the pushed controller. However, when I navigate "Back" through the view stack via the navigation bar's Back button, the popover doesn't change size; it remains as large as the deepest view reached. This seems broken to me; I'd expect the popover to respect the sizes that are set up as it pops through the view stack. Am I missing something? Thanks.

    Read the article

  • Getting into a technology which requires experience when you have no experience

    - by dotnetdev
    It seems that SharePoint is a technology which is very hard to get into. All the jobs in this tech require experience working with it (e.g. 2 years of development experience in MOSS). If I wanted to get into this, but had no job that used the tech, how could I get the experience needed for an experienced role? Jobs state you need "2 years professional experience in MOSS 2007", but if you have never done it, you won't get the job. The only possible way is to do it at home, not in a team; but if you work in the meantime, that will negate this (it's not like teamworking is tech-specific). Many people think that if you decide to make a project at home, you're just going to play about aimlessly rather than work to specs (whereas in my current situation it's vice versa), but if you're dedicated, like me, you would write them, just not with the same presentation. Would employers treat experience at home as professional experience? BizTalk is another prime example of this. Thanks

    Read the article

  • Custom broadcast events in AS3?

    - by Ender
    In Actionscript 3, most events use the capture/target/bubble model, which is pretty popular nowadays: When an event occurs, it moves through the three phases of the event flow: the capture phase, which flows from the top of the display list hierarchy to the node just before the target node; the target phase, which comprises the target node; and the bubbling phase, which flows from the node subsequent to the target node back up the display list hierarchy. However, some events, such as the Sprite class's enterFrame event, do not capture OR bubble - you must subscribe directly to the target to detect the event. The documentation refers to these as "broadcast events." I assume this is for performance reasons, since these events will be triggered constantly for each sprite on stage and you don't want to have to deal with all that superfluous event propagation. I want to dispatch my own broadcast events. I know you can prevent an event from bubbling (Event.bubbles = false), but can you get rid of capture as well?

    Read the article

  • How to use a web API with JavaScript/PHP?

    - by fayer
    geonames.org has a web API you can use to get the whole hierarchy for a city. You just enter the ID and you get the data back in XML: http://ws.geonames.org/hierarchy?geonameId=2657896

    I wonder how you fetch the URL with PHP and JavaScript, and which one I should use, since the IDs are in the MySQL database and I will get them with PHP. Should I use file_get_contents, curl or fopen? And what function in JavaScript? jQuery $.post? I've heard that it can only access localhost (the same-origin restriction). Would be great with some guidance here, and even better with some code examples. Thanks!
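
    Whichever side does the fetching, the flow is the same: request the URL, then walk the XML. Here is that flow sketched in Python purely to illustrate the shape (in PHP, file_get_contents or cURL is the direct equivalent of the fetch step); the <geoname> and <name> element names are assumptions about the endpoint's XML layout:

        import urllib.request
        import xml.etree.ElementTree as ET

        # Fetch the hierarchy for one geonameId and print each level's name.
        url = "http://ws.geonames.org/hierarchy?geonameId=2657896"
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)

        for geoname in tree.findall(".//geoname"):
            print(geoname.findtext("name"))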

    Read the article

  • Is a switch statement the fastest way to implement operator interpretation in Java

    - by Mordan
    Is a switch statement the fastest way to implement operator interpretation in Java?

        public boolean accept(final int op, int x, int val) {
            switch (op) {
                case OP_EQUAL:   return x == val;
                case OP_BIGGER:  return x > val;
                case OP_SMALLER: return x < val;
                default:         return true;
            }
        }

    In this simple example, obviously yes. Now imagine you have 1000 operators. Would it still be faster than a class hierarchy? Is there a threshold at which a class hierarchy becomes more efficient in speed than a switch statement? (In memory, obviously not.)

        abstract class Op {
            abstract public boolean accept(int x, int val);
        }

    And then one class per operator.

    Read the article

  • Implicitly invoking parent class initializer

    - by Matt Joiner
        class A(object):
            def __init__(self, a, b, c):
                #super(A, self).__init__()
                super(self.__class__, self).__init__()

        class B(A):
            def __init__(self, b, c):
                print super(B, self)
                print super(self.__class__, self)
                #super(B, self).__init__(1, b, c)
                super(self.__class__, self).__init__(1, b, c)

        class C(B):
            def __init__(self, c):
                #super(C, self).__init__(2, c)
                super(self.__class__, self).__init__(2, c)

        C(3)

    In the above code, the commented-out __init__ calls appear to be the commonly accepted "smart" way to do superclass initialization. However, in the event that the class hierarchy is likely to change, I had been using the uncommented form until recently. It appears that in the call to the super constructor for B in the above hierarchy, B.__init__ is called again: self.__class__ is actually C, not B as I had always assumed. Is there some way in Python 2.x that I can overcome this and maintain a proper MRO when calling super constructors, without actually naming the current class?

    Read the article

  • Parsing XML feed into Ruby object using nokogiri?

    - by Galen King
    Hi all, I am pretty green with coding in Ruby but am trying to pull an XML feed into a Ruby object as follows (ignore the ugly code please):

        <% doc = Nokogiri::XML(open("http://api.workflowmax.com/job.api/current?apiKey=#{@feed.service.api_key}&accountKey=#{@feed.service.account_key}")) %>
        <% doc.xpath('//Jobs/Job').each do |node| %>
          <h2><%= node['name'].text %></h2>
          <p><%= node['description'].text %></p>
        <% end %>

    Basically I want to iterate through each Job and output the name, description etc. What am I missing? Many thanks, Galen

    Read the article

  • ASP.NET ASPX Designer question. Bug?

    - by Velika
    Check out this screenshot. Shouldn't a hierarchy list of tags appear here? Usually it appears. Sometimes the tag "appears" there but without text (though the tag object is there, as is evident when you hover over it). Other times, like this, nothing appears. It's a useful feature to see the hierarchy of tags, which gives me easy access to the tags from design view for easy altering in the Properties window. I think too many developers love to do things the hard way and slog through tags in HTML view and hardly use this, but it frustrates me when it doesn't work all the time. Is it me? Nah....

    Read the article
