Search Results

Search found 3615 results on 145 pages for 'cron daily'.


  • Script for checking the nologin accounts and then disabling the account

    - by suma
    "Could you please share the scripts which does the below ?" I have written a script that scans all the relevent logs daily, makes a list of people that have had any activity that day, and maintains database (just a text file) of users and the last time they logged in. Then I have a second script that examines the database for dates more than x days ago, an notifies the user and administrator 2 weeks prior to locking the account. And if there are any dates more than x+y days ago, deletes the account altogether. This seems to be working for me - but I would like to use a non-proprietary solution if one is available. "Could you please share the scripts?"

    Read the article

  • Strange Maintenance Plan SubPlans behaviour: each SP runs all the tasks of all other SPs at odd times

    - by Wentu
    SQL Server 2005: I have a problem with scheduling a Maintenance Plan (MP) with 3 subplans (SP). SP1 is scheduled to run hourly, SP2 daily at 7:00, and SP3 on Sundays at 8:00. Reading the MP history, I see that what happened (I know it seems crazy) is:

    - 11:00: SP1 runs and executes all the tasks of SP1, SP2 and SP3
    - 12:00: SP2 runs and does the same
    - 13:00: SP3 runs and does the same
    - 14:00: SP1 runs and does the same

    In the Job Activity Monitor, SP1's last run time is 14:00, and SP2 and SP3 have never been executed. All of the SPs are scheduled correctly in the Job Activity Monitor (SP2 for tomorrow at 7:00, SP3 for next Sunday at 8:00). Do you have any idea what is happening? Thanks a lot. Wentu

    Read the article

  • SQL Server backup and restore process

    - by Nai
    Just wondering what backup processes you guys have. I currently run a weekly full database backup with daily differential backups. My understanding is that with such a setup, the difference between Full recovery mode and Simple recovery mode is that with Full recovery mode, I can use the transaction logs to roll my DB back to a specific point in time after applying the latest differential backup. Assuming that in my scenario the last differential backup serves as my last and ultimate 'save point', I don't see a need to roll my DB back even further using the logs. This brings me to my question: are there any additional benefits to be had from using Full recovery mode with my current backup process?
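
    For reference, a minimal sketch of that schedule as plain T-SQL run through sqlcmd (server name, database name, and paths are placeholders). The log backup line is the one thing only the Full recovery model permits; it buys point-in-time restore between differentials and keeps the log file from growing unchecked:

        # Weekly full backup
        sqlcmd -S myserver -E -Q "BACKUP DATABASE [MyDB] TO DISK = N'D:\Backups\MyDB_full.bak' WITH INIT"
        # Daily differential
        sqlcmd -S myserver -E -Q "BACKUP DATABASE [MyDB] TO DISK = N'D:\Backups\MyDB_diff.bak' WITH DIFFERENTIAL, INIT"
        # Hourly log backup - only valid (and only needed) under Full recovery
        sqlcmd -S myserver -E -Q "BACKUP LOG [MyDB] TO DISK = N'D:\Backups\MyDB_log.trn'"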

    Read the article

  • Disable or sleep secondary HDD in Macbook

    - by cpak
    I've done some quick Googling but didn't find an answer. I've put an SSD in my MacBook and at the same time moved the original HDD to the optical drive bay. I'm running the OS and most of my daily apps off the SSD, so the HDD is really just for storing stuff I need now and then. Now I'd like to disable (as in power off or "force sleep") the HDD when I don't need it. I tried unmounting the disk using diskutil unmountDisk, but it kept spinning for like 10 minutes. Maybe that's to be expected, but I'd imagined it would stop instantly on unmount. Also, it would be nice to have it disabled by default and only mount it (= power on) when I need it. Grateful for any input on this!
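
    A hedged sketch of the two usual levers; the disk identifier disk1 is an assumption (check diskutil list), and ejecting an internal drive may make it disappear until the next reboot:

        # Let idle disks spin down after 10 minutes (system-wide power setting)
        sudo pmset -a disksleep 10
        # Or unmount and eject explicitly; eject usually spins the drive down
        diskutil unmountDisk disk1
        diskutil eject disk1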

    Read the article

  • syslog ip ranges to specific files using `rsyslog`

    - by Mike Pennington
    I have many Cisco / JunOS routers and switches that send logs to my Debian server, which uses rsyslogd. How can I configure rsyslogd to send these router / switch logs to a specific file, based on their source IP address? I do not want to pollute general system logs with these entries. For instance: all routers in Chicago (source ip block: 172.17.25.0/24) to only log to /var/log/net/chicago. all routers in Dallas (source ip block 172.17.27.0/24) to only log to /var/log/net/dallas. Finally, these logs should be rotated daily for up to 30 days and compressed. NOTE: I am answering my own question
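
    A minimal sketch in rsyslog's legacy syntax; the /etc/rsyslog.d file name, log paths, and the logrotate stanza are assumptions:

        mkdir -p /var/log/net
        cat > /etc/rsyslog.d/30-network.conf <<'EOF'
        # Accept syslog from the network
        $ModLoad imudp
        $UDPServerRun 514
        # Route by source address, then discard so general logs stay clean
        if $fromhost-ip startswith '172.17.25.' then /var/log/net/chicago
        & ~
        if $fromhost-ip startswith '172.17.27.' then /var/log/net/dallas
        & ~
        EOF
        cat > /etc/logrotate.d/netdevices <<'EOF'
        /var/log/net/chicago /var/log/net/dallas {
            daily
            rotate 30
            compress
            missingok
            notifempty
        }
        EOF
        /etc/init.d/rsyslog restart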

    Read the article

  • Troubleshoot odd large transaction log backups...

    - by Tim
    I have a SQL Server 2005 SP2 system with a single database that is 42 gigs in size. It is a modestly active database that sees on average 25 transactions per second. The database is configured in the Full recovery model and we perform transaction log backups every hour. However, seemingly at random, at some point during the day the log backup will go from its average size of 15 megs all the way up to 40 gigs. There are only 4 jobs scheduled to run on the SQL Server, and they are all typical backup jobs which occur on a daily/weekly basis. I'm not entirely sure what client activity takes place, as the application servers are maintained by a different department. Is there any good way to track down the cause of these log file growths and pin them to a particular application or client? Thanks in advance.
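
    One hedged starting point (server and database names are placeholders, and 123 stands in for the SPID that OPENTRAN actually reports): snapshot log usage and open transactions on a schedule, then map a spike back to a session:

        # How full is each log right now?
        sqlcmd -S myserver -E -Q "DBCC SQLPERF(LOGSPACE)"
        # Oldest active transaction in the database - reports the owning SPID
        sqlcmd -S myserver -E -Q "DBCC OPENTRAN('MyDB')"
        # Map that SPID back to a machine/application
        sqlcmd -S myserver -E -Q "SELECT session_id, host_name, program_name, login_name FROM sys.dm_exec_sessions WHERE session_id = 123"

    Run the first two from a scheduled job every few minutes and keep the output; the timestamp of the jump usually points at the culprit.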

    Read the article

  • Command-line access for Apple Time Machine?

    - by Stefan Lasiewski
    We use Apple's Time Machine to back up our workstations at the office. If I want to restore a file, I need to open up the Time Machine GUI and browse files there. The GUI is ugly eye-candy and gets in my way. Is there a way to browse the Time Machine archive using the Mac's command line? I'm used to NetApps and other storage appliances, and I use backintime for my Ubuntu workstation. On those systems you can restore a file with a simple command like:

        cp .snapshot/daily.0/filename.txt .

    or

        cp /backup/backintime/20100611-000002/backup/etc/shadow /etc/shadow

    Is there an equivalent for Apple's Time Machine?
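
    There is a rough equivalent: a Time Machine volume is a tree of dated, hard-linked snapshot directories, so ordinary shell tools work on it. A minimal sketch, assuming the volume and machine names (check /Volumes for the real ones):

        ls "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/"
        # Dated snapshots sit side by side; Latest points at the newest one
        cp "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/Latest/Macintosh HD/Users/me/filename.txt" .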

    Read the article

  • Set up a root server using Ubuntu and Virtualization

    - by Daniel Völkerts
    Hello, I'd like to set up a fresh root server and install Linux-based virtualization on it. My thoughts are:

    - Intel VT hardware
    - Ubuntu 9.10
    - KVM-based virtualization

    Access to the root server will be SSH only, for administration. Has anybody done this before, and what gotchas did you discover in daily use? My requirements are: very secure, so the root server only has SSH open to the dom-0 and minimal ports for the guests (e.g. http/s); good monitoring of host and guests (my idea is to use Zabbix for it); easy and fast administration (how are the command-line tools working for you? Cryptic? High learning curve?). I'm pleased to learn from your suggestions. Regards, Daniel Völkerts

    Read the article

  • ubuntu pptp connections from command line

    - by Ian R
    I'm using a lot of VPN connections daily for my work, and I want to write a PPTP dialer in Python to ease my job a little by automating things. I usually use network-manager-pptp to set up my connections, but I would like to skip this GUI tool and do it from the script, something like a dialer. My question is: can PPTP connections be established using command-line tools only? Also, where does network-manager-pptp save its files, so I can take a look and see what configs it generates? Any help is much appreciated.
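
    They can. A hedged sketch using the pptp-linux tooling; the tunnel name, server, and credentials are placeholders:

        # pptpsetup writes /etc/ppp/peers/work and the CHAP secrets entry
        sudo pptpsetup --create work --server vpn.example.com \
                       --username ian --password secret --encrypt
        sudo pon work        # dial
        ip addr show ppp0    # verify the interface came up
        sudo poff work       # hang up

    As for NetworkManager, system-wide connections typically live under /etc/NetworkManager/system-connections/, while older releases kept per-user VPN settings in GConf.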

    Read the article

  • CA ArcServe r11.1 - have to switch Tape Drive Offline then Online to finish backup

    - by Richard
    I'll keep it brief. I have an HP Ultrium 1 drive in a server currently running CA ArcServe r11.1. I have 5 daily backup tapes, each of which is new. 3 of the 5 work fine without intervention, but 2 of them stop at varying points through the backup asking for a new tape, even though that tape is not full. The way I have found around this is to switch the tape drive offline for 10 minutes and then switch it back online, while the backup is still running. Has anyone ever seen this before? If so, any ideas how to permanently fix it? If all else fails, just some pointers in the right direction. Thanks

    Read the article

  • Alternatives to amavis for a RAM-bound server

    - by rsuarez
    I'm running a small VPS that works as a web and mail server. It has only 256MB of RAM, and it's constantly using 100MB of swap. I've found that one of the culprits is amavis, taking about 30MB of resident memory, and I would like to ditch it and use some alternative. I don't get much mail daily, so a slightly slower alternative wouldn't be a problem. I'd like to avoid SpamAssassin altogether if possible, because it's quite big even when used in offline mode. I'm already using RBLs and a few small blacklists, and I used greylisting for a while but abandoned it because it gave me a few problems (I don't remember which; I think it was related to not configuring whitelists properly for several big ISPs). So, is there some alternative to amavis that I could use without much RAM (and, if possible, CPU) usage? Thanks in advance.

    Read the article

  • Ubuntu server: Delete first folder in directory

    - by Martin
    How can I grab the first subfolder in a directory and delete it? I found a script to monitor free disk space. It sends an alert email if space runs low, but I also want it to delete some unneeded stuff. I have a backup folder where I save daily and monthly backups. I want to delete the first folder, since it is always the oldest, but I don't know the name of the oldest backup. My folders (without Jan-May and Dec):

        06-01 07-01 08-01 09-01 10-01 11-01 Friday Monday Saturday Sunday Thursday Tuesday Wednesday

    How can I delete the first folder "06-01" without knowing its name?
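
    A hedged sketch; the backup root is a placeholder, and it picks the oldest directory by modification time rather than by name. Dry-run it with echo before wiring it into the monitoring script:

        oldest=$(find /backup -mindepth 1 -maxdepth 1 -type d -printf '%T@ %p\n' \
                 | sort -n | head -n 1 | cut -d' ' -f2-)
        echo "Would remove: $oldest"
        # rm -rf -- "$oldest"   # uncomment once the output looks right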

    Read the article

  • cmd files to hta

    - by Frode Eskil
    I have a lot of cmd files I use daily, for example to add users to local groups, install printers, run run-as-admin tasks, etc. I'd like to take the scripts I use most frequently and add them to a tabbed HTA file, but I'm having trouble finding a good guide on how to do it easily. Does anyone have a good site to share with me? Or do I finally have to start with VBScript? I have done some, but it's so much faster for me to write a cmd file.

    Read the article

  • 'Archive all messages' account on Exchange 2003 receiving multiple copies

    - by aeolist
    Hello everyone. I am using Microsoft Exchange 2003 on Windows 2003 Small Business Server. For the past month or so, the archiver account has been receiving multiple copies (about 15 or so) of each and every email. Edit: all copies share the same Message-ID. The machine that hosts Exchange is updated religiously and an antivirus scan is run on a daily basis. Would anyone have any ideas about how to deal with this, and furthermore, how I could delete the multiple copies of emails from the Outlook 2003 inbox? I will edit the entry to answer any questions or report on my efforts. Thanks in advance

    Read the article

  • What RAID level for a backup server?

    - by ispirto
    I'm building a server with 12 x 3TB disks to use for daily backups. I'm thinking of using RAID50 to get a good 27TB of usable space. The disks will be used brutally, backing up 9 servers with 1.5TB of data once a day. I'll keep the backups for 2 days, so for each server I'll have 3TB of separate partitions. Do you think this kind of heavy backup load would stress the disks too much and make them fail? Should I go with RAID10 instead? Oktay
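
    For reference, a quick back-of-the-envelope capacity check, assuming the 27TB figure comes from three 4-disk RAID5 spans striped together (two 6-disk spans would instead give 30TB):

        disks=12; size_tb=3; spans=3
        raid50=$(( (disks - spans) * size_tb ))    # one parity disk per span -> 27 TB
        raid10=$(( disks / 2 * size_tb ))          # mirrored pairs -> 18 TB
        echo "RAID50: ${raid50} TB usable, RAID10: ${raid10} TB usable"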

    Read the article

  • Simplest way to shrink transaction log files on a mirrored production database

    - by MGOwen
    What's the simplest way to shrink the transaction log file on a mirrored production database? I have to, as my disk space is running out. I will make a full database backup before I do this, so I don't need to keep anything from the transaction log (right? I have a daily full database backup and will probably never need point-in-time restore, though I'll keep the option open if I can - that's all the .ldf is really for, correct?). (I hope this isn't an exact duplicate; I read a lot of questions but couldn't find this exact scenario elsewhere.)
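
    A hedged sketch of the usual sequence; the server name, database name, logical log file name, and path are placeholders. A mirrored database cannot be switched to the Simple recovery model, so back the log up first and then shrink the file:

        sqlcmd -S myserver -E -Q "BACKUP LOG [MyDB] TO DISK = N'D:\Backups\MyDB_log.trn'"
        # Target size is in MB; find the logical name with: SELECT name FROM MyDB.sys.database_files
        sqlcmd -S myserver -E -Q "USE [MyDB]; DBCC SHRINKFILE (MyDB_log, 1024)"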

    Read the article

  • Kaspersky processing error Explorer.exe (recycle bin)

    - by aeternus828
    I get daily critical errors from Kaspersky involving Explorer.exe... The file in question is almost always in the Recycle Bin, or something on the desktop. Here is an example error detail:

        Event type: Processing error
        Application\Name: EXPLORER.EXE
        Application\Path: C:\WINDOWS\
        Application\Process ID: 2364
        Application\Options: C:\windows\Explorer.EXE
        Component: File Anti-Virus
        Result\Description: Processing error
        Object: C:\$Recycle.Bin\S-1-5-21-1403139956-787289773-2644151291-500\$RIKKQKS
        Object\Type: File
        Object\Path: C:\$Recycle.Bin\S-1-5-21-1403139956-787289773-2644151291-500\
        Object\Name: $RIKKQKS
        Reason: Read error

    Google searches didn't offer much insight, so I thought I'd ask here if anyone has encountered a similar situation. Not sure if it's a bug, something to be concerned about, an easy fix, etc. I usually just empty the recycle bin for a temporary fix, but would like to get to the root of the error. Thoughts?

    Read the article

  • Recovering from bad ownership

    - by Christian Sciberras
    I was going to change the ownership of a directory to apache:apache, but I ended up running:

        chown -R apache:apache /

    Bad! Very bad! I knew what was going on when it started saying:

        chown: changing ownership of `/proc/2694/fd/48': Permission denied

    That's when I stopped everything (Ctrl+C). The current system is a server running VirtualBox with a CentOS 5 guest, and this problem happened inside the VM. Currently everything seems to be working, but I have not restarted the system yet and, to be honest, I'm afraid something will break if I do. I do not know in what order chown processed things; should I be concerned and assume something will break after a reboot? Is there a way to recover from this problem without having to rely on backups? I do have a daily one, but I thought there might be a simpler way out.
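
    On an RPM-based system like CentOS there is a partial way back: ask RPM to restore the owners and groups it recorded for every packaged file. A hedged sketch; files no package owns (data under /home, /var/www, etc.) still need fixing by hand:

        # Restore user/group ownership for all packaged files
        for pkg in $(rpm -qa); do rpm --setugids "$pkg"; done
        # If permissions were also damaged, the analogous switch is --setperms
        # for pkg in $(rpm -qa); do rpm --setperms "$pkg"; done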

    Read the article

  • What's the easiest way to allow Exchange 2003 remote (no MSO client) users to check their mailbox size?

    - by Myrddin Emrys
    We are migrating from Exchange 2003 with no quota settings to Exchange 2010 with limited mailbox sizes. We are trying to get users to clean their mailboxes prior to the move, both to reduce the transfer load and to comply with the new quotas on the 2010 system. But many users access their mail through webmail only, and I cannot see a way for those users to check their mail store size. Has anyone else run into this problem? Is there a good way to easily let users check their own mailbox size? The only workaround I've come up with is a report that IT generates and mail-merges out to users daily with their current mailbox size. This is cumbersome and time-consuming compared to letting them check it themselves.

    Read the article

  • Is this a valid backup strategy for MongoDB?

    - by James Simpson
    I've got a single dedicated server with a MongoDB database of around 10GB. I need to do daily backups, but I can't have downtime on the database. Is it possible to use a replica set on a single disk (with 2 instances of mongod running on different ports), and simply take the secondary offline and back up its data files to offsite storage such as S3 (journaling is turned on)? Or would master/slave be better than a replica set? Is this viable, and if so, what potential problems could I have? If not, how should I conceptualize this?
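
    A minimal sketch of that approach against the secondary, assuming its port (27018), dbpath (/data/secondary), and the S3 bucket name. fsync+lock blocks writes on that member while the copy runs, which is harmless on a secondary because the primary keeps serving:

        # Flush data files and block writes on the secondary
        mongo --port 27018 admin --eval 'db.runCommand({fsync: 1, lock: 1})'
        tar czf /tmp/mongo-$(date +%F).tgz -C /data secondary
        mongo --port 27018 admin --eval 'db.fsyncUnlock()'
        # Ship it offsite (s3cmd shown purely as an example)
        s3cmd put /tmp/mongo-$(date +%F).tgz s3://my-backup-bucket/

    If a logical dump is acceptable, mongodump --port 27018 against the secondary is the simpler variant of the same idea.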

    Read the article

  • Is Rsync like subversion, but for a server?

    - by johnlai2004
    I'm trying to learn how to use rsync. I want to create daily backups of my production server. Right now I run the command:

        rsync -azr /var/www/* [email protected]:/var/www

    Now let's say that one day I want to roll back the /var/www/ directory on my production server to last month's version. How do I tell rsync to retrieve version N? Reading that rsync only copies differences between src and dest, I assumed rsync works like Subversion, where you commit changes to a destination, it keeps track of every version, and you can check out any version at any time. Is that the way rsync works? Is it like Subversion but for an entire server? That would be great, because it would mean I don't have to do full ssh copies for my nightly backups.
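
    It isn't: rsync by itself keeps no history, and after a run the destination is simply a mirror of the source at that moment. The usual way to get Subversion-like versions out of it is the --link-dest pattern, sketched here with placeholder paths and host name; unchanged files become hard links into the previous snapshot, so every dated directory looks complete but only the differences consume space:

        today=$(date +%F)
        rsync -az --delete \
              --link-dest=/var/backups/www/latest \
              /var/www/ user@backupserver:/var/backups/www/$today/
        ssh user@backupserver "ln -snf /var/backups/www/$today /var/backups/www/latest"

    Rolling back to "version N" is then just copying the dated directory for that day back over /var/www.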

    Read the article

  • Why is Varnish not caching?

    - by Justin
    I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box via name-based vhosts. Before trying to get Varnish to play nice with Drupal, I'm trying to just get Varnish to serve a PNG from cache. Here are the headers I get from a curl -I request of the PNG file:

        HTTP/1.1 200 OK
        Server: Apache/2.2.22 (Ubuntu)
        Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT
        ETag: "a57c2-3850-4cb7ea73db6c0"
        Accept-Ranges: bytes
        Content-Length: 14416
        Cache-Control: max-age=1209600
        Expires: Thu, 25 Oct 2012 22:55:14 GMT
        Content-Type: image/png
        Accept-Ranges: bytes
        Date: Thu, 11 Oct 2012 22:55:14 GMT
        X-Varnish: 1766703058
        Age: 0
        Via: 1.1 varnish
        Connection: keep-alive
        X-Varnish-Cache: MISS

    Here is the Varnish VCL file I'm using (it's a default VCL configuration designed for Drupal):

        # Default backend definition. Set this to point to your content
        # server.
        backend default {
          .host = "127.0.0.1";
          .port = "8080";
        }

        # Respond to incoming requests.
        sub vcl_recv {
          # Use anonymous, cached pages if all backends are down.
          if (!req.backend.healthy) {
            unset req.http.Cookie;
          }

          # Allow the backend to serve up stale content if it is responding slowly.
          set req.grace = 6h;

          # Pipe these paths directly to Apache for streaming.
          #if (req.url ~ "^/admin/content/backup_migrate/export") {
          #  return (pipe);
          #}

          # Do not cache these paths.
          if (req.url ~ "^/status\.php$" ||
              req.url ~ "^/update\.php$" ||
              req.url ~ "^/admin$" ||
              req.url ~ "^/admin/.*$" ||
              req.url ~ "^/flag/.*$" ||
              req.url ~ "^.*/ajax/.*$" ||
              req.url ~ "^.*/ahah/.*$") {
            return (pass);
          }

          # Do not allow outside access to cron.php or install.php.
          #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
          #  # Have Varnish throw the error directly.
          #  error 404 "Page not found.";
          #  # Use a custom error page that you've defined in Drupal at the path "404".
          #  set req.url = "/404";
          #}

          # Always cache the following file types for all users. This list of extensions
          # appears twice, once here and again in vcl_fetch so make sure you edit both
          # and keep them equal.
          if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
            unset req.http.Cookie;
          }

          # Remove all cookies that Drupal doesn't need to know about. We explicitly
          # list the ones that Drupal does need, the SESS and NO_CACHE. If, after
          # running this code we find that either of these two cookies remains, we
          # will pass as the page cannot be cached.
          if (req.http.Cookie) {
            # 1. Append a semi-colon to the front of the cookie string.
            # 2. Remove all spaces that appear after semi-colons.
            # 3. Match the cookies we want to keep, adding the space we removed
            #    previously back. (\1) is first matching group in the regsuball.
            # 4. Remove all other cookies, identifying them by the fact that they have
            #    no space after the preceding semi-colon.
            # 5. Remove all spaces and semi-colons from the beginning and end of the
            #    cookie string.
            set req.http.Cookie = ";" + req.http.Cookie;
            set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
            set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
            set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
            set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

            if (req.http.Cookie == "") {
              # If there are no remaining cookies, remove the cookie header. If there
              # aren't any cookie headers, Varnish's default behavior will be to cache
              # the page.
              unset req.http.Cookie;
            }
            else {
              # If there is any cookies left (a session or NO_CACHE cookie), do not
              # cache the page. Pass it on to Apache directly.
              return (pass);
            }
          }
        }

        # Set a header to track a cache HIT/MISS.
        sub vcl_deliver {
          if (obj.hits > 0) {
            set resp.http.X-Varnish-Cache = "HIT";
          }
          else {
            set resp.http.X-Varnish-Cache = "MISS";
          }
        }

        # Code determining what to do when serving items from the Apache servers.
        # beresp == Back-end response from the web server.
        sub vcl_fetch {
          # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend but
          # definitely in Drupal's case these responses are not cacheable by default.
          if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
            set beresp.ttl = 10m;
          }

          # Don't allow static files to set cookies.
          # (?i) denotes case insensitive in PCRE (perl compatible regular expressions).
          # This list of extensions appears twice, once here and again in vcl_recv so
          # make sure you edit both and keep them equal.
          if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
            unset beresp.http.set-cookie;
          }

          # Allow items to be stale if needed.
          set beresp.grace = 6h;
        }

        # In the event of an error, show friendlier messages.
        sub vcl_error {
          # Redirect to some other URL in the case of a homepage failure.
          #if (req.url ~ "^/?$") {
          #  set obj.status = 302;
          #  set obj.http.Location = "http://backup.example.com/";
          #}

          # Otherwise redirect to the homepage, which will likely be in the cache.
          set obj.http.Content-Type = "text/html; charset=utf-8";
          synthetic {"
        <html>
        <head>
          <title>Page Unavailable</title>
          <style>
            body { background: #303030; text-align: center; color: white; }
            #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; }
            a, a:link, a:visited { color: #CCC; }
            .error { color: #222; }
          </style>
        </head>
        <body onload="setTimeout(function() { window.location = '/' }, 5000)">
          <div id="page">
            <h1 class="title">Page Unavailable</h1>
            <p>The page you requested is temporarily unavailable.</p>
            <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p>
            <div class="error">(Error "} + obj.status + " " + obj.response + {")</div>
          </div>
        </body>
        </html>
        "};
          return (deliver);
        }

    I'm getting a MISS and Age: 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?

    Read the article

  • Distributed Server Monitoring Solution

    - by MaterialEdge
    I belong to an independent IT firm that manages and maintains about 50 business clients' networks, ranging from small 5-system networks to 200+ systems. Because we are unable to directly monitor each server at these locations (distributed over a very large area) on a regular basis, I am looking for a method to monitor and alert us to any problems that arise, so that we can respond quickly with, hopefully, preventative measures. I'm not sure what solutions are available for this type of situation, but something that utilizes a central server at our business, with all client servers sending alerts or logs to it for daily monitoring, might work best. All these servers run a Windows Server OS. In your opinion, what would be the best course of action to accomplish this?

    Read the article

  • Recommend AntiVirus for Plesk 8.6.0 + CentOS 5

    - by cappuccino
    I am using a virtual server on Media Temple running CentOS 5 and Plesk 8.6.0. I have done all their security recommendations and more besides: blocking everything except http and mail, strong passwords, and running Rootkit Hunter daily. But I'm thinking I should run an antivirus of some sort? I'm still new to Linux/CentOS security, so please forgive :)... Can you recommend a good antivirus/antispyware package for CentOS 5 and Plesk 8.6.0? I've been searching for Plesk modules and have come across a few like Kaspersky, but I'm not sure which one to use... Any tips on security would be good too.

    Read the article

  • Select firefox search result

    - by Nicolas C.
    I work on a daily basis with a web application that has very large menus. Since I also do lots of Excel manipulation, copying and pasting, etc., I am quite fond of keyboard shortcuts, which are much faster than using the mouse to point, double-click, and then go back to the keyboard. Hence my question is quite simple: does anyone know of a shortcut in Firefox which would let me actually select (and not just highlight) a search result in the page, so that I can perform the following sequence?

    1. [Ctrl]+[F]
    2. Type the search string, for instance 'regional_unit'
    3. The missing shortcut: actually select in the page the string currently highlighted by FF's search feature
    4. [Space] or [Enter] to activate the web element, which in my case would always be a link or button, etc.

    Maybe there is an addon replacing the default search feature, I don't know... I tried to look over the internet, but with the words I am using for this investigation I don't get relevant search results on Google :(. Thanks a lot

    Read the article
