Search Results

Search found 8128 results on 326 pages for 'daily tips'.


  • Mercurial mirror: abort: No such file or directory: http://[...]/00manifest.i

    - by Sridhar Ratnakumar
    I am trying to set up a daily mirror of a Mercurial repository - code.python.org in particular - within our local network, and serve it via Apache HTTPD. On the remote host that runs Apache, I did this:
      $ cd /var/www
      $ hg clone http://code.python.org/hg/trunk/
    On my MacBook, I ran:
      $ hg -v clone http://remote/trunk/
      (falling back to static-http)
      abort: No such file or directory: http://remote/trunk/.hg/store/00manifest.i
    Google does not show any relevant results for this particular error. I remember being able to set up Bazaar mirrors with a simple clone. Doesn't Mercurial work like that? How do I set up a mirror that can itself be used as a clone URL?
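
    A minimal sketch of one way to keep such a mirror fresh, assuming the server-side clone lives in /var/www/trunk (the path is illustrative). Static-http clones only work if Apache actually serves the .hg directory; serving the mirror through hg serve (or hgweb) instead gives clients a normal clone URL:
      # refresh the mirror once a day via cron, e.g. crontab entry:
      #   30 2 * * *  hg pull -R /var/www/trunk http://code.python.org/hg/trunk/
      hg pull -R /var/www/trunk http://code.python.org/hg/trunk/

      # alternative to static-http: Mercurial's built-in web server
      hg serve -R /var/www/trunk -p 8000 -d --pid-file /var/run/hg-trunk.pid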


  • Are periodic full backups really necessary on an incremental backup setup?

    - by user2229980
    I intend to use an old computer I have as a remote backup server for myself and a few other people. We are all geographically separated, and the plan is to do incremental daily backups using rsync and ssh. My original idea was to make one initial full backup and then never again have to deal with the overhead of doing it; from that moment on, only the files changed since the last backup would be copied. I've been told that this could be bad, but I fail to understand why. Since each snapshot consists of hard links to the unchanged files plus the changed files themselves, isn't it identical to a new full backup? Why would I want to make another full backup?
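
    A minimal sketch of the hard-link snapshot scheme described above, with illustrative paths and host; each dated directory looks like a complete backup, but unchanged files share storage with the previous snapshot through --link-dest:
      #!/bin/sh
      # daily snapshot: changed files are copied, unchanged files are hard-linked to yesterday's copy
      TODAY=$(date +%Y-%m-%d)
      DEST=/backups/mybox
      rsync -a --delete --link-dest="$DEST/latest" \
          user@remotehost:/home/ "$DEST/$TODAY/"   # the very first run is simply a plain full copy
      ln -snf "$DEST/$TODAY" "$DEST/latest"        # point 'latest' at the snapshot just taken
    The usual argument for an occasional fresh full backup is safety rather than space: if a file on the backup disk is silently corrupted, every later snapshot hard-links to that same corrupted copy, and rsync will not notice unless it is run with --checksum.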


  • Wireless Keyboard/Mouse: Should batteries be removed when not in use?

    - by abel
    I recently bought a Logitech wireless keyboard and mouse. I use them almost daily, but only for a couple of hours. The keyboard takes 2 AAA batteries and the mouse takes 1 AA battery. The box says the keyboard has a 24-month battery life and the mouse a 5-month battery life. Should I keep the batteries in the keyboard/mouse when they are not in use? Is it safe? Does the battery life mean 24 months of continuous usage or 24 months of average usage?


  • Viruses - Isn't there any online solution?

    - by Sarang
    In our daily lives we come across various viruses, and on the internet there are many kinds of viruses waiting for us. A programmer can write a virus and release it onto the internet, where it spreads across the world and harms systems everywhere. Isn't there a corresponding way to run an antivirus that spreads across the internet and protects the network from being infected by viruses? Please share any ideas...


  • How to find out why a scheduled task hangs?

    - by Graviton
    I have scheduled quite a number of tasks in Windows 7, doing a variety of cron-style jobs on my machines. I am perfectly sure that these tasks, when run on their own, won't hang. But the problem is that when I run them daily, over the course of days, weeks and months, occasionally some of them will hang. So I am forced to kill the hung task and rerun it, and then it runs successfully! I want to fix this problem once and for all, and for that I need to know why a scheduled task hangs even though it never hangs when I run it separately. Is there any way for me to find out the reason?


  • CA ArcServe r11.1 - have to switch Tape Drive Offline then Online to finish backup

    - by Richard
    I'll keep it brief: I have an HP Ultrium 1 drive in a server currently running CA ArcServe r11.1. I have 5 daily backup tapes, each of which is new. 3 of the 5 work fine without intervention, but 2 of them stop at varying points during the backup and ask for a new tape, even though the current tape is not full. The way I have found around this is to switch the tape drive offline for 10 minutes and then switch it back online while the backup is still running. Has anyone ever seen this before? If so, any ideas on how to fix it permanently? If all else fails, some pointers in the right direction would help. Thanks


  • Data drive disappearing.

    - by Mike Keller
    We have a Windows 2003 R2 server with SP2 that randomly loses a partition. There are two partitions, C: and D: (the one that disappears). When I go into Disk Management the space shows as available on the drive, but it isn't formatted. The two drives are set up in a RAID 1 array. Nothing in the event log stands out as a trigger for the problem, and thank goodness we do daily backups of the data, but it gets annoying to have to go back in, reformat the partition and restore the data. Any places I can poke around to find the cause, or better yet solutions to the problem, would be appreciated.


  • A decent S3 bucket manager for Ubuntu

    - by Luke
    I'm looking for a decent S3 bucket manager for Ubuntu (GNOME). I'd prefer it to integrate with Nautilus so it looks like just another drive (a la WebDAV), but so far I haven't been able to find anything I'd like to use on a daily basis. What bucket managers do you use on Ubuntu, or what bucket manager would you recommend? UPDATE: S3FS seems to be what I really want to use, since it lets me integrate my buckets directly into my file system. However, after trying S3FS I do not get the impression that it's ready for prime time. I'm stunned that there are no decent bucket managers out there for Ubuntu/GNOME; I guess I'll have to build one myself...
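
    For reference, a minimal s3fs-fuse sketch, assuming illustrative bucket and mount-point names and that ~/.passwd-s3fs contains ACCESS_KEY_ID:SECRET_ACCESS_KEY:
      chmod 600 ~/.passwd-s3fs              # s3fs refuses a world-readable credentials file
      mkdir -p ~/s3/my-bucket
      s3fs my-bucket ~/s3/my-bucket -o passwd_file=$HOME/.passwd-s3fs
      fusermount -u ~/s3/my-bucket          # unmount when done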


  • Ubuntu server: Delete first folder in directory

    - by Martin
    How can I grab the first subfolder in a directory and delete it? I found a script that monitors free disk space. It sends an alert email if space runs low, but I also want it to delete some unneeded stuff. I have a backup folder where I save daily and monthly backups. I want to delete the first folder, since it is always the oldest, but I don't know the name of the oldest backup. My folders (leaving out Jan-May and Dec) are:
      06-01 07-01 08-01 09-01 10-01 11-01 Friday Monday Saturday Sunday Thursday Tuesday Wednesday
    How can I delete the first folder, "06-01", without knowing its name?
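
    A minimal sketch of one way to do it, assuming the backups live in /backup/monthly (illustrative path) and the dated folders are named MM-DD, so the lexically first directory is the oldest within a year:
      cd /backup/monthly || exit 1
      oldest=$(find . -mindepth 1 -maxdepth 1 -type d -name '[0-1][0-9]-*' | sort | head -n 1)
      [ -n "$oldest" ] && rm -rf -- "$oldest"
    Sorting by name breaks at the turn of the year (12-01 sorts after the following 01-01); sorting by modification time instead, e.g. ls -1td */ | tail -n 1 inside the folder, avoids that as long as the directories are not touched after creation.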


  • Logrotate, is this a proper config for what I want to do?

    - by Felthragar
    I started using logrotate a few days ago on a new server setup (actually three of them). My config is as follows:
      /var/www/mywebsite.com/logs/*.log {
          rotate 14
          daily
          dateext
          compress
          delaycompress
          sharedscripts
          postrotate
              /usr/sbin/apache2ctl graceful > /dev/null
          endscript
      }
    The problem is that this is putting several days of logs into the same file. For example, I've currently got a file called access.log-20121005 which has logs for Oct 3rd, Oct 4th and Oct 5th in it. Is that proper behaviour? What I want is for it to create one logfile for each day and keep 14 days of logs. Any help appreciated, thanks.
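
    If rotation only happens every few days, the usual suspects are that logrotate itself is not being run daily (cron) or that its state file thinks the logs were already rotated. A minimal troubleshooting sketch, assuming the config above is saved as /etc/logrotate.d/mywebsite (illustrative path):
      logrotate -d /etc/logrotate.d/mywebsite     # dry run: show what would be rotated and why
      logrotate -f /etc/logrotate.d/mywebsite     # force one rotation right now
      grep mywebsite /var/lib/logrotate/status    # when were these files last rotated?
                                                  # (on RHEL-type systems: /var/lib/logrotate.status)
      ls -l /etc/cron.daily/logrotate             # confirm the daily cron job is actually there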


  • Indexing text file content with command line query

    - by Drew Carlton
    I take daily notes in a plain-text file labeled with the date in YYYYMMDD format. These files are no more than 100 lines long and are written in a blog-style format. I'd like to be able to search these files as if they were blog posts indexed by Google, with a phrase query returning the most relevant/recent filenames along with a snippet containing the relevant part. Ideally it would be something like this:
      #searchindex "laptop no sound"
    returns:
      20100909.txt: ... laptop sound isn't working...
      20100101.txt: ... sound is too loud... debating what laptop to buy...
    and so on and so forth. I'm working on a Linux platform (Debian with GNOME). I've looked at beagle and tracker, but they just seem like complete overkill for what I want.
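
    A minimal grep-based sketch that gets part of the way there, assuming the notes live in ~/notes (path and function name are illustrative); it matches a single phrase and lists files newest first rather than doing real relevance ranking:
      searchnotes() {
          # -l lists matching files; since names are YYYYMMDD.txt, a reverse sort puts newest first
          grep -lis -- "$1" ~/notes/*.txt | sort -r | while read -r f; do
              printf '%s: %s\n' "$(basename "$f")" "$(grep -im1 -- "$1" "$f")"
          done
      }
      searchnotes "laptop"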


  • syslog ip ranges to specific files using `rsyslog`

    - by Mike Pennington
    I have many Cisco / JunOS routers and switches that send logs to my Debian server, which uses rsyslogd. How can I configure rsyslogd to send these router / switch logs to a specific file based on their source IP address? I do not want to pollute the general system logs with these entries. For instance: all routers in Chicago (source IP block 172.17.25.0/24) should only log to /var/log/net/chicago, and all routers in Dallas (source IP block 172.17.27.0/24) should only log to /var/log/net/dallas. Finally, these logs should be rotated daily, kept for up to 30 days, and compressed. NOTE: I am answering my own question
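
    A minimal sketch of the rsyslog side using classic property-based filters, plus matching rotation; the file names are illustrative, and remote reception itself still has to be enabled (imudp/imtcp):
      /etc/rsyslog.d/30-network-devices.conf:
          # route by source address, then discard so the messages never reach the general logs
          :fromhost-ip, startswith, "172.17.25." -/var/log/net/chicago
          & ~
          :fromhost-ip, startswith, "172.17.27." -/var/log/net/dallas
          & ~
      /etc/logrotate.d/network-devices:
          /var/log/net/chicago /var/log/net/dallas {
              daily
              rotate 30
              compress
              delaycompress
              missingok
              notifempty
          }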


  • SQL Server Agent jobs and turning off the server

    - by Tim Joseph
    I'm really new to SQL Agent jobs, but I am attempting to build up a maintenance regime for a server that will be turned off and on again at unknown intervals. It may run without being shut down for a month, or it might only be turned on 9-5... we don't know, and the client can't tell us because they don't know. So what I'm wondering is: what do I need to do to get SQL Server to run monthly and daily jobs either when they are due or, if the due date is missed, when the server is next powered on? I could come up with a mish-mash of periodic jobs and 'on-power-up' jobs, but if there is something more elegant that would be wonderful. Obviously I'll need to ensure the SQL Server Agent is configured to start when the computer is powered up, but what else?


  • Altering policies in Policy-Based Management to look only at events which happened in the last 24 hours

    - by Manjot
    Hi, I am using SQL Server 2008 Standard Edition with Policy-Based Management and the policies which come with SQL Server during installation. I want the policies to only look at events that happened in the last 24 hours. For example, for the "Windows Event Log System Failure Error" policy, if the system restarted unexpectedly 5 days ago, I don't want to be alerted about it daily. Is there any way I can restrict a policy to look only at events which happened in the last 24 hours and nothing older? Any help please? Thanks in advance.


  • dual/multi-boot computers and software licensing

    - by Matt
    Suppose you have a computer with two or more operating systems, and a certain piece of software whose license terms allow it to be installed on one computer, and which does a daily check with a remote server to verify that your serial is only used on the original install computer. You install this software on each of your OSes, but since each is a different OS, the remote server would presumably determine that it is not on the same computer and so would disable your license. So my question: when a license refers to a single computer, does a situation like this usually count as a single computer, or do the multiple OSes effectively make it multiple computers? How do you think a software vendor (I'm specifically thinking of AV companies that do this sort of serial check) would handle this situation?


  • Remap Apple MacBook Eject key in Windows?

    - by user1238528
    I have a MacBook with Windows 7 on it as my daily driver. My MacBook has a nearly useless Eject key, and I wish it were a forward-delete key. KeyRemap4MacBook works great in OS X. Is there any equivalent software for Windows? I have tried KeyTweaks and HotKeys, and neither of them will recognize the Eject key. I looked it up and I think it is key 161. Is there any way to make the key into a more useful forward delete? Could I just go into the registry and do it that way?


  • Script for checking for accounts with no recent logins and then disabling them

    - by suma
    "Could you please share the scripts which does the below ?" I have written a script that scans all the relevent logs daily, makes a list of people that have had any activity that day, and maintains database (just a text file) of users and the last time they logged in. Then I have a second script that examines the database for dates more than x days ago, an notifies the user and administrator 2 weeks prior to locking the account. And if there are any dates more than x+y days ago, deletes the account altogether. This seems to be working for me - but I would like to use a non-proprietary solution if one is available. "Could you please share the scripts?"


  • Simplest way to shrink transaction log files on a mirrored production database

    - by MGOwen
    What's the simplest way to shrink the transaction log file on a mirrored production database? I have to, as my disk space is running out. I will make a full database backup before I do this, so I don't need to keep anything from the transaction log (right? I have daily full database backups and will probably never need a point-in-time restore, though I'll keep the option open if I can - that's all the .ldf is really for, correct?). (Hope this isn't an exact duplicate; I read a lot of questions but couldn't find this exact scenario elsewhere.)


  • SPF include: too many IP addresses

    - by sprezzatura
    I've hit a snag with SPF. The SPF record for my domain will contain four or five entries, plus it will contain: include:sgizmo.com The SPF record for sgizmo.com contains eleven entries! This, plus mine, is way over the maximum ten allowed by the RFC (and probably by most servers). I realize that there has to be a limit in order to prevent DoS attacks. However, in the real world, it is probably not unreasonable for large companies to have many server addresses. Furthermore, must I now monitor my 'include:' counterparts for changes and additions? Must I check weekly, or daily, to ensure that some combination of changes doesn't suddenly put me over the top? It doesn't seem to me that SPF is suitable for prime time. Is there another way to do this?
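
    One thing worth checking first: the RFC's limit of ten applies to DNS-querying mechanisms (include, a, mx, ptr, exists), while plain ip4:/ip6: terms cost nothing, so an include made up of IP addresses may be cheaper than it looks. A small sketch for inspecting the included record (domain taken from the question):
      dig +short txt sgizmo.com
      # count only the nested includes, which do trigger extra lookups
      dig +short txt sgizmo.com | tr ' ' '\n' | grep -c '^include:'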


  • Security measures for CentOS

    - by cappuccinodrinker
    I have been tightening up my web server security and wanted to know what else I can do. I am running CentOS 5 with these measures:
    - All passwords for FTP, MySQL etc. are generated from grc.com/passwords.htm and microsoft.com/protect/fraud/passwords/create.aspx (for the ones which cannot be too long).
    - Running iptables with all ports shut off except for HTTP, mail and SMTP; the important ports like FTP and SSH are blocked to all except my static office IP. There is also no response to pings.
    - Rootkit Hunter running daily.
    - The server is PCI compliant according to Comodo.
    - Not running any badly made PHP apps; we use Zend Framework for our own code, have Kayako installed, and keep them up to date.
    I can't really think of anything else I can do... I could implement a brute-force measure, but I think I already have by simply changing my SSH port to a number above 10000 and blocking it off with iptables.
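
    If SSH ever needs to be open more widely than the office IP, a connection-rate throttle is easy to add with iptables' recent module. A minimal sketch, assuming SSH listens on port 22022 (illustrative) and that these rules sit before the rule that accepts the port; tools like fail2ban or denyhosts do the same job by watching the logs:
      # drop any source that opens 5 or more new SSH connections within 60 seconds
      iptables -A INPUT -p tcp --dport 22022 -m state --state NEW -m recent --name ssh --set
      iptables -A INPUT -p tcp --dport 22022 -m state --state NEW -m recent --name ssh \
          --update --seconds 60 --hitcount 5 -j DROP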


  • CentOS 6 - YUM Local Repo - Ensure consistent package distribution

    - by Mike Purcell
    I've read a few guides outlining how to setup a local YUM repo, but none of them explicitly stated an answer to my question; If I set up a local YUM repo, does that mean that any CentOS servers which pull from said repo will never be "ahead" of the local YUM repo? I want to ensure a consistent package distribution across all my servers. Right now, when I do a yum update, even on a daily basis, the servers can be out of alignment. For example if I run YUM update on my dev server in the morning, then run YUM update on one of my production servers in the afternoon, the production server may have picked up a new version of a package that the dev server did not pick up, due to the time window between the update commands. Rather, I'd prefer that I run yum update from my dev server which has access to remote upstream yum repos, then let it sit for 2 weeks, after which I run yum update on my production servers against the local repo on my dev server.
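
    That is essentially what a snapshot-style local repo gives you: servers that only point at the local repo can never be ahead of whatever was last synced into it. A minimal sketch of building it on the dev box, assuming httpd already serves /var/www/html and the stock 'base' and 'updates' repo ids (all names illustrative):
      yum -y install yum-utils createrepo       # reposync ships in yum-utils
      for repo in base updates; do
          reposync --repoid="$repo" --download_path=/var/www/html/repo/
          createrepo /var/www/html/repo/"$repo"/
      done
      # production servers then use a .repo file with baseurl=http://dev-server/repo/base/
      # (and .../updates/), with the upstream repos disabled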


  • Does Windows incremental backup include a system state backup?

    - by Kossel
    I'm managing my very small office server, which runs Windows Server 2008. Since I have only one server and the user group is really small, I split the first HDD into 2 partitions: one (C:) for Windows and Active Directory, another (D:) for Tomcat and the database. I'm doing incremental backups of C: and D: daily to a second HDD (E:) using Windows Server Backup. Is that enough to let me fully restore my server in case of disaster? I ask because I have read that there is also a system state backup; do I also have to do that periodically in order to get AD back, or can I do a full bare-metal recovery from the incremental/full backups alone?


  • Disable or sleep secondary HDD in Macbook

    - by cpak
    I've done some quick Googling but didn't find an answer. I've put an SSD in my MacBook and at the same time moved the original HDD into the optical drive bay. I'm running the OS and most of my daily apps off the SSD, so the HDD is really just for storing stuff I need now and then. Now I'd like to disable (as in power off or "force sleep") the HDD when I don't need it. I tried unmounting the disk using diskutil unmountDisk, but it kept spinning for something like 10 minutes. Maybe that's to be expected, but I'd imagined it would stop instantly on unmount. Also, it would be nice to have it disabled by default and only mount it (= power it on) when I need it. Grateful for any input on this!
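
    A small sketch of the usual knobs, assuming the HDD shows up as disk1 (check with diskutil list); unmounting by itself does not force a spin-down, it just leaves the disk idle so the disksleep timer can take effect:
      sudo pmset -a disksleep 5       # let idle disks spin down after 5 minutes
      diskutil unmountDisk disk1      # unmount every volume on the HDD
      diskutil mountDisk disk1        # bring it back when needed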

