Search Results

Search found 18475 results on 739 pages for 'log diff'.

  • Upgrading log shipping from 2005 to 2008 or 2008R2

    - by DavidWimbush
    If you're using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server because you can continue to log ship from 2005 to 2008R2. But this won't work if you're keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you're using native log shipping you'll have some work to do. If you've rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:

        restore database [dw] with norecovery;

    and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!) You can either upgrade the secondary server in place or rebuild it. A secondary database doesn't actually get upgraded until you recover it, so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database, so you need to factor that into the expectations you give people about how long it will take to fail over. For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx
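
    A minimal sketch of that conversion run from the shell, assuming sqlcmd is available and a trusted connection; the server name is a placeholder, and the subsequent RESTORE LOG jobs would simply swap WITH STANDBY for WITH NORECOVERY:

        # placeholder server name; flips the standby secondary to restoring-only so shipping can continue
        sqlcmd -S SECONDARYSERVER -E -Q "RESTORE DATABASE [dw] WITH NORECOVERY;"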

    Read the article

  • Log PHP errors in Ubuntu

    - by resting
    I followed the setup here: Where is the PHP error log. When I look into /var/log/php_errors.log, I can see some PHP errors:

        PHP Warning: file_get_contents(/var/www/...): failed to open stream: No such file or directory in ...

    But what I'm trying to see is the error produced when I remove a semicolon from a statement. The warning above has no relation to the file from which I removed the semicolon, so we can just ignore it. When I access the page with the removed semicolon, I get:

        The website encountered an error while retrieving https://myapp/download/decode/testfile.
        It may be down for maintenance or configured incorrectly.
        HTTP Error 500 (Internal Server Error): An unexpected condition was encountered while the server was attempting to fulfill the request.

    But there is nothing in /var/log/php_errors.log. How do I see the error that usually says which line and which file the process failed on? The real reason for wanting to see the error is that I have a very large loop that throws the HTTP 500 error and I can't see the exact error; I'm just simulating it with a removed semicolon to test things out.

    Other settings:

        error_reporting = E_ALL & ~E_DEPRECATED
        display_errors = On

    This is on Ubuntu 10.04.4 LTS.

    Update: I managed to get the error message to display:

        Parse error: syntax error, unexpected T_IF in ...

    However, it's still not logged. It wasn't displaying previously because CakePHP's debug level was at 0. Setting it to 2 displays the message, but nothing is logged.
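
    A hedged note: a parse error in the requested file is a compile-time failure, so it bypasses anything that file sets at runtime and lands wherever the SAPI's error_log points (often Apache's own error log), and only if log_errors is enabled. A quick sketch for Ubuntu 10.04 with Apache and mod_php; the script path is a placeholder:

        php -l /var/www/myapp/controllers/download.php    # lint reports the parse error and line number up front
        tail -n 50 /var/log/apache2/error.log             # parse errors often end up in the web server's log
        grep -E '^(log_errors|error_log|display_errors)' /etc/php5/apache2/php.ini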

    Read the article

  • What is O(n log n) or O(n log(log n))

    - by Mark Tomlin
    What does the O mean, if indeed it is an "oh" (as in the letter O) and not the number zero (0)? I think the n would be a number, but I'm not sure, as I'm not a 'real' computer programmer, just a hobbyist. And log would be the logarithmic function, but I only know that because people smarter than I am have told me so, without ever really explaining what a logarithm is. So please, in plain English, explain what this is, and the differences between the two (such as their applications).
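
    As a rough illustration of the log part: log2(n) is just how many times n can be halved before it reaches 1, which is why algorithms that repeatedly split their input in half (binary search, merge sort) carry a log n factor. A tiny shell sketch of that counting idea:

        # how many times can you halve n before hitting 1? that count is log2(n)
        n=1048576   # 2^20 items
        steps=0
        while [ "$n" -gt 1 ]; do
            n=$((n / 2))
            steps=$((steps + 1))
        done
        echo "halvings: $steps"   # prints 20, i.e. log2 of 1,048,576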

    Read the article

  • How to use bzdiff to find the difference between 2 bzipped files with the diff -I option?

    - by englebip
    I'm trying to do a diff on MySQL dumps (created with mysqldump and piped to bzip2) to see if there are changes between consecutive dumps. The following are the tails of two dumps:

    tmp1:

        /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
        /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
        /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
        /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
        /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
        /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
        /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
        -- Dump completed on 2011-03-11 1:06:50

    tmp2:

        /*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
        /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
        /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
        /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
        /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
        /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
        /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
        -- Dump completed on 2011-03-11 0:40:11

    When I bzdiff their bzipped versions:

        $ bzdiff tmp?.bz2
        10c10
        < -- Dump completed on 2011-03-11 1:06:50
        ---
        > -- Dump completed on 2011-03-11 0:40:11

    According to the manual of bzdiff, any option passed to bzdiff is passed on to diff. I therefore looked at the -I option, which allows one to define a regexp; lines matching it are ignored in the diff. When I try:

        $ bzdiff -I'Dump' tmp1.bz2 tmp2.bz2

    I get an empty diff. I would like to match as much as possible of the "Dump completed" line, though, but when I then try:

        $ bzdiff -I'Dump completed' tmp1.bz2 tmp2.bz2
        diff: extra operand `/tmp/bzdiff.miCJEvX9E8'
        diff: Try `diff --help' for more information.

    The same thing happens for some variations:

        $ bzdiff '-IDump completed' tmp1.bz2 tmp2.bz2
        $ bzdiff '-I"Dump completed"' tmp1.bz2 tmp2.bz2
        $ bzdiff -'"IDump completed"' tmp1.bz2 tmp2.bz2

    If I diff the un-bzipped files there is no problem:

        $ diff -I'^[-][-] Dump completed on' tmp1 tmp2

    also gives an empty diff. bzdiff is a shell script usually placed in /bin/bzdiff. Essentially, it parses the command options and passes them on to diff as follows:

        OPTIONS=
        FILES=
        for ARG
        do
          case "$ARG" in
          -*)
            OPTIONS="$OPTIONS $ARG";;
          *)
            if test -f "$ARG"; then
              FILES="$FILES $ARG"
            else
              echo "${prog}: $ARG not found or not a regular file"
              exit 1
            fi ;;
          esac
        done
        [...]
        bzip2 -cdfq "$1" | $comp $OPTIONS - "$tmp"

    I think the problem stems from escaping the spaces when passing $OPTIONS to diff, but I couldn't figure out how to get it interpreted correctly. Any ideas?

    EDIT @DerfK: Good point with the ., I had forgotten about them... I tried the suggestion with the multiple levels of quotes, but that is still not recognized:

        $ bzdiff "-I'\"Dump.completed.on\"'" tmp1.bz2 tmp2.bz2
        diff: extra operand `/tmp/bzdiff.Di7RtihGGL'
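
    The likely culprit is that bzdiff builds $OPTIONS as a plain string and later expands it unquoted, so any option containing a space is word-split and its second word becomes an extra file operand. Two hedged workarounds: keep the regexp free of literal spaces, or skip bzdiff and hand the decompressed streams to diff directly (file names as above):

        # workaround 1: no literal space in the -I argument, so bzdiff's word splitting can't break it
        bzdiff -I'Dump.completed.on' tmp1.bz2 tmp2.bz2

        # workaround 2: bypass bzdiff entirely and let diff read the decompressed streams
        diff -I '^-- Dump completed on' <(bzcat tmp1.bz2) <(bzcat tmp2.bz2)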

    Read the article

  • How to scroll the diff buffer easily in Emacs while point is on the minibuffer

    - by RamyenHead
    In Emacs, after a lot of editing, I press C-x s (save-some-buffers), then Emacs asks "Save file ...? (y,n,.... d ...)" for each file, I sometimes answer d (diff) to see the changes, but then it's not easy to scroll the diff buffer because the cursor is on the minibuffer. Scrollbar does not work. C-M-v works, but if I try to back-scroll by pressing C-M-- C-M-v, Emacs just says "Type C-h for help". How do I scroll the diff buffer in such cases?

    Read the article

  • How to ignore moved lines in a diff

    - by klickverbot
    I am currently working on a source code generation tool. To make sure that my changes do not introduce any new bugs, a diff between the output of the program before and after my changes would theoretically be a valuable tool. However, this turns out to be harder than one might think, because the tool outputs lines whose order does not matter (like import statements, function declarations, …) in a semi-randomly ordered way. Because of this, the output of diff is cluttered with many changes that are in fact only lines moved to another position in the same file. Is there a way to make diff ignore these moves and only output the lines that have really been added or removed?
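
    One hedged approach when line order genuinely doesn't matter: compare the two outputs as sorted sets of lines, so pure moves cancel out and only real additions or removals remain. The file names below are placeholders:

        # moved lines disappear once both sides are sorted; only genuine adds/removes survive
        diff <(sort before.txt) <(sort after.txt)

        # comm gives the same information in two columns: lines only in before, lines only in after
        comm -3 <(sort before.txt) <(sort after.txt)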

    Read the article

  • How can I back up my ubuntu system?

    - by Eloff
    I'm sure there's a lot of questions on here similar to this, and I've been reading them, but I still feel this warrants a new question. I want nightly, incremental backups (full disk images would waste a lot of space - unless compressed somehow.) Preferably rotating or deleting old backups when running out of space or after a fixed number of backups. I want to be able to quickly and painlessly restore my system from these backups. This is my first time running ubuntu as my main development machine and I know from my experience with it as a server and in virtual machines that I regularly manage to make it unbootable or damage it to the point of being unable to rescue it. So how would you recommend I do this? There are so many options out there I really don't know where to start. There seems to be a vocal school of thought that it's sufficient to backup your home directory and the list of installed packages from the package manager. I've already installed lots of things from source, or outside of the package manager (development tools, ides, compilers, graphics drivers, etc.) So at the very least, if I do not back up the operating system itself I need to grab all config files, all program binaries, all created but required files, etc. I'd rather backup too much than too little - an ubuntu install is tiny anyway. Also this drastically reduces the restore time, which would cost me more in my time than the extra storage space. I tried using Deja Dup to backup the root partition, excluding some things like /mnt /media /dev /proc etc. Although many websites assured me you can backup a running linux system this way - that seems to be false as it complained that it could not backup the following files: /boot/System.map-3.0.0-17-generic /boot/System.map-3.2.0-22-generic /boot/vmcoreinfo-3.0.0-17-generic /boot/vmlinuz-3.0.0-17-generic /boot/vmlinuz-3.2.0-22-generic /etc/.pwd.lock /etc/NetworkManager/system-connections/LAN Connection /etc/apparmor.d/cache/lightdm-guest-session /etc/apparmor.d/cache/sbin.dhclient /etc/apparmor.d/cache/usr.bin.evince /etc/apparmor.d/cache/usr.lib.telepathy /etc/apparmor.d/cache/usr.sbin.cupsd /etc/apparmor.d/cache/usr.sbin.tcpdump /etc/apt/trustdb.gpg /etc/at.deny /etc/ati/inst_path_default /etc/ati/inst_path_override /etc/chatscripts /etc/cups/ssl /etc/cups/subscriptions.conf /etc/cups/subscriptions.conf.O /etc/default/cacerts /etc/fuse.conf /etc/group- /etc/gshadow /etc/gshadow- /etc/mtab.fuselock /etc/passwd- /etc/ppp/chap-secrets /etc/ppp/pap-secrets /etc/ppp/peers /etc/security/opasswd /etc/shadow /etc/shadow- /etc/ssl/private /etc/sudoers /etc/sudoers.d/README /etc/ufw/after.rules /etc/ufw/after6.rules /etc/ufw/before.rules /etc/ufw/before6.rules /lib/ufw/user.rules /lib/ufw/user6.rules /lost+found /root /run/crond.reboot /run/cups/certs /run/lightdm /run/lock/whoopsie/lock /run/udisks /var/backups/group.bak /var/backups/gshadow.bak /var/backups/passwd.bak /var/backups/shadow.bak /var/cache/apt/archives/lock /var/cache/cups/job.cache /var/cache/cups/job.cache.O /var/cache/cups/ppds.dat /var/cache/debconf/passwords.dat /var/cache/ldconfig /var/cache/lightdm/dmrc /var/crash/_usr_lib_x86_64-linux-gnu_colord_colord.102.crash /var/lib/apt/lists/lock /var/lib/dpkg/lock /var/lib/dpkg/triggers/Lock /var/lib/lightdm /var/lib/mlocate/mlocate.db /var/lib/polkit-1 /var/lib/sudo /var/lib/urandom/random-seed /var/lib/ureadahead/pack /var/lib/ureadahead/run.pack /var/log/btmp /var/log/installer/casper.log /var/log/installer/debug /var/log/installer/partman 
/var/log/installer/syslog /var/log/installer/version /var/log/lightdm/lightdm.log /var/log/lightdm/x-0-greeter.log /var/log/lightdm/x-0.log /var/log/speech-dispatcher /var/log/upstart/alsa-restore.log /var/log/upstart/alsa-restore.log.1.gz /var/log/upstart/console-setup.log /var/log/upstart/console-setup.log.1.gz /var/log/upstart/container-detect.log /var/log/upstart/container-detect.log.1.gz /var/log/upstart/hybrid-gfx.log /var/log/upstart/hybrid-gfx.log.1.gz /var/log/upstart/modemmanager.log /var/log/upstart/modemmanager.log.1.gz /var/log/upstart/module-init-tools.log /var/log/upstart/module-init-tools.log.1.gz /var/log/upstart/procps-static-network-up.log /var/log/upstart/procps-static-network-up.log.1.gz /var/log/upstart/procps-virtual-filesystems.log /var/log/upstart/procps-virtual-filesystems.log.1.gz /var/log/upstart/rsyslog.log /var/log/upstart/rsyslog.log.1.gz /var/log/upstart/ureadahead.log /var/log/upstart/ureadahead.log.1.gz /var/spool/anacron/cron.daily /var/spool/anacron/cron.monthly /var/spool/anacron/cron.weekly /var/spool/cron/atjobs /var/spool/cron/atspool /var/spool/cron/crontabs /var/spool/cups
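
    One common pattern for this (a hedged sketch, to be run as root; the backup destination and excludes are assumptions) is an rsync snapshot with hard links into the previous run, so unchanged files cost no extra space and each night's directory is still a full, browsable tree:

        # nightly snapshot: unchanged files are hard-linked to yesterday's copy, changed files are stored anew
        TODAY=$(date +%F)
        rsync -aAXH --delete \
          --exclude='/dev/*' --exclude='/proc/*' --exclude='/sys/*' --exclude='/tmp/*' --exclude='/run/*' \
          --exclude='/mnt/*' --exclude='/media/*' --exclude='/lost+found' \
          --link-dest=/backup/latest / "/backup/$TODAY/"
        ln -sfn "/backup/$TODAY" /backup/latest

        # pruning: keep only the newest 14 snapshots (ISO dates sort chronologically)
        ls -1d /backup/20* | head -n -14 | xargs -r rm -rf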

    Read the article

  • Extending Perforce to use a custom content diff tool for certain file extensions

    - by Fraser Graham
    I have various custom binary files stored in Perforce, and for many of the file types I have built a custom diff tool to show the content creators a diff of the actual changes to the file. E.g. if the file holds simple key-value pairs as a compressed binary blob, the diff tool loads each version into an in-memory format and generates a list of additions, deletions and edits to the file, presented in a nice clean report view. Much like the built-in image diff tool in P4V, I'd like to be able to use my own diff tool for certain file extensions within my depot and allow the users to use the existing P4V interface to pick revisions to diff between and examine history. I am aware that you can write add-ins for P4V, but I can't find any documentation on it. Is this kind of extension functionality available in P4V, and how do I use it?
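
    A hedged partial answer: the command-line client honours the P4DIFF environment variable, so a custom tool that accepts two file paths can at least be wired into p4 diff; the tool path below is a placeholder. P4V's Preferences > Diff dialog also appears to allow per-extension external diff applications, though that is from memory rather than documentation.

        # the command-line client calls this instead of its built-in diff;
        # the tool must accept two file arguments (depot revision, workspace file)
        export P4DIFF=/usr/local/bin/binary-kv-diff   # placeholder: your custom diff tool
        p4 diff //depot/assets/settings.bin           # client-side diff of workspace file vs have revision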

    Read the article

  • AWStats: cannot access /var/log/apache2/access.log

    - by Joril
    I installed awstats on my new Ubuntu Lucid server, but when cron tries to run it as user www-data, it complains that it cannot access /var/log/apache2/access.log: Permission denied. In /usr/share/doc/awstats/README.Debian there's this paragraph:

        By default Apache stores (since version 1.3.22-1) logfiles with uid=root and gid=adm, so you need to either...

        1) Change the rights of the logfiles in /etc/logrotate.d/apache so that www-data has at least read access.

        2) As 1) but change to a specific user, and use the suEXEC feature of Apache to run as same user (and either change the right of /var/lib/awstats as well or use another directory). This is more complicated, but then the logs are not generally accessible to the server (which was probably the point of the Apache default).

        3) Change awstats.pl to group adm (but beware that you are then taking the risk of allowing a CGI-script access to admin stuff on the machine!).

    I'd go with 1, but what are the recommended permissions to grant?
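
    A hedged sketch of option 1: give www-data read access through the adm group rather than making the logs world-readable, and check that logrotate keeps creating new logs group-readable by adm:

        # put the web user into the group that already owns the Apache logs
        sudo usermod -aG adm www-data

        # future logs: the 'create' directive in /etc/logrotate.d/apache2 should read something like
        #   create 640 root adm
        sudo grep -n 'create' /etc/logrotate.d/apache2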

    Read the article

  • “File does not exist” in Apache error log when mod_rewrite is in use

    - by Nithin
    I am getting the error below in the server log when rewriting URLs:

        [Fri Jan 25 11:32:57 2013] [error] [client ***IP***] File does not exist: /home/testserver/public_html/testing/flats-in-delhi-for-sale, referer: http://domain.in/testing/flats-in-delhi-for-sale/

    I searched everywhere but have not found any solution. My .htaccess config is given below:

        Options +FollowSymLinks
        Options All -Indexes
        ErrorDocument 404 http://domain.in/testing/404.php

        RewriteEngine On

        #Category Link
        RewriteRule ^([a-zA-Z]+)-in-([a-zA-Z]+)-([a-zA-Z-]+)/?$ view-category.php?type=$1&dis=$2&cat=$3 [NC,L]

        #Single Property Link
        RewriteRule ^([a-zA-Z]+)-in-([a-zA-Z]+)-([a-zA-Z-]+)/([a-zA-Z0-9-]+)/?$ view-property.php?type=$1&district=$2&category=$3&title_alias=$4 [NC,L]

    I also found a similar, old question, but no answer :( (http://webmasters.stackexchange.com/questions/16606/file-does-not-exist-in-apache-error-log). Thanks in advance for your help.

    PS: My site is working fine even though the Apache log is showing the error. — Nithin
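
    Whether or not it explains this particular log entry, a common guard (a hedged sketch, shown merged into the existing category rule) is to skip rewriting whenever the request maps to a real file or directory:

        RewriteEngine On
        # guard: only rewrite when the request is not an existing file or directory
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([a-zA-Z]+)-in-([a-zA-Z]+)-([a-zA-Z-]+)/?$ view-category.php?type=$1&dis=$2&cat=$3 [NC,L]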

    Read the article

  • Loss of privileges when enabling auto log-in with an encrypted home folder

    - by reav
    I use Ubuntu 11.10 with GNOME Shell and have an encrypted home folder. I enabled auto log-in through the System Settings/users-admin menu; as I expected, it didn't work (because of my encrypted home folder/user, I suspect). But now I don't have the privileges to mount my external hard drive, and I can no longer disable the auto log-in function, since the unlock button in the users-admin menu is greyed out. It seems like my user's privileges have been degraded. Does anyone have a solution for how to disable auto log-in and regain my privileges?
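
    A hedged sketch for turning auto log-in back off from a terminal: on 11.10 the display manager is normally LightDM, and automatic login lives in /etc/lightdm/lightdm.conf, so if sudo still works this sidesteps the greyed-out unlock button:

        # comment out the autologin line and reboot
        sudo sed -i 's/^autologin-user=/#autologin-user=/' /etc/lightdm/lightdm.conf
        grep -n 'autologin' /etc/lightdm/lightdm.conf   # confirm it is now commented out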

    Read the article

  • How to change the logrotate extension?

    - by Jayakrishnan T
    Hi all, currently my logrotate configuration adds a single number after the rotated log file: mylogfile.log is rotated to mylogfile.log.1. I would like to change the extension to mylogfile.log.<current date> instead. Does anyone know a way to do this? My logrotate configuration is:

        /usr/local/jboss/jboss-3.2.7-ND1/server/default/log/consolelog.log {
            copytruncate
            rotate 1
            missingok
            notifempty
        }

    Currently I am renaming the rotated file with a script. Is there any option to change the extension in the logrotate configuration itself? Please help me.
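
    logrotate can do this itself via its dateext option (hedged: the dateformat directive needs a reasonably recent logrotate, roughly 3.7.7 or later), which would make the rename script unnecessary:

        /usr/local/jboss/jboss-3.2.7-ND1/server/default/log/consolelog.log {
            copytruncate
            rotate 1
            missingok
            notifempty
            dateext              # rotated file becomes consolelog.log-YYYYMMDD instead of consolelog.log.1
            dateformat -%Y%m%d   # optional: adjust the date suffix format
        }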

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding transaction log backups in SQL Server 2008. I currently take full backups once a week (Sunday) and transaction log backups daily. I put the full backup in folder1 on Sunday, and then on Monday I also put the first transaction log backup in the same folder. On Tuesday, before I take the second transaction log backup, I move the first one from folder1 into folder2, then take the second transaction log backup and put it in folder1. The same happens on Wednesday, Thursday and so on. Basically, folder1 always holds the latest full backup and the latest transaction log backup, while the older transaction log backups are in folder2. My question is: when SQL Server is about to take, let's say, the fourth (Thursday) transaction log backup, does it look for the previous transaction log backups (first, second and third) so that the new backup only includes the transactions since the last backup, or does it have some other way of knowing whether there are other transaction log backups? I am asking because all my transaction log backups seem to be about the same size, and I thought their size would depend on the amount of transactions since the last transaction log backup. Can anyone explain if my assumptions are right? Thanks...
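
    For what it's worth, SQL Server never looks at the backup files in your folders: the log chain is tracked inside the database and recorded in msdb, and each log backup contains everything since the previous log backup no matter where that earlier file was moved. A hedged sketch for inspecting the chain (instance and database names are placeholders); consecutive log backups should show each first_lsn equal to the previous last_lsn:

        # list the most recent log backups with their LSN ranges and sizes
        sqlcmd -S . -E -Q "SELECT TOP (7) backup_start_date, first_lsn, last_lsn, backup_size FROM msdb.dbo.backupset WHERE database_name = 'YourDb' AND type = 'L' ORDER BY backup_start_date DESC;"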

    Read the article

  • Vim - show diff on commit in Mercurial

    - by JackLeo
    In my .hgrc I can provide an editor or a command to launch an editor with options on commit. I want to write a method or alias that launches $ hg ci and not only opens the commit message in Vim, but also splits the window and prints the output of $ hg diff there. I know that I can give parameters to Vim by using the +{command} option. So launching $ vim "+vsplit" does the split, but any other options go to the first opened window. So I assume I need a specific function, yet I have no experience in writing my own Vim scripts. The script should:

        1) Open a new vertical split with an empty buffer (with vnew, possibly)
        2) In the empty buffer, launch :.!hg diff
        3) Set the empty buffer's file type to diff (:set ft=diff)

    I've written such a function:

        function! HgCiDiff()
            vnew
            :.!hg diff
            set ft=diff
        endfunction

    And in .hgrc I've added the option:

        editor = vim "+HgCiDiff()"

    It kind of works, but I would like the split window to be on the right side (it currently opens on the left) and the Mercurial message to be the focused window. Also, :wq could be set up as a temporary shortcut for :wq<CR>:q! (assuming the Mercurial message is focused). Any suggestions to make this a bit more useful and less chunky?

    UPDATE: I found the Vim split guide, so changing vnew to rightbelow vnew opens the diff on the right side.

    Read the article

  • Stairway to Transaction Log Management in SQL Server, Level 1: Transaction Log Overview

    The transaction log is used by SQL Server to maintain data consistency and integrity. If the database is not using the SIMPLE recovery model, the log can also be used in an appropriate backup regime to restore the database to a point in time.

    Read the article

  • How to generate a unified diff in Ruby?

    - by jstayton
    After reading through this question about Ruby diff packages, I'm still not sure how to generate a unified diff from two text files. I'm not having trouble reading each file into a string (IO.read()), but I'm not finding any package that can generate a unified diff. Does one exist? Is doing a system call to diff even an option I should consider? (I'm thinking no.) Any help is appreciated! Thanks.
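
    A hedged sketch of two routes: the pure-Ruby diff-lcs gem ships an ldiff executable which, if memory of its switches serves, mirrors GNU diff's -u for unified output; and shelling out to the system diff is also workable when it is guaranteed to be present. File names are placeholders:

        # pure-Ruby route: diff-lcs installs an 'ldiff' command
        gem install diff-lcs
        ldiff -u old.txt new.txt > changes.patch

        # or simply lean on the system diff if the environment allows it
        diff -u old.txt new.txt > changes.patch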

    Read the article

  • Is it possible to have all "git diff" commands use the "Python diff", in all git projects?

    - by EOL
    When including the line *.py diff=python in a local .gitattributes file, git diff produces nice labels for the different diff hunks of Python files (with the name of the function where the changed lines are located, etc.). Is it possible to ask git to use this diff mode for all Python files across all git projects? I tried to set up a global ~/.gitattributes, but it is not used by local git repositories. Is there a more convenient method than initializing each new git project with ln -s ~/.gitattributes?
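
    A hedged sketch: git will not read ~/.gitattributes on its own, but it can be pointed at a global attributes file through core.attributesfile, which avoids the per-repository symlink:

        # tell git about a user-wide attributes file, then declare the python diff driver there once
        git config --global core.attributesfile ~/.gitattributes
        echo '*.py diff=python' >> ~/.gitattributes
        git check-attr diff some_module.py   # run inside any repo; should report: diff: python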

    Read the article

  • Activity log manager is not preventing Zeitgeist from logging files

    - by Vivek
    I am running GNOME Shell and I do not like Zeitgeist indexing all my files; it makes the search in the dash very slow. I do not want the dash to search recent files, so I installed Activity Log Manager to prevent Zeitgeist's logging. I configured the log manager to blacklist my folders, but even after adding every folder, files keep appearing in the dash under Recent Items. Is there any other software or tweak that will make the dash search only the applications installed on my system and not my recent files?
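
    A hedged workaround if the blacklist keeps being ignored: stop the daemon and clear what it has already recorded, since previously logged items stay in the dash until the database is gone. The paths are the Zeitgeist defaults, and the daemon may be restarted on demand, so the blacklist still has to hold afterwards:

        # stop the logging daemon and wipe its existing history
        zeitgeist-daemon --quit
        rm -f ~/.local/share/zeitgeist/activity.sqlite*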

    Read the article

  • Schedule sending a log file's contents by mail

    - by user3215
    I have the Postfix mail agent installed and I've configured a Gmail relay, so I can send mail from the terminal like this:

        root@statino1:~# mail -s "subject_here" [email protected]
        CC: <hit enter for an empty cc>
        Type the message here
        press Ctrl+D

    I need to send a log file's contents as the mail message and schedule it to run every day. How do I send the log file contents as the message body, and how do I automate the inputs to the mail command so that I can schedule it? Does anybody have any idea?
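
    mail can take the message body from a redirect instead of the interactive prompt, which makes it cron-friendly. A hedged sketch; the address and log path below are placeholders, since the real address is redacted above:

        # one-off: send the file's current contents as the message body
        mail -s "App log $(date +%F)" [email protected] < /var/log/myapp/app.log

        # schedule it: crontab -e, then e.g. every day at 06:30
        # 30 6 * * * mail -s "App log" [email protected] < /var/log/myapp/app.log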

    Read the article

  • How to prevent the system from generating log files

    - by shantanu
    My question may be a little surprising, but I need it. I am using a laptop with a slow processor, and I have found that the HDD has some bad sectors and its response has become slow (although the disk's health is OK according to SMART tools). I cannot change my HDD right now, so I have decided to reduce disk operations. How do I prevent the system from generating log files, or any other files used to keep history? I know log files are very important, but I don't care about that right now. Please help.
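
    Two hedged sketches for that era of Ubuntu: silence the system logger outright, or keep /var/log on a RAM-backed tmpfs so logging never touches the failing disk (history is then lost at reboot, and some services expect their /var/log subdirectories to exist at boot):

        # option 1: stop rsyslog and keep it from starting at boot (Upstart override)
        sudo service rsyslog stop
        echo manual | sudo tee /etc/init/rsyslog.override

        # option 2: mount /var/log in RAM; contents vanish on reboot
        echo 'tmpfs /var/log tmpfs defaults,noatime,size=50m 0 0' | sudo tee -a /etc/fstab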

    Read the article

  • Diff Algorithm

    - by Daniel Magliola
    I've been looking like crazy for an explanation of a diff algorithm that works and is efficient. The closest I got is this link to RFC 3284 (from several Eric Sink blog posts), which describes in perfectly understandable terms the data format in which the diff results are stored. However, it has no mention whatsoever of how a program would reach these results while doing a diff. I'm trying to research this out of personal curiosity, because I'm sure there must be trade-offs when implementing a diff algorithm, which are pretty clear sometimes when you look at diffs and wonder "why did the diff program choose this as a change instead of that?"... Does anyone know where I can find a description of an efficient algorithm that would end up outputting VCDIFF? By the way, if you happen to find a description of the actual algorithm used by SourceGear's DiffMerge, that would be even better. NOTE: longest common subsequence doesn't seem to be the algorithm used by VCDIFF; it looks like they're doing something smarter, given the data format they use. Thanks!
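
    A hedged pointer: classic line-wise tools are generally built on Myers' LCS algorithm ("An O(ND) Difference Algorithm and Its Variations"), whereas RFC 3284/VCDIFF encoders such as xdelta3 work byte-wise, roughly speaking with rolling-hash block matching, and emit copy/add/run instructions, which is why the format doesn't look LCS-shaped. Playing with xdelta3 is one concrete way to see the RFC's format in action:

        # encode a delta in the RFC 3284 container, then dump its copy/add/run instructions
        xdelta3 -e -s old_file new_file delta.vcdiff
        xdelta3 printdelta delta.vcdiff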

    Read the article

  • Listing lines from just one file in DIFF

    - by justintime
    I would like to get (GNU) diff to print only the lines that are different in one file. So given

        ==> diffa.txt <==
        line1
        line2 - in a only
        line3
        line4 changed
        line5

        ==> diffb.txt <==
        line1
        line3
        line4 changed in b
        line5
        line6 in b only

    I would like diff --someoption diffa.txt diffb.txt to produce

        line2 - in a only
        line4 changed

    The following looks as though it should be helpful, but it is a bit cryptic:

        --GTYPE-group-format=GFMT  Similar, but format GTYPE input groups with GFMT.
        --line-format=LFMT         Similar, but format all input lines with LFMT.
        --LTYPE-line-format=LFMT   Similar, but format LTYPE input lines with LFMT.
          LTYPE is `old', `new', or `unchanged'.  GTYPE is LTYPE or `changed'.
          GFMT may contain:
            %<  lines from FILE1
            %>  lines from FILE2
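
    GNU diff's line-format options can produce exactly this (hedged only in that it requires GNU diff): print FILE1's version of every differing line and suppress everything else:

        # print only FILE1's side of each difference; common lines and FILE2-only lines are suppressed
        diff --old-line-format='%L' --new-line-format='' --unchanged-line-format='' diffa.txt diffb.txt
        # with the files above this prints:
        #   line2 - in a only
        #   line4 changed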

    Read the article

  • Help with a bash script using the find and diff commands

    - by su
    Hello, I have a bash script that I need help with:

        #!/bin/bash
        if [ -f "/suid.old" ]
        then
            find / -perm -4000 -o -perm -2000 ls > suid.old
        else
            find / -perm 4000 -o -perm -2000 ls > suid.new
            diff suid.old suid.new > newchanges.list
        fi

    When I run it, it gives me an error saying: diff: suid.old: No such file or directory. My script should say: if suid.old does not exist, then use the find command to create it; otherwise use find to produce suid.new, then diff the two and redirect any changes to newchanges.list. Please help.
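
    A hedged corrected sketch: the error happens because the first run takes the else branch (suid.old does not exist yet, so diff has nothing to read), the stray ls word is treated by find as a starting path, one -perm is missing its dash, and the test checks /suid.old while the output goes to the current directory. Checking for the baseline's absence first gives the intended behaviour:

        #!/bin/bash
        # first run: record the baseline; later runs: record the current state and diff against the baseline
        if [ ! -f suid.old ]; then
            find / \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null > suid.old
        else
            find / \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null > suid.new
            diff suid.old suid.new > newchanges.list
        fi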

    Read the article
