Search Results

Search found 19881 results on 796 pages for 'log analysis'.


  • Security log overflowing with filtering blocks

    - by Jacob
    I have a Windows 7 workstation whose security log is overflowing with the following errors: Audit Failure 3/31/2010 2:00:50 PM Microsoft-Windows-Security-Auditing 5157 Filtering Platform Connection "The Windows Filtering Platform has blocked a connection." Audit Failure 3/31/2010 2:00:50 PM Microsoft-Windows-Security-Auditing 5152 Filtering Platform Packet Drop "The Windows Filtering Platform has blocked a packet." These are not unexpected events; the firewall is expected to drop unsolicited traffic. However, I can't figure out how to tell Windows to stop writing these events to the security log. I've seen this problem before and was able to find an answer with Google, but I couldn't locate one this time. Thanks!
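
    For reference, a commonly cited fix (not confirmed in the original post) is to disable the two Windows Filtering Platform audit subcategories from an elevated command prompt:

        auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:disable /failure:disable
        auditpol /set /subcategory:"Filtering Platform Connection" /success:disable /failure:disable

    This stops events 5152 and 5157 from being written while leaving the firewall's blocking behaviour unchanged.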

    Read the article

  • SQL Server Analysis Services, DNS, AD, Kerberos, Connection Issues

    - by ScaleOvenStove
    Running into a very weird issue while converting servers to Windows 2008/SQL 2008. I have a brand-new server, SERVER_A, set up with Win2k8/SQL2k8 - it works. I have a server SERVER_B running Windows 2003/SQL 2005, and I want to migrate from SERVER_B to SERVER_A. I have all the databases, cubes, etc. set up on SERVER_A and it is mimicking the existing functionality. Since users connect to SSAS from Excel, their connection strings contain SERVER_B. What I want to do is change DNS on the network so that the name SERVER_B points at the IP of SERVER_A. I have successfully done this with another server, SERVER_C, but I need to do it with SERVER_B. What I found with SERVER_C is that after changing DNS, I had to remove SERVER_C from AD, and then it worked: I could connect to SERVER_C (DB), SERVER_C (SSAS default instance) and SERVER_C (SSAS named instance), and all of it was actually connecting to SERVER_A. I tried to do the same with SERVER_B, with no luck. I changed DNS, removed the computer account from AD, and it wouldn't connect. I found out that there were some SPNs set up in AD, so I removed those and tried again. I could then connect to SERVER_B (DB) and SERVER_B (SSAS named instance), but not SERVER_B (SSAS default instance). I can connect to SERVER_B (SSAS default instance) WITH the port number, but I need to be able to connect without it. I am at a loss as to why I can't connect to the default instance without a port number. Not sure if it is SPNs in AD, another AD issue, or something else; I'm pretty sure it isn't something on the server (because SERVER_C works!). Any insight or suggestions would be greatly helpful!
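
    For reference, SSAS registers SPNs under the MSOLAPSvc.3 service class, so one plausible check (the account and domain names below are hypothetical) is to list and, if missing, re-create the SPNs for the alias against the account running Analysis Services on SERVER_A:

        setspn -L DOMAIN\ssas_service
        setspn -A MSOLAPSvc.3/SERVER_B DOMAIN\ssas_service
        setspn -A MSOLAPSvc.3/SERVER_B.domain.local DOMAIN\ssas_service

    The default instance listens on TCP 2383; that the connection works with an explicit port but not without one points at name/Kerberos resolution rather than the service itself.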

    Read the article

  • Apache log file problem

    - by Luke
    I've recently set up an Apache 2 web server, and I noticed quite a few lines in the error and access logs that start with the following sequence (but longer). Does anyone know where this comes from? ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@ ....... My setup is an Apache 2 load balancer with mod_balancer enabled and two Apache 2 web servers. All three servers write to the same log files on an NFS share. My first guess is that the NFS share is to blame, since it's the only difference from other setups I've used in the past, but I'm not sure.
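
    One way to test the NFS theory (a sketch, not from the original setup) is to give each server its own log files on the share, so that no two hosts ever append to the same file, e.g. on the first web server:

        ErrorLog /mnt/nfs/logs/web1_error.log
        CustomLog /mnt/nfs/logs/web1_access.log combined

    with web2_*.log on the second server. Runs of NUL bytes are typical of several writers appending to one file over NFS without coordinated locking; if the nulls disappear with per-server files, that was the cause.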

    Read the article

  • sql server log shipping

    - by voam
    I am setting up log shipping between two SQL Server 2008 servers connected over a VPN. As far as I know, as long as the SQL Agent can access the shares on the primary/secondary servers, everything will work. When I set up log shipping on the primary server using SQL Server Management Studio, it asks me to connect to the secondary server before I can change any of the "Secondary Database Settings". But I really don't want to open up a connection to the secondary server. I will be initializing the secondary database from a backup, so as long as the transaction logs get copied, everything should work. How do I work around the GUI not enabling any of the settings for the secondary server until I actually connect to it? Thanks in advance!
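
    For what it's worth, log shipping can also be configured without the wizard via stored procedures, which sidesteps the GUI's insistence on connecting. A rough, untested sketch for the primary side (all names and paths are placeholders; see Books Online for the full parameter lists):

        -- on the primary server
        EXEC master.dbo.sp_add_log_shipping_primary_database
            @database = N'MyDB',
            @backup_directory = N'\\primary\ls_backup',
            @backup_share = N'\\primary\ls_backup',
            @backup_job_name = N'LSBackup_MyDB',
            @backup_retention_period = 4320;

        -- the matching secondary-side procedures,
        -- sp_add_log_shipping_secondary_primary and
        -- sp_add_log_shipping_secondary_database, run on the secondary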

    Read the article

  • Text tagging/analysis tool for Mac

    - by Mark Porter
    I'm a doctoral student doing research in the humanities. As part of my research I have gathered a lot of interview text. To analyse this data I want to be able to easily tag sections of text with keywords (the tags need to be able to overlap, and perhaps be organised hierarchically) and later collate those sections from across multiple files. I need to be able to do this on a Mac. It feels like a simple task, but I can't find any software for it that isn't either horribly clunky or massive overkill costing hundreds of pounds. Is there any good software for doing this, or are there good ways of doing it with other software?

    Read the article

  • Simplest way to shrink transaction log files on a mirrored production database

    - by MGOwen
    What's the simplest way to shrink the transaction log file on a mirrored production database? I have to, as my disk space is running out. I will make a full database backup before I do this, so I don't need to keep anything from the transaction log (right? I take a daily full database backup and will probably never need a point-in-time restore, though I'll keep the option open if I can - that's all the .ldf is really for, correct?). (I hope this isn't an exact duplicate; I read a lot of questions but couldn't find this exact scenario elsewhere.)
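
    For context, a mirrored database must stay in the FULL recovery model, so the usual sequence (names and sizes below are placeholders) is to back up the log, which frees its inactive portion, and then shrink the physical file:

        BACKUP LOG MyDB TO DISK = N'E:\Backups\MyDB_log.trn';
        USE MyDB;
        DBCC SHRINKFILE (MyDB_log, 1024);  -- logical log file name, target size in MB

    The logical file name can be read from sys.database_files. Keeping the log backups also preserves the point-in-time restore option mentioned above.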

    Read the article

  • View rotated log files Mac OS X Server (*.?.gz)

    - by Meltemi
    Trying to look at some of our older log files, I find that they're cryptic "Unix Executable Files". This particular server is an older Mac OS X Server (10.4 - Tiger).

        -rw-r----- 1 root admin  36  1 Jun 15:48 wtmp
        -rw-r--r-- 1 root admin 578 27 May 17:40 wtmp.0.gz
        -rw-r----- 1 root admin  89 26 Apr 13:57 wtmp.1.gz
        -rw-r----- 1 root admin  78 29 Mar 16:43 wtmp.2.gz
        -rw-r----- 1 root admin  69 15 Feb 17:21 wtmp.3.gz
        -rw-r----- 1 root admin 137 16 Jan 13:09 wtmp.4.gz

    I'm using zless to try to view the contents of the .gz files, and what I see is unreadable:

        ... <DF>^R<AF>ttyp1^@^@^@joe54^@^@^@^@^@108.184.63.22^@^@^@^@K<DF>"<B8>ttyp1^@^@^@^@^@ ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@K<DF>%<A1>console^@^@^@^@^@^@^@^@^@ ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@K<E0>1 ~^@^@^@^@^@^@^@shutdown^@^@^@^@^@^@^@^@ ^@^@^@^@^@^@^@^@K<E0>1^L~^@^@^@^@^@^@^@reboot^@^@^@^@^@^@ ...

    The same goes for system.log.0.gz, etc. - anything that's been rolled into compressed .gz files. What am I missing?
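
    wtmp is a binary login record rather than text, so a pager shows raw bytes; the usual trick is to decompress a rotated copy somewhere and point last at it (the temp path below is just an example):

        gunzip -c /var/log/wtmp.0.gz > /tmp/wtmp.0
        last -f /tmp/wtmp.0

    last formats the login/logout/reboot records that appear garbled above.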

    Read the article

  • Apache Custom Log Format

    - by Shishant
    Hello, I am trying to write a reward system wherein users are given reward points if they download complete files, so what should my log format be? After searching a lot, this is what I understand (it's my first time; I haven't done custom logs before). First of all, which file should I edit for custom logs? This is the thing I can't find. I am using an Ubuntu server with the default Apache, PHP 5 and MySQL installations. # I use these commands and they work fine nano /etc/apache2/apache2.conf /etc/init.d/apache2 restart I think this is what I need for my purpose: LogLevel notice LogFormat "%f %u %x %o" rewards CustomLog /var/www/logs/rewards_log rewards Is this correct as it is, or is something missing? And is there a particular location where I need to add it? One more thing: is %o the file size that was sent, and is it possible to log only files from a particular directory, or only files larger than 10 MB? Thank you.
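
    As a sketch of one possible setup (directive values are illustrative, and note that bytes-sent is logged with %O from mod_logio or %b, not %o): limit the custom log to a downloads directory with SetEnvIf and conditional logging:

        SetEnvIf Request_URI "^/downloads/" is_download
        LogFormat "%f %u %>s %O" rewards
        CustomLog /var/www/logs/rewards_log rewards env=is_download

    These directives belong in the server or virtual host configuration (on Ubuntu, a file under /etc/apache2/). Filtering by transfer size (e.g. over 10 MB, to credit only completed downloads) is easier to do afterwards in the script that parses the log.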

    Read the article

  • Search for specific call in asterisk log files

    - by chiborg
    In my Asterisk log file, I have a line like this (truncated): Executing [123@mycontext:1] Set("SIP/myhost-b7111840", "__INCOMINGCLI=4711") Now I want to do the following filtering while looking at the log file with tail -f: Match lines with a specific value for "INCOMINGCLI", storing the call ID (the "SIP/myhost-b7111840" part) Output all subsequent lines that contain the call ID. As a bonus, having a grep-like option like -A would be nice. I could do that easily in various programming languages, but how would I do it with standard UNIX commands like sed or awk? Can it be done with these commands?
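
    A minimal awk sketch along those lines (the CLI value and log path are taken from the example above):

        tail -f /var/log/asterisk/full | awk '
          /__INCOMINGCLI=4711/ && match($0, /SIP\/[^"]+/) {
            id = substr($0, RSTART, RLENGTH)
          }
          id != "" && index($0, id) { print }
        '

    Once a matching line sets id, every subsequent line containing that call ID is printed. match(), RSTART and RLENGTH are POSIX awk, so no GNU extensions are needed; there is no built-in -A equivalent, but the same pattern-state approach can be extended to count trailing lines.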

    Read the article

  • How to see the content-type of a response in Nginx log file

    - by MLister
    Is it possible to see the content-type of a response to a request in Nginx's log file? At the moment, this is what I see for the request in question: 127.0.0.1 - - [23/Nov/2012:10:17:19 -0500] "GET /fonts/cantarell-bold-webfont.eot? HTTP/1.1" 200 22679 "https://www.mysite.com/blah/doc" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.2; Trident/4.0; .NET CLR 1.1.4322; .NET4.0C)" If this is not feasible using Nginx's log, what are the other options? Thanks.
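
    Nginx can log any response header through the $sent_http_* variables, so a custom log_format (defined in the http block; the format name is arbitrary) covers this:

        log_format with_type '$remote_addr - $remote_user [$time_local] "$request" '
                             '$status $body_bytes_sent "$http_referer" '
                             '"$http_user_agent" "$sent_http_content_type"';
        access_log /var/log/nginx/access.log with_type;

    This appends the response's Content-Type to each entry of the otherwise familiar combined-style line.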

    Read the article

  • best tool for searching within unstructured log files [closed]

    - by Alex Holding
    I am supporting a number of bespoke applications at the minute, and searching through their very non-standard logs is a nightmare, so I'm looking for a tool which can do the following:

    - load large text files
    - search through multiple files at once and display all results
    - search with regex
    - be used to view and search unstructured text files

    There are some great-looking log tools available, but they all seem to be focused on structured logs, whereas the logs I deal with most days are just flat text files. I am currently using Notepad++, but that has its own annoyances, so I'm hoping there is a dedicated log analysis tool that I haven't found yet.
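
    For plain flat files, ordinary grep already ticks most of those boxes; a sketch (the path and pattern are illustrative):

        grep -rEn --include='*.log' -A 3 'ERROR|timed out' /var/log/myapp/

    Here -r searches multiple files, -E enables extended regex, -n adds line numbers and -A prints trailing context, though it offers no GUI for browsing the matches.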

    Read the article

  • How to log messages to a log file in a specific path from a bash script

    - by Erik
    How do you log messages to a log file in a specific path from a bash script? A naive implementation would be commands like: echo My message >>/my/custom/path/to/my_script.log But this probably has many disadvantages (no log rotation for example). I could use the 'logger' command, but it does not support logs in custom paths as far as I know and is not easy to configure if you have lots of bash scripts that could use a custom log file. In a scripting language like Ruby all this is quite easy: https://github.com/rudionrails/yell/wiki/101-the-datefile-adapter I could also make my own logger command based on this ruby library and call it from my bash scripts, but I guess there is already a well known solution that provides similar behavior for shell scripts?
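
    A minimal sketch of both halves, assuming a standard Linux box: a small bash logging function, with rotation delegated to logrotate via a drop-in file (the log path is the one from the question):

        LOGFILE=/my/custom/path/to/my_script.log
        log() {
            printf '%s [%s] %s\n' "$(date '+%F %T')" "$1" "$2" >> "$LOGFILE"
        }
        log INFO "My message"

    and in /etc/logrotate.d/my_script:

        /my/custom/path/to/my_script.log {
            weekly
            rotate 4
            compress
            missingok
        }

    The function can be sourced by every script, which keeps the custom path in one place and leaves rotation to the system.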

    Read the article

  • Log Blog

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Logging - A log blog

    In another blog (Missing Fields and Defaults) I spoke about not doing a blog about log files, but then I looked at it again and realized that this is a nice opportunity to show a simple yet powerful tool and also to deal with static variables and functions in C#.

    My log had to satisfy a few simple logging rules:

    To log or not to log? That is the question - Always log! That is the answer.

    Do we share a log? Even when a file is opened with a minimal lock, it does not share well and performance greatly suffers. So sharing a log is not a good idea. Also, when sharing, it is harder to find your particular entries and you have to establish rules about retention. My recommendation - Do Not Share!

    How verbose? Your log can be very verbose - a good thing when testing, very terse - a good thing in day-to-day runs, or somewhere in between. You must be the judge. In my blog, I elect to always report a run with start and end times, and to always report errors. I normally use 5 levels of logging: 4 - write all, 3 - write more, 2 - write some, 1 - write errors and timing, 0 - write none. The code sample below is more general than that. It uses the config file to set the max log level, and each call to the log assigns a level to the call itself. If the level is above the .config highest level, the line will not be written. Programmers decide which log entry belongs to which level, and thus we can set the .config differently for production and testing.

    Where do I keep the log? If your career is important to you, discuss this with the boss and with the system admin. We keep logs on the L: drive of our server and make sure that we have a directory for each app that needs a log. When adding a new app, add a new directory. The default location for the log is also found in the .config file.

    Print one or many? There are two options here:

    1. Print many, open but once - you start the stream and close it only when the program ends. This is what you can do when you run in "batch" mode, like in a console app or an stsadm extension. The advantage is that starting and closing a stream is expensive and time-consuming, and because we use a unique file, keeping it open for a long time does not cause contention problems.

    2. Print one entry at a time, or open many - every time you write a line, you start the stream, write to it and close it. This works for event receivers, feature receivers, and web parts. Here scalability requires us to create objects on the fly and get rid of them as soon as possible.

    A default value for oneOrMany resides in the .config.

    All of the above applies to any Windows or web application, not just SharePoint. So, as usual, here is a routine that does it all, and a few simple functions that call it for a variety of purposes.
    So without further ado, here is app.config:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
            <configSections>
                <sectionGroup name="applicationSettings" type="System.Configuration.ApplicationSettingsGroup, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" >
                    <section name="statics.Properties.Settings" type="System.Configuration.ClientSettingsSection, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
                </sectionGroup>
            </configSections>
            <applicationSettings>
                <statics.Properties.Settings>
                    <setting name="oneOrMany" serializeAs="String">
                        <value>False</value>
                    </setting>
                    <setting name="logURI" serializeAs="String">
                        <value>C:\staticLog.txt</value>
                    </setting>
                    <setting name="highestLevel" serializeAs="String">
                        <value>2</value>
                    </setting>
                </statics.Properties.Settings>
            </applicationSettings>
        </configuration>

    And now the code. In order to persist the variables between calls, and also to be able to persist (or not to persist) the log file itself, I created an EventLog class with static variables and functions. Static functions do not need an instance of the class in order to work. If you ever wondered why our Main function is static, the answer is that something needs to run before instantiation so that other objects may be instantiated; this is what the static Main does. The various logging functions and variables are created as static because they do not need instantiation and, as a fringe benefit, they remain un-destroyed between calls. The Main function here is just used for testing. Note that it does not instantiate anything; it just uses the log functions. This is possible because the functions are static. Also note that the function calls are of the form Class.Function.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;

        namespace statics
        {
            class Program
            {
                static void Main(string[] args)
                {
                    //write a single line
                    EventLog.LogEvents("ha ha", 3, "C:\\hahafile.txt", 4, true, false);
                    //this single line will not be written because the msgLevel is too high
                    EventLog.LogEvents("baba", 3, "C:\\babafile.txt", 2, true, false);
                    //the next 4 lines will be written in succession - no closing
                    EventLog.LogLine("blah blah", 1);
                    EventLog.LogLine("da da", 1);
                    EventLog.LogLine("ma ma", 1);
                    EventLog.LogLine("lah lah", 1);
                    EventLog.CloseLog(); // log will close
                    //now with specific functions
                    EventLog.LogSingleLine("one line", 1);
                    //this is just a test, the log is already closed
                    EventLog.CloseLog();
                }
            }

            public class EventLog
            {
                public static string logURI = Properties.Settings.Default.logURI;
                public static bool isOneLine = Properties.Settings.Default.oneOrMany;
                public static bool isOpen = false;
                public static int highestLevel = Properties.Settings.Default.highestLevel;
                public static StreamWriter sw;

                /// <summary>
                /// The program will "print" the msg into the log
                /// unless msgLevel is > msgLimit.
                /// oneOrMany is true when once - the program will open the log,
                /// print the msg and close the log. False when many - the program
                /// will keep the log open until close = true.
                /// Normally all the arguments will come from the app.config.
                /// Called by the overloads of LogLine below.
                /// </summary>
                /// <param name="msg"></param>
                /// <param name="msgLevel"></param>
                /// <param name="logFileName"></param>
                /// <param name="msgLimit"></param>
                /// <param name="oneOrMany"></param>
                /// <param name="close"></param>
                public static void LogEvents(string msg, int msgLevel, string logFileName, int msgLimit, bool oneOrMany, bool close)
                {
                    //to print or not to print
                    if (msgLevel <= msgLimit)
                    {
                        //open the file, from the argument (logFileName) or from the config (logURI)
                        if (!isOpen)
                        {
                            string logFile = logFileName;
                            if (logFileName == "")
                            {
                                logFile = logURI;
                            }
                            sw = new StreamWriter(logFile, true);
                            sw.WriteLine("Started At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            isOpen = true;
                        }
                        //print
                        sw.WriteLine(msg);
                    }
                    //close when instructed
                    if (close || oneOrMany)
                    {
                        if (isOpen)
                        {
                            sw.WriteLine("Ended At: " + DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss"));
                            sw.Close();
                            isOpen = false;
                        }
                    }
                }

                /// <summary>
                /// The simplest overload, just msg and level.
                /// </summary>
                /// <param name="msg"></param>
                /// <param name="msgLevel"></param>
                public static void LogLine(string msg, int msgLevel)
                {
                    //use the given msg and msgLevel; all others are defaults
                    LogEvents(msg, msgLevel, "", highestLevel, isOneLine, false);
                }

                /// <summary>
                /// One line at a time - open, print, close.
                /// </summary>
                /// <param name="msg"></param>
                /// <param name="msgLevel"></param>
                public static void LogSingleLine(string msg, int msgLevel)
                {
                    LogEvents(msg, msgLevel, "", highestLevel, true, true);
                }

                /// <summary>
                /// Used to close: high level, low limit, once and close are set.
                /// </summary>
                public static void CloseLog()
                {
                    LogEvents("", 15, "", 1, true, true);
                }
            }
        }

    That's all folks!

    Read the article

  • MySQL multiple instances: can you specify a separate general_log/general_log_file option?

    - by gravyface
    Have two working MySQL instances as well as the default instance. I have general logging enabled on the default; this is working fine. On the second instance, I've added: general_log = 1 general_log_file = /path/to/log/file under [mysqld1]. I restarted the instance (using mysqladmin, and confirmed it was not running with mysqld_multi report 1), started it back up, and the only data in the log file is the connect statements from when mysqld_multi report 1 was executed. Are all of instance #1's queries just being logged to the default instance's general log file? The default instance is quite busy and has identical database names, tables, etc., so it's difficult to tell right now.
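
    For comparison, a my.cnf layout that gives each instance its own general log (paths are illustrative) looks like:

        [mysqld1]
        general_log      = 1
        general_log_file = /var/log/mysql/mysqld1-general.log

        [mysqld2]
        general_log      = 1
        general_log_file = /var/log/mysql/mysqld2-general.log

    Connecting to each instance's own port or socket and running SHOW VARIABLES LIKE 'general_log%'; shows which file that instance believes it is writing to, which quickly settles whether instance #1's queries are landing in the default instance's log.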

    Read the article

  • Web log files analyzer

    - by Peter Štibraný
    I already use Google Analytics on my page, but I'd like to get additional info from the log files. I've looked at various packages over the last few days, but nothing has impressed me so far. Some requirements:

    - must work at the log file level (I use Apache combined logs, but can configure Apache to produce other types of logs)
    - can generate static reports (Windows/Linux) or use a GUI (Windows only)
    - should make it easy to add custom user agents and rerun the analysis
    - if it can recognize installation of Eclipse plugins from the log, that would be a big plus
    - understands Google SERP position referrers
    - should not require two days to set up (awstats, I am looking at you)
    - should still be under active development (i.e. analog isn't a good answer)
    - preferably free, or at least not very expensive :-)

    Any good analyzer programs out there?

    Read the article

  • Glassfish log files analysis

    - by Cem
    Can I get some recommendations for good log analysis software for Glassfish log files? Since log formats do not vary dramatically from one application server to another, I imagine there is a common solution for all servers. Thanks

    Read the article

  • Rsyslog is not working properly, it does not log anything

    - by Victor Henriquez
    I'm running a Debian server, and a couple of days ago my rsyslog started to behave very weirdly: the daemon is running but it doesn't seem to do anything. Many people use the system, but I'm the only one with (legal) root access. I'm using the default rsyslogd configuration (if you think it's relevant I'll attach it, but it's the one that comes with the package). After I rotated all the log files, they have remained empty:

        # ls -l /var/log/*.log
        -rw-r--r-- 1 root root 0 Jun 27 00:25 /var/log/alternatives.log
        -rw-r----- 1 root adm  0 Jun 26 13:03 /var/log/auth.log
        -rw-r----- 1 root adm  0 Jun 26 13:03 /var/log/daemon.log
        -rw-r--r-- 1 root root 0 Jun 27 00:25 /var/log/dpkg.log
        -rw-r----- 1 root adm  0 Jun 26 13:03 /var/log/kern.log
        -rw-r----- 1 root adm  0 Jun 26 13:03 /var/log/lpr.log
        -rw-r----- 1 root adm  0 Jun 26 13:03 /var/log/mail.log
        -rw-r----- 1 root adm  0 Jun 26 13:03 /var/log/user.log

    Any attempt to force a log entry has no effect:

        # logger hey
        # ls -l /var/log/messages
        -rw-r----- 1 root adm 0 Jun 26 13:03 /var/log/messages

    lsof shows that rsyslogd does not have any log files open:

        # lsof -p 1855
        COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
        rsyslogd 1855 root cwd DIR 202,0 4096 2 /
        rsyslogd 1855 root rtd DIR 202,0 4096 2 /
        rsyslogd 1855 root txt REG 202,0 342076 21649 /usr/sbin/rsyslogd
        rsyslogd 1855 root mem REG 202,0 38556 32153 /lib/i386-linux-gnu/i686/cmov/libnss_nis-2.13.so
        rsyslogd 1855 root mem REG 202,0 79728 32165 /lib/i386-linux-gnu/i686/cmov/libnsl-2.13.so
        rsyslogd 1855 root mem REG 202,0 26456 32163 /lib/i386-linux-gnu/i686/cmov/libnss_compat-2.13.so
        rsyslogd 1855 root mem REG 202,0 297500 1061058 /usr/lib/rsyslog/imuxsock.so
        rsyslogd 1855 root mem REG 202,0 42628 32170 /lib/i386-linux-gnu/i686/cmov/libnss_files-2.13.so
        rsyslogd 1855 root mem REG 202,0 22784 1061106 /usr/lib/rsyslog/imklog.so
        rsyslogd 1855 root mem REG 202,0 1401000 32169 /lib/i386-linux-gnu/i686/cmov/libc-2.13.so
        rsyslogd 1855 root mem REG 202,0 30684 32175 /lib/i386-linux-gnu/i686/cmov/librt-2.13.so
        rsyslogd 1855 root mem REG 202,0 9844 32157 /lib/i386-linux-gnu/i686/cmov/libdl-2.13.so
        rsyslogd 1855 root mem REG 202,0 117009 32154 /lib/i386-linux-gnu/i686/cmov/libpthread-2.13.so
        rsyslogd 1855 root mem REG 202,0 79980 17746 /usr/lib/libz.so.1.2.3.4
        rsyslogd 1855 root mem REG 202,0 18836 1061094 /usr/lib/rsyslog/lmnet.so
        rsyslogd 1855 root mem REG 202,0 117960 31845 /lib/i386-linux-gnu/ld-2.13.so
        rsyslogd 1855 root 0u unix 0xebe8e800 0t0 640 /dev/log
        rsyslogd 1855 root 3u FIFO 0,5 0t0 2474 /dev/xconsole
        rsyslogd 1855 root 4u unix 0xebe8e400 0t0 645 /var/spool/postfix/dev/log
        rsyslogd 1855 root 5r REG 0,3 0 4026532176 /proc/kmsg

    I was so frustrated that I even reinstalled the rsyslog package:

        # apt-get remove --purge rsyslog
        # apt-get install rsyslog

    I thought someone had hacked the system, so I ran rkhunter, chkrootkit and unhide in an attempt to find hidden processes/ports, and nmap on a remote host to compare with the ports shown by netstat. And I know this doesn't mean anything, but everything looks OK. The system also has an iptables firewall that is very restrictive with incoming/outgoing connections. This is driving me crazy; any idea what is going on here?

    [EDIT - disk space info]

        # df -h
        Filesystem Size Used Avail Use% Mounted on
        rootfs     24G  22G  629M  98% /
        /dev/root  24G  22G  629M  98% /
        devtmpfs   10M  112K 9.9M   2% /dev
        tmpfs      76M  48K  76M    1% /run
        tmpfs      5.0M 0    5.0M   0% /run/lock
        tmpfs      151M 40K  151M   1% /tmp
        tmpfs      151M 0    151M   0% /run/shm
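
    A few standard checks that narrow this down (suggestions, not from the original post):

        rsyslogd -N1          # validate the configuration without starting the daemon
        df -h /var            # a full filesystem silently stops logging
        logger -s "test"      # -s echoes the message to stderr as well

    The df output in the edit shows / at 98%; rsyslog can misbehave in exactly this way when the filesystem fills, so freeing space and restarting the daemon is worth ruling out before suspecting an intrusion.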

    Read the article

  • Problem with squid log files

    - by Gatura
    I am using SARG to get a report on the Squid log files, and I get this result:

        /usr/local/Sarg/bin/sarg -l /usr/local/squid/var/logs/access.log
        SARG: Records in file: 0, reading: 0.00%
        (the line above repeats a few dozen times)
        sort: open failed: +6.5nr: No such file or directory
        SARG: (index) Cannot open file: /Applications/Sarg/reports/index.sort
        SARG: Records in file: 0, reading: 0.00%

    What could be the problem?
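
    Two things worth checking (assumptions based on the output above, not facts from the post): whether the access log actually contains records, and whether this SARG build invokes sort with the obsolete +POS syntax, which newer GNU coreutils reject with exactly the "+6.5nr" error shown:

        wc -l /usr/local/squid/var/logs/access.log
        export _POSIX2_VERSION=199209   # lets GNU sort accept the legacy '+6.5nr' form
        /usr/local/Sarg/bin/sarg -l /usr/local/squid/var/logs/access.log

    Upgrading SARG is the cleaner fix, since recent versions call sort with -k options instead.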

    Read the article

  • Event Log: atapi - the device did not respond within the timeout period - Freeze

    - by rjlopes
    Hi, I have a Windows Server 2003 that stops working randomly (it still displays an image on the monitor but is completely frozen). All I could find in the event log as possible causes were an error from atapi and a warning from msas2k3. The event log entries are:

        Event Type: Error
        Event Source: atapi
        Event Category: None
        Event ID: 9
        Date: 22-07-2009
        Time: 16:13:33
        User: N/A
        Computer: SERVER
        Description: The device, \Device\Ide\IdePort0, did not respond within the timeout period.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 0f 00 10 00 01 00 64 00 ......d.
        0008: 00 00 00 00 09 00 04 c0 .......À
        0010: 01 01 00 50 00 00 00 00 ...P....
        0018: f8 06 20 00 00 00 00 00 ø. .....
        0020: 00 00 00 00 00 00 00 00 ........
        0028: 00 00 00 00 01 00 00 00 ........
        0030: 00 00 00 00 07 00 00 00 ........

        Event Type: Warning
        Event Source: msas2k3
        Event Category: None
        Event ID: 129
        Date: 22-07-2009
        Time: 16:14:23
        User: N/A
        Computer: SERVER
        Description: Reset to device, \Device\RaidPort0, was issued.
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:
        0000: 0f 00 10 00 01 00 68 00 ......h.
        0008: 00 00 00 00 81 00 04 80 ......?
        0010: 04 00 00 00 00 00 00 00 ........
        0018: 00 00 00 00 00 00 00 00 ........
        0020: 00 00 00 00 00 00 00 00 ........
        0028: 00 00 00 00 00 00 00 00 ........
        0030: 01 00 00 00 81 00 04 80 ......?

    Any hints?

    Read the article

  • SQL log shipping for reporting

    - by Patrick J Collins
    I would like to create a read-only copy of my SQL Server 2008 database on a secondary server for reporting and analysis. I've been testing log shipping, configured to run every 5 minutes or so. Alas, there appears to be a stumbling block: exclusive access is required on the target database during the restore, which in turn requires killing all active connections. This is far from ideal, especially if a user is in the middle of running a report. Any better suggestions? Edit: I'm doing this on the Express edition.
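
    One scriptable alternative (log shipping proper isn't available on Express, which also lacks SQL Server Agent to schedule the jobs) is to restore log backups with STANDBY, which keeps the copy readable between restores; a sketch with placeholder paths:

        RESTORE LOG ReportDB
            FROM DISK = N'\\primary\logs\ReportDB_1015.trn'
            WITH STANDBY = N'E:\Standby\ReportDB_undo.dat';

    Connections still have to be dropped at the moment of each restore, so batching restores into quiet hours, rather than every 5 minutes, limits the disruption to running reports.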

    Read the article

  • Log shipping on select tables.

    - by Scott Chamberlain
    I know I am most likely using incorrect terminology, so please correct me if I use the wrong terms, so I can search better. We have a very large database at a client's site, and we would like up-to-date copies of some of its tables sent across the internet to our servers at our office. We would like to copy only a few of the tables, because the bandwidth required to log-ship the entire database (our current solution) is too high. Also, replication directly to our servers is out of the question, as our servers are not accessible from the internet and management does not want to use replication (more on that later). One possible idea we had is to use some form of replication to copy the tables we need into a second database on the same server, and then log-ship that smaller database. However, management is concerned because the clients have broken replication on us in the past (though that was between two servers on their internal network), and would like to stay away from it if possible. Any recommendations would be greatly appreciated. If some form of replication is the only solution, I am not against replication; I just need compelling arguments to convince management. This is to be set up on multiple sites running either SQL 2005 or SQL 2008; we will have both versions on our end to restore the data to, so that is not an issue. Thank you.

    Read the article

  • Move SQL Server transaction log to another disk

    - by Jim Lahman
    When restoring a database backup, SQL Server by default places the database files in the master database file directory. In this example, that location is L:\MSSQL10.CHTL\MSSQL\DATA, as shown by issuing sp_helpfile. Hence, the restored files for the database CHTL_L2_DB are in the same directory.

    Per SQL Server best practices, the log file should be on its own disk drive so that the database and log file can operate sequentially and perform optimally. The steps to move the log file are as follows:

    1. Record the location of the database files and the transaction log files
    2. Note the future destination of the transaction log file
    3. Get exclusive access to the database
    4. Detach from the database
    5. Move the log file to the new location
    6. Attach to the database
    7. Verify the new location of the transaction log

    Record the location of the database files. To view the current location of the database files, use the system stored procedure sp_helpfile:

        use chtl_l2_db
        go

        sp_helpfile
        go

    Note the future destination of the transaction log file. The transaction log file will be moved to K:\MSSQLLog.

    Get exclusive access to the database. To get exclusive access, alter the database access to single_user. If users are still connected to the database, remove them by using the "with rollback immediate" option. Note: if you had a pane connected to the database when it was placed into single_user mode, you will be presented with a reconnection dialog box.

        alter database chtl_l2_db
        set single_user with rollback immediate
        go

    Detach from the database. Now detach from the database so that we can use Windows Explorer to move the transaction log file:

        use master
        go

        sp_detach_db 'chtl_l2_db'
        go

    After copying the transaction log file to its new location, re-attach to the database and verify the new location of the transaction log with sp_helpfile:

        use master
        go

        sp_attach_db 'chtl_l2_db',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB.MDF',
        'K:\MSSQLLog\CHTL_L2_DB_4.LDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_1.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_2.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_3.NDF'
        GO

    Read the article

  • Ubuntu Dependency Problem in Activity log Manager

    - by Incredible
    incredible@incredible-Inspiron-N5010:~$ sudo apt-get -f install
    [sudo] password for incredible:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      activity-log-manager
    The following packages will be upgraded:
      activity-log-manager
    1 upgraded, 0 newly installed, 0 to remove and 287 not upgraded.
    1 not fully installed or removed.
    Need to get 0 B/60.3 kB of archives.
    After this operation, 29.7 kB disk space will be freed.
    Do you want to continue [Y/n]? y
    dpkg: dependency problems prevent configuration of activity-log-manager:
     activity-log-manager depends on activity-log-manager-common (= 0.9.4-0ubuntu3); however:
      Version of activity-log-manager-common on system is 0.9.4-0ubuntu3.1.
     activity-log-manager-control-center (0.9.4-0ubuntu3.1) breaks activity-log-manager (<< 0.9.4-0ubuntu3.1) and is installed.
      Version of activity-log-manager to be configured is 0.9.4-0ubuntu3.
    dpkg: error processing activity-log-manager (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
     activity-log-manager
    E: Sub-process /usr/bin/dpkg returned an error code (1)
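
    The transcript shows activity-log-manager-common already upgraded to 0.9.4-0ubuntu3.1 while activity-log-manager is stuck at 0.9.4-0ubuntu3, so bringing both packages to the same version is the usual way out (a suggestion inferred from the output above, not a verified fix):

        sudo apt-get update
        sudo apt-get install activity-log-manager activity-log-manager-common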

    Read the article

  • Binary Log Format in MySQL

    - by amritansu
    The reference manual for MySQL 5.6 states that "Some changes, however, still use the statement-based format. Examples include all DDL (data definition language) statements such as CREATE TABLE, ALTER TABLE, or DROP TABLE." Does this statement mean that even if we have the ROW format for binary logs, all DDL will be logged in the binary log as statement-based? How does this affect replication? Kindly help me understand this.
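
    One way to observe this directly (the statements are standard MySQL; the log file name is illustrative): run a DDL statement on a server with binlog_format=ROW and inspect the log, where the CREATE/ALTER/DROP appears as an ordinary Query event rather than as row events:

        SHOW VARIABLES LIKE 'binlog_format';
        SHOW BINLOG EVENTS IN 'mysql-bin.000001';

    For replication this means the DDL statement text itself is replayed on the replica, so it must be deterministic there, while row-format data changes are applied as encoded row images.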

    Read the article
