Search Results

Search found 17971 results on 719 pages for 'log analyzer'.


  • Wanted red color for error message text in the log file

    - by swati
    Hello everyone, I have a question, but I'm not sure whether what I want is possible. I am using the Apache logger for my logging; it creates a log file and works fine with no issues. My question is: when I open the log file I see different messages, with levels like INFO, DEBUG, ERROR, etc., but I would like the error messages to appear in red text in my log file. Is that possible? That way, if someone opens my log file and something is in red, they can immediately tell it is an error message. (A minimal viewer sketch follows below.) I would really appreciate it if someone could respond. Thanks, Swati

    Read the article
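
    A plain-text .log file cannot store colour itself, so one common approach is to colourize at viewing time instead. Below is a minimal, hypothetical Java sketch (the file name and the "ERROR" keyword are assumptions for illustration) that prints ERROR lines in red using ANSI escape codes when the log is viewed in a terminal; if the file itself must carry the colour, writing the log as HTML rather than plain text is the usual alternative.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        // Minimal sketch: read an existing plain-text log and echo it to the
        // terminal, wrapping ERROR lines in ANSI red so they stand out.
        public class RedErrorViewer {
            private static final String RED = "\u001B[31m";
            private static final String RESET = "\u001B[0m";

            public static void main(String[] args) throws IOException {
                try (BufferedReader in = new BufferedReader(new FileReader("application.log"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        if (line.contains("ERROR")) {
                            System.out.println(RED + line + RESET);
                        } else {
                            System.out.println(line);
                        }
                    }
                }
            }
        }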

  • How can I use 'log' inside a src/groovy/ class

    - by firnnauriel
    I'm encountering this error:

        groovy.lang.MissingPropertyException: No such property: log for class: org.utils.MyClass

    Here is the content of the class:

        package org.utils

        class MyClass {
            int organizationCount = 0

            public int getOrganizationCount() {
                log.debug "There are ${organizationCount} organization(s) found."
                return organizationCount
            }
        }

    Do I need to add an import statement? What do I need to add? Note that the class is located in src/groovy/org/utils. I know that the 'log' variable is accessible in controllers, services, etc., but I'm not sure about classes under src. Thanks. (A sketch of one workaround follows below.)

    Read the article
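
    For context, classes under src/groovy are not Grails artefacts, so the dynamically injected 'log' property is not added to them; declaring a logger explicitly is the usual workaround. A minimal sketch, written here in plain Java and assuming Apache Commons Logging is on the classpath (the same two logger lines also compile in a Groovy class):

        import org.apache.commons.logging.Log;
        import org.apache.commons.logging.LogFactory;

        public class MyClass {
            // Explicit logger, since no 'log' property is injected outside grails-app
            private static final Log log = LogFactory.getLog(MyClass.class);

            private int organizationCount = 0;

            public int getOrganizationCount() {
                log.debug("There are " + organizationCount + " organization(s) found.");
                return organizationCount;
            }
        }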

  • Configuring Hadoop logging to avoid too many log files

    - by Eric Wendelin
    I'm having a problem with Hadoop producing too many log files in $HADOOP_LOG_DIR/userlogs (the Ext3 filesystem allows only 32000 subdirectories), which looks like the same problem as in this question: http://stackoverflow.com/questions/2091287/error-in-hadoop-mapreduce My question is: does anyone know how to configure Hadoop to roll the log directory, or otherwise prevent this? I'm trying to avoid just setting the "mapred.userlog.retain.hours" and/or "mapred.userlog.limit.kb" properties, because I want to actually keep the log files. I was also hoping to configure this in log4j.properties, but looking at the Hadoop 0.20.2 source, it writes directly to logfiles instead of actually using log4j. Perhaps I don't fully understand how it uses log4j. Any suggestions or clarifications would be greatly appreciated.

    Read the article

  • Linq DataContext.Log - logging SQL command with parameters

    - by dzajdol
    Hi. I am using LINQ's DataContext.Log and I want to save the SQL command together with its parameter values. How can I do this? Currently the log contains:

        SELECT [t0].[Id_User], [t0].[FirstName], [t0].[LastName], [t0].[UserName], [t0].[Password], [t0].[District_Id], [t0].[Active], [t0].[MobileDevice_Id], [t0].[IsMobile], [t0].[IsWWW], [t0].[IsWholesaler], [t0].[Acc_Admin], [t0].[Warehouse_Id], [t0].[PIN], [t0].[ValidFrom], [t0].[ValidTo], [t0].[IsExternal], [t0].[UserType], [t0].[DefaultDepartment_Id], [t0].[Code], [t0].[RowsOnPage], [t0].[ClientGroup_Id], [t0].[ClientGroup2_Id], [t0].[ServerHash], [t0].[CanOrderInPacks], [t0].[Email], [t0].[IsAdmin], [t0].[HasAccessToAllInferiorsData], [t0].[IsSupplier], [t0].[Position], [t0].[syncstamp] AS [Syncstamp], [t0].[Source], [t0].[Deleted], [t0].[DefaultClient_Id]
        FROM [dbo].[Users] AS [t0]
        WHERE ([t0].[UserName] = @p0) AND ([t0].[Deleted] = @p1)

    but I would like the values of @p0 and @p1 written to the log as well. Regards

    Read the article

  • What about the Sql transaction log

    - by Michel
    Hi, I always thought that the SQL transaction log keeps track of all the transactions performed in the database, so it can help recover the database file after an unexpected power failure or something like that. So I assumed that, in normal usage, once the data is committed and written to disk, the log is cleared, because all the data is safely in the .mdf file. Seeing the .ldf file grow, and reading up on it, I understand that this is not the case: it will keep growing until you shrink the log, and only at that point are the committed transactions cleared and the log file shrunk. I found some stored procedures that should do this, but I also came across the claim that you first have to back up the database. That last step doesn't make sense to me, so can anyone tell me whether it is correct and, if so, why?

    Read the article

  • Django design question: extending User to make users that can't log in

    - by jobrahms
    The site I'm working on involves teachers creating student objects. The teacher can choose to make it possible for a student to log into the site (to check calendars, etc) OR the teacher can choose to use the student object only for record keeping and not allow the student to log in. In the student creation form, if the teacher supplies a username and a password, it should create an object of the first kind - one that can log in, i.e. a regular User object. If the teacher does not supply a username/password, it should create the second type. The other requirement is that the teacher should be able to go in later and change a non-logging-in student to the other kind. What's the best way to design for this scenario? Subclass User and make username and password not required? What else would this affect?

    Read the article

  • Batch file writing to log then ending process

    - by Andrew Service
    I have a batch file that calls a process, and in that process I have:

        IF %ERRORLEVEL% NEQ 0 EXIT /B %ERRORLEVEL%

    Now I want to improve this a bit and write a meaningful message to an output log when the process fails. I also do not want the main batch file to continue, because the next processes depend on data from the previous calls. I wonder if this would be correct, but I'm not sure:

        CALL Process 1
        IF %ERRORLEVEL% NEQ 0 GOTO ErrorInfirstProcess /B %ERRORLEVEL%
        :ErrorInfirstProcess
        ECHO Process 1 Failed on %Date% at %Time%. >>"C:\Log.txt"

        CALL Process 2
        IF %ERRORLEVEL% NEQ 0 GOTO ErrorInSecondProcess /B %ERRORLEVEL%
        :ErrorInSecondProcess
        ECHO Process 2 Failed on %Date% at %Time%. >>"C:\Log.txt"

    I also want to know whether I still need the /B, or whether I need to put an EXIT command after the ECHO. Thanks, A

    Read the article

  • Embed Git Commit Log in Rails App?

    - by Andrew
    So, I have a 'development blog' in a Rails app I'm working on right now. I'm using Git for version control and deployment (although right now I'm the only person working on it). When I commit changes in Git I write a pretty decent log message about what I've done. I'd love to have the Git commit log automatically posted to the development blog -- or otherwise made available for others to read within the deployed site. Is there an automated way to pull the Git commit log into a view in a Rails app?

    Read the article

  • O(N log N) Complexity - Similar to linear?

    - by gav
    Hey all, I think I'm going to get buried for asking something so trivial, but I'm a little confused about something. I have implemented quicksort in Java and in C and was doing some basic comparisons. The graph came out as two straight lines, with the C version 4ms faster than the Java counterpart over 100,000 random integers. The code for my tests can be found here: android-benchmarks. I wasn't sure what an (n log n) line would look like, but I didn't think it would be straight. I just wanted to check that this is the expected result and that I shouldn't try to find an error in my code. I put the formula into Excel and, for base 10, it seems to be a straight line with a kink at the start. Is this because the difference between log(n) and log(n+1) increases linearly? (A small numeric check follows below.) Thanks, Gav

    Read the article
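
    As a rough numeric check (independent of the benchmark above), the sketch below prints how n*log2(n) grows as n doubles. The growth factor stays only slightly above 2, so over a modest range of n the curve is very hard to tell apart from a straight line by eye, which matches the graph described in the question.

        // Prints n, n*log2(n) and the growth factor each time n doubles.
        public class NLogNGrowth {
            public static void main(String[] args) {
                long previous = 0;
                for (int n = 100_000; n <= 1_600_000; n *= 2) {
                    long cost = Math.round(n * (Math.log(n) / Math.log(2)));
                    String growth = previous == 0 ? "-" : String.format("%.2f", (double) cost / previous);
                    System.out.printf("n=%,10d  n*log2(n)=%,15d  growth vs previous n: %s%n", n, cost, growth);
                    previous = cost;
                }
            }
        }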

  • How do I write a Java text file viewer for big log files

    - by Hannes de Jager
    I am working on a software product with an integrated log file viewer. The problem is that it is slow and unstable for really large files, because it reads the whole file into memory when you view a log file. I want to write a new log file viewer that addresses this problem. What are the best practices for writing viewers for large text files? How do editors like Notepad++ and Vim accomplish this? I was thinking of using a buffered, bi-directional text stream reader together with Java's TableModel. Am I thinking along the right lines, and are such stream implementations available for Java? (A minimal indexing sketch follows below.)

    Read the article
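
    One widely used pattern for this (a sketch only, not the product's existing viewer) is to scan the file once, record the byte offset where each line starts, and then read individual lines on demand with random access; a Swing TableModel's getValueAt() can then fetch rows lazily instead of holding the whole file in memory. Class and method names below are illustrative.

        import java.io.BufferedInputStream;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.util.ArrayList;
        import java.util.List;

        // Sketch: index line-start offsets once, then seek to a line when it is asked for.
        public class IndexedLogFile {
            private final RandomAccessFile raf;
            private final List<Long> offsets = new ArrayList<>();

            public IndexedLogFile(String path) throws IOException {
                // One buffered pass over the file to record where each line begins.
                try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(path))) {
                    long pos = 0;
                    offsets.add(0L);
                    int b;
                    while ((b = in.read()) != -1) {
                        pos++;
                        if (b == '\n') {
                            offsets.add(pos);
                        }
                    }
                }
                raf = new RandomAccessFile(path, "r");
            }

            public int lineCount() {
                return offsets.size();
            }

            // Random access to a single line; suitable for TableModel.getValueAt(row, col).
            public String line(int index) throws IOException {
                raf.seek(offsets.get(index));
                String line = raf.readLine(); // note: readLine() decodes bytes as ISO-8859-1
                return line == null ? "" : line;
            }
        }

    For very large files the offsets would be better kept in a primitive long array (or indexed sparsely, say every N lines), and a real viewer would want charset-aware decoding, but the lazy loading is the part that keeps memory use flat.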

  • Microsoft Enterprise Logging Application Block - Reading Log File

    - by Or A
    Hi, I'm using the MS Logging Application Block to log my application's events into a file called app-trace.log, located in the c:\temp folder. I'm trying to find the best way to read this file at runtime and display it when the user asks for it. I have two issues: 1) It seems this kind of feature is not supported by the framework, so I have to write the reader myself. Am I missing something here? Is there a better way to get this data (without buffering it in memory or saving it to another file)? 2) If implementing the reader myself is the only alternative left, when I try to do

        System.IO.FileStream fs = new System.IO.FileStream(@"c:\temp\app-trace.log", FileMode.Open, FileAccess.Read);

    I get a "file being used by another process" error; the file is probably locked by the application block. Is there any way to access and read it anyway? Thanks

    Read the article

  • A simple log file format

    - by hgulyan
    Hi, I'm not sure whether this has been asked before, but I couldn't find anything like it. My program uses a simple .txt file for logging: it just creates/opens a file and appends lines. After some time I started logging quite a lot of activity, so the file became too large and hardly readable. I know this isn't the right way to do it, but I simply need to have a readable file. So I thought maybe there is a simple file format for log files and a tool to view it, or would you have any other suggestions on this question? (A sketch of one simple format follows below.) Thanks in advance for your help.

    Read the article
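
    For comparison, one very simple convention is one record per line with a fixed field order, for example timestamp, level and message separated by tabs: it stays readable in any text editor and is easy to filter with standard tools or load into a spreadsheet or log viewer. A hypothetical Java sketch (class name and field choices are assumptions, not taken from the question):

        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;
        import java.time.LocalDateTime;

        // Sketch: append "timestamp<TAB>level<TAB>message" records, one per line.
        public class SimpleLogWriter implements AutoCloseable {
            private final PrintWriter out;

            public SimpleLogWriter(String path) throws IOException {
                out = new PrintWriter(new FileWriter(path, true), true); // append mode, auto-flush
            }

            public void log(String level, String message) {
                out.printf("%s\t%s\t%s%n", LocalDateTime.now(), level, message);
            }

            @Override
            public void close() {
                out.close();
            }
        }

    Rotating to a fresh file once the current one grows past a size threshold is what usually keeps any single file readable.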

  • Exim log and send all mail for a given domain through another server

    - by Josh
    I administer a handful of shared web hosting servers. Recently, Yahoo has been deprioritizing/greylisting all email sent from these servers. I am getting the dreaded "421 4.7.0 [TS02] Messages from my.ip.address temporarily deferred" message from Yahoo, and their postmaster has been unresponsive. I am unable to find any way to set up a feedback loop like AOL's for my IP address -- I did find a way to set up a feedback loop for a given domain, but we host hundreds of domains and don't have the time to set up that many feedback loops. So what I'd like to do is twofold: 1) configure Exim to send all email destined for an @yahoo.com address to a relay, a new server with an IP that Yahoo is not blocking; and 2) configure Exim (or maybe the relay) to log all emails sent to @yahoo.com, so I can review them and, in case one of my users is violating the ToS and sending spam to Yahoo users, take the appropriate action. How can I accomplish these two things? Or does anyone have other advice on how to get mail flowing through to Yahoo and ensure that any email generating complaints is brought to my attention? (For what it's worth, these servers are not listed on any major blacklists.)

    Read the article

  • Logfiles go blank after logrotate rotates them.

    - by Hilt86
    I have an Ubuntu 8.04 LTS server that runs OpenVPN. The OpenVPN server writes to a standard logfile under /var/log, and until a month ago logrotate would automatically rotate and compress the files. The files are still being rotated, but the new logfile (ovpn.log) stays empty. Restarting the OpenVPN daemon fixes the issue (i.e. OpenVPN writes status events to the file again), but after about 10 days the file is rotated again and OpenVPN can no longer write to it. This is also strange because logrotate is set to rotate only every 6 months. OpenVPN runs as nobody and the logfiles are owned by root and admin, which is strange because if permissions were the cause it should either work at all times or never work at all -- unless OpenVPN runs as root temporarily and then drops down to nobody after initializing?

    Read the article

  • Syslog - capturing event logs from Win2k boxes

    - by molecule
    Hi all, I asked this question on SuperUser without much luck, so I am posting it here to see if anyone can assist. We have a central syslog server and we want it to capture event log events from Windows hosts. We are specifically interested in logging service start/stop events. We installed "Eventlog to Syslog" on these Windows hosts and everything works well with XP hosts (the events come from the Service Control Manager). However, we are having issues with Win2k hosts: for some reason, service start/stop events do not get logged in the Event Log on them. I asked a friend at another company to test on a Win2k host, and he does get start/stop events. I have searched around for local audit policies I might need to enable, but without much luck. Does anyone have any ideas? Thanks in advance.

    Read the article

  • How to Protect Sensitive (HIPAA) SQL Server Standard Data and Log Files

    - by Quesi
    I am dealing with electronic personal health information (ePHI or PHI), and HIPAA regulations require that only authorized users can access ePHI. Column-level encryption may be of value for some of the data, but I need the ability to do LIKE searches on some of the PHI fields, such as name. Transparent Data Encryption (TDE) is a feature of SQL Server 2008 for encrypting database and log files. As I understand it, this prevents someone who gains access to the MDF, LDF, or backup files from being able to do anything with them, because they are encrypted. TDE is available only in the Enterprise and Developer editions of SQL Server, and Enterprise is cost-prohibitive for my particular scenario. How can I get similar protection on SQL Server Standard? Is there a way to encrypt the database and backup files (is there a third-party tool)? Or, just as good, is there a way to prevent the files from being used if the disk were attached to another machine (Linux or Windows)? Administrator access to the files from the same machine is fine; I just want to prevent any issues if the disk were removed and hooked up to another machine. What are some of the solutions for this that are out there?

    Read the article

  • mysql startup, shutdown and logging on osx

    - by Joelio
    Hi, I am trying to troubleshoot some MySQL problems (I have a table I can't seem to delete or drop; it hangs forever). I'm on OS X 10.5.8 and I don't remember how, or even if, I installed MySQL myself. Here is what I know: it automatically starts on boot, and the process looks like this:

        /usr/local/mysql/libexec/mysqld --basedir=/usr/local/mysql --datadir=/usr/local/mysql/var --pid-file=/usr/local/mysql/var/Joels-New-Pro.local.pid
        _mysql 96 0.0 0.0 75884 684 ?? Ss Sat06PM 0:00.02 /bin/sh /usr/local/mysql/bin/mysqld_safe

    When I run /usr/local/mysql/libexec/mysqld --verbose --help it says:

        /usr/local/mysql/libexec/mysqld Ver 5.0.45 for apple-darwin9.1.0 on i686 (Source distribution)

    and it seems to use my.cnf from /etc/my.cnf. Now here are my questions. I don't see anything in the StartupItems that remotely looks like MySQL:

        ls /Library/StartupItems/
        BRESINKx86Monitoring ChmodBPF HP IO HP Trap Monitor Parallels ParallelsTransporter

    1.) So how does it start up automatically? 2.) How do I start and stop this type of installation? Also, looking at the config, the log options have no values:

        /usr/local/mysql/libexec/mysqld --verbose --help | grep '^log'
        log                                  (No default value)
        log-bin                              (No default value)
        log-bin-index                        (No default value)
        log-bin-trust-function-creators      FALSE
        log-bin-trust-routine-creators       FALSE
        log-error
        log-isam                             myisam.log
        log-queries-not-using-indexes        FALSE
        log-short-format                     FALSE
        log-slave-updates                    FALSE
        log-slow-admin-statements            FALSE
        log-slow-queries                     (No default value)
        log-tc                               tc.log
        log-tc-size                          24576
        log-update                           (No default value)
        log-warnings                         1

    3.) Does that mean there is no logging enabled in my setup? Thanks in advance! Joel

    Read the article

  • Expected IOPS for log writing on PS6000X SAN?

    - by dssz
    The customer is experiencing poor Sybase ASE 15 performance on a PS6000X SAN with 16 x 450GB 10K drives in RAID-50. The server is a Dell R710 running Windows Server 2003 R2 64-bit on ESX 4.0.0 (build 256968). I've used sqlio to benchmark the sequential write performance of 4KB blocks on the drive:

        sqlio -kW -t1 -s600 -dE -o1 -fsequential -b4 -BH -LS sqliotestfile.dat

    The result is 1900 IOPS. However, when Sybase is running a sustained workload of small inserts, SAN HQ shows a consistent 590 IOPS (and 100% 4K write activity). It also shows that write latency increases from under 1ms to 1.2ms. Monitoring and tests in Sybase demonstrate that the performance problem is IO related; in particular, there is a lot of wait time writing to the log. The SAN indicates that write caching is enabled. What IOPS should the SAN be capable of for 4K sequential write activity? Also, with write caching enabled, shouldn't the controller be batching up the 4K writes into something more efficient? Any tips on Sybase on ESX would also be appreciated.

    Read the article

  • Cannot read/access Apache2 access logs

    - by webworm
    I have been asked to take a look at some access logs for an Apache2 web server running on Ubuntu. I have been told by the administrator of the machine that my login has "admin" access, yet I cannot seem to copy the access logs from Apache2 to my local machine via FTP for analysis. I figure one of two things is happening: either I don't really have full admin access, or some other process (perhaps Apache2) has control of the log files and won't let me copy them. How can I tell whether I truly have admin access? What type of access do I need to request -- root access, something else? Should I be able to copy these log files with admin access?

    Read the article

  • Network Traffic Log

    - by Chris Becke
    Background: on my "home" network I have a Linksys WRT54GL router providing my internet access as well as a wireless AP. Connected to it I have:

        * 2 Windows PCs (wired)
        * At least one laptop (wired)
        * Some 802.11-enabled handheld consoles (PSPs)
        * A Nintendo Wii
        * Some Windows XP PCs used by the people in the granny flat

    Where I live (South Africa), 1GB worth of monthly cap is, while not expensive, costly enough that I'd like to be sure that all the bandwidth used by devices on my network is ... well ... legitimate, and not the result of neighbors parasiting my wireless, malware, or just "liberal" download policies in my software. I got the router on the understanding that there were custom firmwares (DD-WRT and Tomato) that allow bandwidth tracking, but there doesn't seem to be any facility to get a log of traffic that can be examined to see (a) which local devices were the biggest consumers of bandwidth and (b) what they were connected to. What tools are there for logging traffic such that, when it gets to that OMG moment in the month when all my bandwidth is gone, I have a chance of finding out what the hell used it all up (and hopefully attempting some corrective action)?

    Read the article

  • Event Viewer shows service name as a truncated 8 character name

    - by Retrocoder
    I have written a service which logs to the Windows Event Log when it has any problems. This works fine, and the service name is shown correctly in the Source column of the Event Viewer. The problem I am seeing is when my service hits a major problem, such as the networking layer dying. When this happens the event log shows errors about my service, but the service name appears as a truncated 8-character name, which looks to be that of the executable rather than the service name. Is it normal behaviour for a truncated name to be shown?

    Read the article

  • Can't log in after restoring from Time Machine

    - by Jay Conrod
    My friend uses a MacBook Pro with Snow Leopard 10.6.2. She uses both FileVault and Time Machine to preserve her data. Recently, she suffered a hard disk failure. After restoring from Time Machine using the Snow Leopard install disk, she gets the following error when logging in: "You are unable to log in to the FileVault user account at this time. Logging into the account failed because an error occurred." When examining the file system through Terminal, I noticed her home directory is not present: there is no /Users/username directory, nor the FileVault .sparsebundle file that is supposed to be there. When using Time Machine.app on /Users, it appears as if her home directory was never there. Additionally, I did a search on the backup disk with the following command:

        sudo find /Volumes/backup -name '*.sparsebundle'

    No results. She told me that after working with some large data files, Time Machine would come on and it would sound like it was transferring a lot of data to the hard disk. Time Machine must have been doing something, right? How can we recover her files? Are they still there?

    Read the article

  • OSX Server 10.5 - Cannot log into Workgroup Manager - diradmin password is correct

    - by Mister IT Guru
    I've got a setup where I am trying to rescue a broken AD. We can no longer authenticate in Workgroup Manager; passwords are rejected all the time, even though they are correct. I can connect using Workgroup Manager on another server and I get the user list as expected, but when I click the padlock to make changes, the credentials are rejected again. The problem is, I know the password is correct -- I just used it to connect to the server in the first place. I can log into the server using the local admin, and services such as AFP, VPN and SMB continue to serve users. I have about 300 or so users on this server, and I would very much like to avoid a rebuild. As a lot of configuration has been done without my knowledge (it's a client machine), I'd like to attempt to fix it, then build another server, migrate Open Directory off this broken machine, and decommission it "gently". Ultimately this would mean no disruption of services. What I'd like are some tips on how to fix the authentication problem so I can make changes in Workgroup Manager, and on Open Directory maintenance in general. Thanks

    Read the article

  • Anyone else being hit by traffic on TCP port 11370

    - by Jakub
    I've been watching my logs (Ubuntu 9.10 server) and, I don't know about the rest of you, but I am getting a ton of traffic from sources in Russia, Romania, etc. on port 11370 (my iptables rules log and drop it, but I was just curious). Some googling turned up this: http://www.keysigning.org/sks/ which seems to use ports 11370 and 11371. Could that be the service they are scanning for (I don't run it)? ISC shows this: https://isc.incidents.org/port.html?port=11370 Just curious what you guys think and whether anyone has seen this before. If need be I can post my log here, but it's just a log of dropped TCP port 11370 traffic from various IPs. I thought it was strange since that's the ONLY port I seem to be hit on repeatedly (from the logs). I'm running on a Linode (VPS), if that matters to anyone.

    Read the article

  • SCCM 2012 R2 - OSD Task Sequence failure on physical computers

    - by Svanste
    I'm trying to deploy Windows 7 with SCCM 2012 R2 to physical desktops and laptops, but the task sequence keeps failing no matter what I try. When I try it on a VM it works fine; when I try it on a physical computer it fails. So I think it has something to do with drivers, but I have already tried both the "auto apply drivers" + WMI query for model method and the "apply driver package" + WMI query for model method. In the link below I added a zip file containing two other zip files: one is a captured log from a failed OSD on a desktop, the other is an export of my task sequence: Download zip-file with log and TS. If anyone could resolve the issue, or share their own task sequence for such a deployment (pure SCCM 2012 R2, no MDT), that would be great.

    Read the article
