Search Results

Search found 19499 results on 780 pages for 'transaction log'.

Page 6/780 | < Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >

  • How to completely disable apache access log? [closed]

    - by Miljenko Barbir
    I'm running a WAMP server on Windows Server 2003 with Apache 2.2, and I would like to completely disable writing to the access log. It would be neat if I could do the following, but I'm on Windows:

      CustomLog "|/dev/null" common

    All I get in the error log is:

      piped log program '/dev/null' failed unexpectedly

    although I kinda expected this... Is there a Windows alternative to this, or any other way to just disable writing the access log?
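
    One approach that avoids piped logging entirely: Apache only writes an access log when a CustomLog (or TransferLog) directive is present, so commenting the directive out disables the log completely. A minimal httpd.conf sketch (the exact path in a WAMP config will differ):

      # httpd.conf - Apache writes an access log only when told to;
      # with no CustomLog/TransferLog directive, no access log is written.
      #CustomLog "logs/access.log" common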

    Read the article

  • Odd log entries when starting up PostgreSQL

    - by Shadow
    When restarting pgSQL, I get the following log entries:

      2010-02-10 16:08:05 EST LOG: received smart shutdown request
      2010-02-10 16:08:05 EST LOG: autovacuum launcher shutting down
      2010-02-10 16:08:05 EST LOG: shutting down
      2010-02-10 16:08:05 EST LOG: database system is shut down
      2010-02-10 16:08:07 EST LOG: database system was shut down at 2010-02-10 16:08:05 EST
      2010-02-10 16:08:07 EST LOG: autovacuum launcher started
      2010-02-10 16:08:07 EST LOG: database system is ready to accept connections
      2010-02-10 16:08:07 EST LOG: connection received: host=[local]
      2010-02-10 16:08:07 EST LOG: incomplete startup packet
      2010-02-10 16:08:07 EST LOG: connection received: host=[local]
      2010-02-10 16:08:07 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:08 EST LOG: connection received: host=[local]
      2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:08 EST LOG: connection received: host=[local]
      2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:09 EST LOG: connection received: host=[local]
      2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:09 EST LOG: connection received: host=[local]
      2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:10 EST LOG: connection received: host=[local]
      2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:10 EST LOG: connection received: host=[local]
      2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:11 EST LOG: connection received: host=[local]
      2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:11 EST LOG: connection received: host=[local]
      2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:12 EST LOG: connection received: host=[local]
      2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:12 EST LOG: connection received: host=[local]
      2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres"
      2010-02-10 16:08:12 EST LOG: connection received: host=[local]
      2010-02-10 16:08:12 EST LOG: incomplete startup packet

    My question regarding a potential consequence of this is posted here: http://stackoverflow.com/questions/2238954/mdb2-says-connection-failed-db-logs-say-otherwise , but I didn't realize this was happening when I asked that question, and I figured this [part of the] problem is for SF. Edit: I can connect to the database and manipulate things normally with the psql CLI and the postgres user.

    Read the article

  • Weird stuff in my /var/log/auth.log

    - by xXx
    I just checked the logs on my dedicated server and spotted some weird entries in auth.log:

      Jun 17 22:27:01 mutualab CRON[16249]: pam_unix(cron:session): session opened for user user by (uid=0)
      Jun 17 22:27:01 mutualab CRON[16249]: pam_unix(cron:session): session closed for user user
      Jun 17 22:28:01 mutualab CRON[16253]: pam_unix(cron:session): session opened for user user by (uid=0)
      Jun 17 22:28:01 mutualab CRON[16253]: pam_unix(cron:session): session closed for user alain
      Jun 17 22:29:01 mutualab CRON[16257]: pam_unix(cron:session): session opened for user user by (uid=0)
      Jun 17 22:29:01 mutualab CRON[16257]: pam_unix(cron:session): session closed for user user

    Looks like somebody tried to log in - and succeeded? - but logged out instantly? I've been getting the same entries for hours now... Do you know what's happening? N.B.: it's a 10.04 Ubuntu server.

    Read the article

  • Refreshing user's group membership in active directory without log-off/log-on

    - by Serge
    So, when a user logs in to their workstation, they receive the SIDs of the groups they are members of, and this set is used for the length of the session, until they log off. Is there a way to refresh the group membership SID information without actually having to log off and log on again? I've added myself to a group, but can't log off without interrupting a running process that requires these permissions. I don't want to have to go through these steps again...
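
    One workaround sketch: since group SIDs are stamped into the access token at logon, starting a new process under a fresh logon session with runas gives that process a newly built token that includes the updated membership, without ending the current session (the account name is a placeholder):

      REM New logon session -> new token with the updated group membership
      runas /user:DOMAIN\serge cmd.exe
      REM In the new window, verify the group shows up in the token:
      whoami /groups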

    Read the article

  • New event log nowhere to be found after creating in PowerShell

    - by Mega Matt
    Through PowerShell, I am attempting to create a new event log and write a test entry to it, but it is not showing up in the Event Viewer. This is the command I'm using to create a new event log:

      new-eventlog -logname TestLog -source TestLog

    And to write a new event to it:

      write-eventlog TestLog -source TestLog -eventid 12345 -message "Test message"

    After running the first command, there is no "TestLog" log in the Event Viewer anywhere, and I would expect it to show up in the Applications and Services Logs section. After running the second command, same result. However, I am seeing a registry key for the log at HKLM\SYSTEM\services\eventlog\TestLog. Just not seeing anything in the Event Viewer. So, 2 questions: When should I be seeing the event log - after it gets created, or after I write the first event to it? And, more importantly, why am I not seeing it at all? I'm using Windows Server 2008 R2, and am logged in and running PS as an administrator. Thanks.
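
    A quick way to check whether the log really exists, independent of the Event Viewer UI (the snap-in caches its log list, so refreshing or closing and reopening eventvwr is often needed before a newly created log appears):

      # Lists all logs registered with the EventLog service; TestLog should appear
      Get-EventLog -List
      # Reads back the test entry written above
      Get-EventLog -LogName TestLog -Newest 1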

    Read the article

  • Very large log files in IIS 7.x

    - by Neal
    Hello, I had a site stop working today, and when I RDP'd into the server I saw a warning about low disk space. The first thing I checked was the inetpub folder where the log files are stored, and sure enough it was huge - 40 GB huge. I do clean the files monthly, but what is causing a day's worth of logging on a medium-activity site (www.vbdotnetforums.com) to create 300-500 MB log files? I do have everything being logged so my SmarterStats software gives me the most info, but are there specific things I should/can turn off that are causing the most growth in these log files? Also, it sure would be nice if Microsoft someday shipped some sort of log file management, such as deleting log files after they exceed a certain total size, a certain age in days, etc. We all have to come up with some solution to delete the old ones manually. Thanks Neal
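
    Until such management exists, a scheduled task is the usual stand-in; a minimal PowerShell sketch, assuming the default IIS 7.x log location and a 30-day retention (both are assumptions to adjust):

      # Delete IIS log files older than 30 days
      $logDir = "C:\inetpub\logs\LogFiles"
      Get-ChildItem $logDir -Recurse -Filter *.log |
          Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
          Remove-Item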

    Read the article

  • Bash Script to Compress / Transfer / Remove Log Files

    - by Jason
    I am currently using chronolog to set date-stamped log file names for Apache. They are in the following format:

      /WEB/LOGS/APACHE_ACCESS_YYYY-MM-DD.log
      /WEB/LOGS/APACHE_ERROR_YYYY-MM-DD.log

    I would like to have a script that runs on the first of every month, compresses the log files from the previous month, transfers them to another host (via SCP) and then deletes the compressed file.

      find . -name '*.log' -mtime +1 -type f

    I've found several examples like the one above that let you select files x days old, but I need all files from the previous month. I am the first to admit my bash scripting skills are weak, so I would really appreciate any help and guidance.
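
    A minimal sketch of such a script, assuming GNU date and key-based SCP to a host called archive-host (both assumptions). Deriving last month's stamp from the date avoids the -mtime guesswork entirely; run it from cron on the 1st, e.g. 0 2 1 * *:

      #!/bin/bash
      # Compress, ship, then delete the archive of last month's Apache logs.
      prev=$(date -d "$(date +%Y-%m-01) -1 month" +%Y-%m)    # e.g. 2012-08
      archive="/WEB/LOGS/apache_logs_${prev}.tar.gz"

      tar czf "$archive" /WEB/LOGS/APACHE_ACCESS_${prev}-*.log \
                         /WEB/LOGS/APACHE_ERROR_${prev}-*.log &&
      scp "$archive" backup@archive-host:/backups/apache/ &&
      rm -f "$archive"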

    Read the article

  • Thoughts on Apache log file sizes?

    - by Nathan Long
    Do you place any limits on the size of Apache log files - access.log and error.log? Specifically, can you give:

    Reasons to limit log file sizes
      - Disk space
      - Any other?

    Reasons NOT to limit log file sizes
      - Research into performance issues or security breaches
      - Any other?

    Methods of doing so (see the logrotate sketch below)
      - Cron job that periodically deletes the file, or the first N lines?
      - Any other?

    Anything you might salvage before deleting
      - For example, grep out how many times a file was downloaded before deleting the access logs

    I'd like to get the thoughts of experienced sysadmins before I do anything. (Marking as community wiki since this may be a matter of opinion.)
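
    On the "methods" point, the conventional tool is logrotate rather than a hand-rolled cron job; a sketch of an /etc/logrotate.d/apache2 entry (paths and retention are assumptions, not a one-size-fits-all recommendation):

      /var/log/apache2/*.log {
          monthly
          rotate 12           # keep a year of compressed history
          compress
          delaycompress       # don't compress the file Apache may still hold open
          notifempty
          sharedscripts
          postrotate
              /usr/sbin/apachectl graceful > /dev/null
          endscript
      }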

    Read the article

  • Accessing large log files on a unix machine with TextPad

    - by Jason
    Hi, I'm interested in accessing large log files on a unix server with TextPad (TextPad for historical reasons; I personally prefer less, awk, grep, etc.), but many of my staff would rather use TextPad - they have years of experience with it and can tweak it to do whatever they want. The problem is that if I connect, for example, with WinSCP to get the log files into TextPad, it first fetches the full log, so the user has to wait and memory bloats, etc. I would rather have TextPad somehow access the unix machine and get only the relevant segment of the log file (large log files could be gigabytes). Does anyone know how this can be achieved?
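
    One workaround sketch, assuming SSH access and PuTTY's plink on the Windows side (all paths are placeholders): fetch only a tail of the file into a local temp file and open that in TextPad, instead of letting WinSCP pull the whole log:

      REM Pull only the last 10,000 lines of the remote log...
      plink user@unixhost "tail -n 10000 /var/log/app/big.log" > %TEMP%\big-tail.log
      REM ...and open the excerpt in TextPad
      start "" "C:\Program Files\TextPad 5\TextPad.exe" %TEMP%\big-tail.log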

    Read the article

  • Split big Apache log to folder structure

    - by Dough
    I just changed my Apache log behavior because it was producing very BIG files... So I now use cronolog to split my logs, to log/httpd/2012/11/access_2012.11.30.log for example, pattern:

      %Y/%m/access_%Y.%m.%d.log

    I now want to split my old 42 GB file into the same structure but really don't know how to do that efficiently. I tried some simple commands with cat, egrep, awk... but really don't know how to handle all that in a more powerful script. Here is what the log looks like:

      x.x.237.134 - - [08/Apr/2011:14:43:09 +0200] "GET...
      x.x.50.15 - - [08/Apr/2011:14:43:09 +0200] "GET...
      [...]
      x.x.254.19 - - [28/Feb/2012:15:24:48 +0100] "GET...

    So for each line I need to get the year %Y (ex. 2012), month %m (ex. 11) and day %d, and push the entire line out to %Y/%m/access_%Y.%m.%d.log. Can someone give me clues to get that working? Thanks a lot for your interest.
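
    A sketch of a one-pass awk split, assuming the timestamp is always field 4 in [dd/Mon/yyyy:... form as in the sample; create the year/month directories first (e.g. mkdir -p 2011/01 ... 2012/12), since awk won't make them:

      awk '
      BEGIN {
          split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", names, " ")
          for (i = 1; i <= 12; i++) m[names[i]] = sprintf("%02d", i)
      }
      {
          # $4 looks like [08/Apr/2011:14:43:09 - strip the "[" and split on "/"
          split(substr($4, 2), t, "/")
          day = t[1]; mon = m[t[2]]; year = substr(t[3], 1, 4)
          out = year "/" mon "/access_" year "." mon "." day ".log"
          print >> out
          close(out)       # stays one pass without exhausting file descriptors
      }' big_access.log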

    Read the article

  • Log backups "stalling" on SQL 2008?

    - by MattK
    I have inherited a box running SQL Server 2008 and Windows 2003, and have had a few events where largeish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, thus the source database files and destination backup files are on the same volume. However, I cannot determine if another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
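
    A diagnostic starting point while the backup sits in BACKUPIO (standard DMVs on SQL Server 2008, so nothing assumed beyond VIEW SERVER STATE rights): check what the backup session reports for progress and waits, and whether anything is blocking it:

      SELECT session_id, command, status, blocking_session_id,
             wait_type, wait_time, percent_complete,
             estimated_completion_time
      FROM sys.dm_exec_requests
      WHERE command LIKE 'BACKUP%';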

    Read the article

  • Polynomial fitting with log log plot

    - by viral parekh
    I have a simple problem: fit a straight line on a log-log scale. My code is:

      data = loadtxt(filename)
      xdata = data[:,0]
      ydata = data[:,1]
      polycoeffs = scipy.polyfit(xdata, ydata, 1)
      yfit = scipy.polyval(polycoeffs, xdata)
      pylab.plot(xdata, ydata, 'k.')
      pylab.plot(xdata, yfit, 'r-')

    Now I need to plot the fit line on a log scale, so I just change the x and y axes:

      ax.set_yscale('log')
      ax.set_xscale('log')

    but then it doesn't plot the correct fit line. So how can I change the fit function (to fit in log scale) so that it plots the fit line correctly on a log-log scale? Thanks -Viral
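
    The usual fix is to fit in log space and transform back, rather than fitting the raw data and merely switching the axes; a sketch using the same scipy/pylab calls as above (the file name is a placeholder):

      import numpy as np
      import scipy, pylab

      data = np.loadtxt("data.txt")          # placeholder file name
      xdata, ydata = data[:, 0], data[:, 1]

      # Fit log10(y) = a*log10(x) + b, i.e. a straight line in log-log space
      logx, logy = np.log10(xdata), np.log10(ydata)
      polycoeffs = scipy.polyfit(logx, logy, 1)
      yfit = 10 ** scipy.polyval(polycoeffs, logx)

      pylab.loglog(xdata, ydata, 'k.')
      pylab.loglog(xdata, yfit, 'r-')
      pylab.show()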

    Read the article

  • Transactional Messaging in the Windows Azure Service Bus

    - by Alan Smith
    Introduction

    I’m currently working on broadening the content in the Windows Azure Service Bus Developer Guide. One of the features I have been looking at over the past week is the support for transactional messaging. When using the direct programming model and the WCF interface some, but not all, messaging operations can participate in transactions. This allows developers to improve the reliability of messaging systems. There are some limitations in the transactional model: transactions can only include one top-level messaging entity (such as a queue or topic; subscriptions are not top-level entities), and transactions cannot include other systems, such as databases. As the transaction model is currently not well documented I have had to figure out how things work through experimentation, with some help from the development team to confirm any questions I had. Hopefully I’ve got the content mostly correct; I will update the content in the e-book if I find any errors or improvements that can be made (any feedback would be very welcome). I’ve not had a chance to look into the code for transactions and asynchronous operations; maybe that would make a nice challenge lab for my Windows Azure Service Bus course.

    Transactional Messaging

    Messaging entities in the Windows Azure Service Bus provide support for participation in transactions. This allows developers to perform several messaging operations within a transactional scope, and ensure that all the actions are committed or, if there is a failure, none of the actions are committed. There are a number of scenarios where the use of transactions can increase the reliability of messaging systems.

    Using TransactionScope

    In .NET the TransactionScope class can be used to perform a series of actions in a transaction. A using statement is typically used to define the scope of the transaction. Any transactional operations that are contained within the scope can be committed by calling the Complete method. If the Complete method is not called, any transactional methods in the scope will not commit.

      // Create a transactional scope.
      using (TransactionScope scope = new TransactionScope())
      {
          // Do something.

          // Do something else.

          // Commit the transaction.
          scope.Complete();
      }

    In order for methods to participate in the transaction, they must provide support for transactional operations. Database and message queue operations typically provide support for transactions.

    Transactions in Brokered Messaging

    Transaction support in Service Bus Brokered Messaging allows message operations to be performed within a transactional scope; however, there are some limitations around what operations can be performed within the transaction. In the current release, only one top-level messaging entity, such as a queue or topic, can participate in a transaction, and the transaction cannot include any other transaction resource managers, making transactions that span a messaging entity and a database impossible.

    When sending messages, the send operations can participate in a transaction, allowing multiple messages to be sent within a transactional scope. This allows for “all or nothing” delivery of a series of messages to a single queue or topic.

    When receiving messages, messages that are received in the peek-lock receive mode can be completed, deadlettered or deferred within a transactional scope. In the current release the Abandon method will not participate in a transaction. The same restriction of only one top-level messaging entity applies here, so the Complete method can be called transactionally on messages received from the same queue, or on messages received from one or more subscriptions in the same topic.

    Sending Multiple Messages in a Transaction

    A transactional scope can be used to send multiple messages to a queue or topic. This will ensure that all the messages will be enqueued or, if the transaction fails to commit, no messages will be enqueued. An example of the code used to send 10 messages to a queue as a single transaction from a console application is shown below.

      QueueClient queueClient = messagingFactory.CreateQueueClient(Queue1);

      Console.Write("Sending");

      // Create a transaction scope.
      using (TransactionScope scope = new TransactionScope())
      {
          for (int i = 0; i < 10; i++)
          {
              // Send a message
              BrokeredMessage msg = new BrokeredMessage("Message: " + i);
              queueClient.Send(msg);
              Console.Write(".");
          }
          Console.WriteLine("Done!");
          Console.WriteLine();

          // Should we commit the transaction?
          Console.WriteLine("Commit send 10 messages? (yes or no)");
          string reply = Console.ReadLine();
          if (reply.ToLower().Equals("yes"))
          {
              // Commit the transaction.
              scope.Complete();
          }
      }
      Console.WriteLine();
      messagingFactory.Close();

    The transaction scope is used to wrap the sending of 10 messages. Once the messages have been sent the user has the option to either commit the transaction or abandon it. If the user enters “yes”, the Complete method is called on the scope, which will commit the transaction and result in the messages being enqueued. If the user enters anything other than “yes”, the transaction will not commit, and the messages will not be enqueued.

    Receiving Multiple Messages in a Transaction

    The receiving of multiple messages is another scenario where the use of transactions can improve reliability. When receiving a group of messages that are related together, maybe in the same message session, it is possible to receive the messages in the peek-lock receive mode, and then complete, defer, or deadletter the messages in one transaction. (In the current version of Service Bus, abandon is not transactional.) The following code shows how this can be achieved.

      using (TransactionScope scope = new TransactionScope())
      {
          while (true)
          {
              // Receive a message.
              BrokeredMessage msg = q1Client.Receive(TimeSpan.FromSeconds(1));
              if (msg != null)
              {
                  // Write the message body and complete the message.
                  string text = msg.GetBody<string>();
                  Console.WriteLine("Received: " + text);
                  msg.Complete();
              }
              else
              {
                  break;
              }
          }
          Console.WriteLine();

          // Should we commit?
          Console.WriteLine("Commit receive? (yes or no)");
          string reply = Console.ReadLine();
          if (reply.ToLower().Equals("yes"))
          {
              // Commit the transaction.
              scope.Complete();
          }
          Console.WriteLine();
      }

    Note that if there are a large number of messages to be received, there is a chance that the transaction may time out before it can be committed. It is possible to specify a longer timeout when the transaction is created, but it may be better to receive and commit smaller batches of messages within the transaction.

    It is also possible to complete, defer, or deadletter messages received from more than one subscription, as long as all the subscriptions are contained in the same topic. As subscriptions are not top-level messaging entities, this scenario will work. The following code shows how this can be achieved.

      try
      {
          using (TransactionScope scope = new TransactionScope())
          {
              // Receive one message from each subscription.
              BrokeredMessage msg1 = subscriptionClient1.Receive();
              BrokeredMessage msg2 = subscriptionClient2.Receive();

              // Complete the message receives.
              msg1.Complete();
              msg2.Complete();

              Console.WriteLine("Msg1: " + msg1.GetBody<string>());
              Console.WriteLine("Msg2: " + msg2.GetBody<string>());

              // Commit the transaction.
              scope.Complete();
          }
      }
      catch (Exception ex)
      {
          Console.WriteLine(ex.Message);
      }

    Unsupported Scenarios

    The restriction that only one top-level messaging entity can participate in a transaction makes some useful scenarios unsupported. As the Windows Azure Service Bus is under continuous development and new releases are expected to be frequent, it is possible that this restriction may not be present in future releases.

    The first is the scenario where messages are to be routed to two different systems. The following code attempts to do this.

      try
      {
          // Create a transaction scope.
          using (TransactionScope scope = new TransactionScope())
          {
              BrokeredMessage msg1 = new BrokeredMessage("Message1");
              BrokeredMessage msg2 = new BrokeredMessage("Message2");

              // Send a message to Queue1
              Console.WriteLine("Sending Message1");
              queue1Client.Send(msg1);

              // Send a message to Queue2
              Console.WriteLine("Sending Message2");
              queue2Client.Send(msg2);

              // Commit the transaction.
              Console.WriteLine("Committing transaction...");
              scope.Complete();
          }
      }
      catch (Exception ex)
      {
          Console.WriteLine(ex.Message);
      }

    When attempting to send a message to the second queue, the following exception is thrown:

      No active Transaction was found for ID '35ad2495-ee8a-4956-bbad-eb4fedf4a96e:1'. The Transaction may have timed out or attempted to span multiple top-level entities such as Queue or Topic. The server Transaction timeout is: 00:01:00..TrackingId:947b8c4b-7754-4044-b91b-4a959c3f9192_3_3,TimeStamp:3/29/2012 7:47:32 AM.

    Another scenario where transactional support could be useful is forwarding messages from one queue to another queue. This would also involve more than one top-level messaging entity, and is therefore not supported.

    Another scenario that developers may wish to implement is performing transactions across messaging entities and other transactional systems, such as an on-premise database. In the current release this is not supported.

    Workarounds for Unsupported Scenarios

    There are some techniques that developers can use to work around the one-top-level-entity limitation of transactions. When sending two messages to two systems, topics and subscriptions can be used. If the same message is to be sent to two destinations, then the subscriptions would use the default filters, and the client would only send one message. If two different messages are to be sent, then filters on the subscriptions can route the messages to the appropriate destination. The client can then send the two messages to the topic in the same transaction; a sketch of this follows below.

    In scenarios where a message needs to be received and then forwarded to another system within the same transaction, topics and subscriptions can also be used. A message can be received from a subscription, and then sent to a topic within the same transaction. As a topic is a top-level messaging entity, and a subscription is not, this scenario will work.
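
    A minimal sketch of the send-side workaround described above (the topicClient, the Destination property and the filter values are illustrative, not from the original post; the two subscriptions would be created with SqlFilters such as Destination = 'System1'):

      // One topic is a single top-level entity, so one transaction can
      // cover both sends; subscription filters route each message on.
      using (TransactionScope scope = new TransactionScope())
      {
          BrokeredMessage msg1 = new BrokeredMessage("Message1");
          msg1.Properties["Destination"] = "System1";
          topicClient.Send(msg1);

          BrokeredMessage msg2 = new BrokeredMessage("Message2");
          msg2.Properties["Destination"] = "System2";
          topicClient.Send(msg2);

          // Commit: both messages enqueue, or neither does.
          scope.Complete();
      }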

    Read the article

  • Can a large transaction log cause CPU hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15 GB, with roughly 5 GB to the db and 10 GB to the transaction log. Just recently a web application that connects to that db has been timing out. I have traced the actions on the web page and examined the queries that execute while these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU hikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say multiple, read about 5). Under this load, timeouts start to occur. I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks,
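
    As a first housekeeping check, a T-SQL sketch to see how full the log really is and to bring it back to a sane size (the database and logical file names are assumptions, and the recovery-model change is only appropriate if point-in-time recovery is not actually needed; otherwise take regular log backups instead):

      -- How much of each database's log is actually in use?
      DBCC SQLPERF(LOGSPACE);

      -- Only if point-in-time recovery is not required for this database:
      ALTER DATABASE ClientDb SET RECOVERY SIMPLE;     -- name assumed
      DBCC SHRINKFILE (ClientDb_log, 1024);            -- target size in MB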

    Read the article

  • MySQL Create tables without committing current transaction

    - by user276648
    I'd like my program to be able to install plugins and roll back all the changes made if an error occurs. So I create a transaction that keeps all the things that were added while installing the plugin. The problem is that the plugin may want to create tables, and doing so automatically commits the current transaction in MySQL. See "Statements That Cause an Implicit Commit" on the MySQL web site. Any idea how I could do it? I thought of using temporary tables, as they are not automatically committed (unless they are using too much memory), but it looks like temporary tables cannot be rolled back anyway (and I haven't found a way to convert them to permanent tables). I just found out about savepoints (http://dev.mysql.com/doc/refman/5.1/en/savepoint.html), but I don't really understand how/when they should be used, nor whether they can help me achieve what I want.
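
    Since MySQL DDL cannot be rolled back at all, one common pattern is compensation: record each table the plugin creates, and drop them by hand if a later step fails. A sketch with illustrative table names:

      -- Each CREATE TABLE implicitly commits, so keep a list of what was made...
      CREATE TABLE plugin_x_settings (id INT PRIMARY KEY, name VARCHAR(64));
      CREATE TABLE plugin_x_data (id INT PRIMARY KEY, payload TEXT);

      -- ...and on any later error, "roll back" the DDL manually:
      DROP TABLE IF EXISTS plugin_x_data, plugin_x_settings;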

    Read the article

  • awstats parse of postfix mail log drops all records

    - by accidental admin
    I'm trying to get awstats to parse the postfix mail log, but it drops almost all entries with messages like:

      Corrupted record (date 20091204042837 lower than 20091211065829-20000): 2009-12-04 04:28:37 root root localhost 127.0.0.1 SMTP - 1 17480

    A few more are dropped with an invalid LogFormat:

      Corrupted record line 24 (record format does not match LogFormat parameter): 2009-11-16 04: 28:22 root root localhost 127.0.0.1 SMTP - 14755

    My conf has:

      LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd"

    which I believe matches the log format (and besides, it is the log format I've seen everywhere for awstats mail parsing). It is also the same entry format as all the other entries in the mail log. Whatever is left is dropped too:

      Dropped record (host localhost and 127.0.0.1 not qualified by SkipHosts): 2009-12-07 04:28:36 root root localhost 127.0.0.1 SMTP - 1 17152

    I added SkipHosts="" to the .conf file but to no avail. I feel like awstats really has some personal quarrel with me today.

    Read the article

  • Get sessions' remote IP from Teamviewer log file

    - by etuardu
    I'd like to know who has logged in to my machine and when. I have two TeamViewer log files: Connections_incoming.txt and TeamViewer7_Logfile.log. The first one is quite plain and lists, as its name says, the incoming connections to the machine, reporting the local name of the remote host, login time, logout time, and some IDs, e.g.:

      173274362 MYLAPTOP 20-02-2012 17:32:16 20-02-2012 17:50:42 Master RemoteControl {C5AAE483-ED0B-54B8-9235-7AE597CAD342}

    This is almost all I need, but unfortunately no remote IP address is reported here, so I checked for IPs in TeamViewer7_Logfile.log, but it is really messy. It does contain some IP addresses, but I can't understand which one is bound to which entry in the first log file. Is there a way to cross-reference the two logs to get what I need? Should I search the second file for some particular text? What do you suggest?
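
    A starting-point sketch (the exact wording of TeamViewer 7 log lines is version-dependent, so treat the pattern as an assumption): search the verbose log for the session ID that appears in Connections_incoming.txt, then look for IP addresses in the surrounding lines:

      REM Find every line mentioning the session seen in Connections_incoming.txt
      findstr /C:"173274362" TeamViewer7_Logfile.log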

    Read the article

  • IIS 7.5 log to: sql server vs file

    - by stacker
    I want to know whether having IIS log directly to SQL Server is resource-costly, and whether a better solution might be to generate log files and import them into SQL Server each hour. Is it a very big cost to log each request directly to SQL Server? The pages open a connection to the database for each request anyway.
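
    A sketch of the hourly-import alternative (table name, file path and header layout are assumptions; the W3C log fields must line up with the table's columns):

      -- Hourly job: bulk-load the latest W3C log file into a staging table
      BULK INSERT dbo.IisLogStaging
      FROM 'C:\inetpub\logs\LogFiles\W3SVC1\u_ex12063011.log'
      WITH (FIELDTERMINATOR = ' ',
            ROWTERMINATOR   = '\n',
            FIRSTROW        = 5);   -- skip the #-comment header lines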

    Read the article

  • Creating a custom view for windows log based on a "Contains {text}" rule

    - by jussinen
    I have a server running Windows Server 2008. I'm using Windows Server auditing to check when, and by which user, a folder is modified, to determine who is making the modifications that are causing problems. I can see the audit entries in the System log when a change is made. How do I create a Custom View that will return all events from the System log where a certain text (the folder name) is present? The Create Custom View dialog doesn't seem to have that option. I'm not sure whether it's possible via a custom XML query, or whether I'll need to export the System log to CSV and search in Excel. John
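
    The XML filter dialect used by custom views supports only a small XPath 1.0 subset - notably without contains() - which is why the dialog offers no such rule; a PowerShell fallback sketch (the folder path is a placeholder):

      # All System-log events whose message mentions the audited folder
      Get-WinEvent -LogName System |
          Where-Object { $_.Message -like '*D:\Shares\ProblemFolder*' }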

    Read the article

  • Event log message size 31885? Windows 2008

    - by testuser
    We recently upgraded our production boxes from Windows 2003 to Windows 2008 servers. Everything works fine except the event logging. We log at most 32000 bytes of data for each message. On the 2008 servers, event logging fails if the number of characters is greater than 31885. Is this a new limit on Windows 2008 R2 servers? On the Win 2003 servers, I am able to log 32000 bytes of data for each log entry. Any help appreciated.

    Read the article

  • Reading log files from web application

    - by Egorinsk
    Hi! I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant access to the logs for a PHP application? Let's assume that log files can be really large, like hundreds of megabytes. I have some ideas:

    - Write a shell script that would be run via sudo and tail the last 512 KB of log into a separate file that can be read by the application - that's ineffective, because of forking a new process and having to read data twice (see the sketch below for a variant that streams instead)
    - Add www-data to the adm group (which can read logs) - that's insecure
    - Start a PHP process via cron every minute to read logs - that doesn't allow real-time monitoring, and the script will run even when I'm not reading logs and consume CPU time (the server is in the cloud, and I'll have to pay for it)
    - Create a hardlink to all log files with lowered permissions - I guess that won't work, because logrotate could recreate the log files and they'll change inode number
    - Start a separate nginx/Apache server under a privileged user that may read logs

    Maybe anyone has a better solution?
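
    A sketch of a tightened variant of the first idea: stream the log through sudo tail so the data is read once and never copied to an intermediate file (the sudoers entry and all paths are placeholders):

      <?php
      // Needs /etc/sudoers.d/logread containing something like:
      //   www-data ALL=(root) NOPASSWD: /usr/bin/tail
      // so the web user may run tail (and only tail) without a password.
      $fh = popen('sudo /usr/bin/tail -c 524288 /var/log/syslog 2>&1', 'r');
      while (($line = fgets($fh)) !== false) {
          echo htmlspecialchars($line);
      }
      pclose($fh);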

    Read the article

  • Apache log lines contain "..."

    - by mtah
    We have a custom log line format for Apache logs which are analyzed:

      CustomLog "|/usr/sbin/rotatelogs -l /mnt/var/log/apache2/access-%Y%m%d%H%M%S.log 900" "%a %{%s}t \"%r\""

    However, some log lines are mysteriously shortened with "..." for some reason - how can this be? The shortest line discovered where this occurs is 317 chars, while the longest line is well over 2000 chars:

      "GET /exposure?sg=&ap=0x0&fv=WIN%2010,0,22,87&si=IH95VDUAVLJ0&pt=Lage%20hjemmelaget%20sengegavl%20-%20Forum%20-%20Diskusjon.no&iv=0&sd=1024x600&ct=680&tz=-120&eu=http%3A//www.diskusjon.no/index.php%3Fshowtopic%3D1011139&l...AS3&an=NO%20-%20180x500%20Pretail%20CPC&wd=1024x483&rf=http%3A//www.google.no/search%3Fhl%3Dno%26source%3Dhp%26q%3Dsengegavl+lage%26meta%3D%26aq%3D2%26aqi%3Dg10%26aql%3D%26oq%3Dsengega%26gs_rfai%3D&ui=3INYF5QAZL10&ws=0x417&ad=180x500&sa= HTTP/1.1"

    Read the article

  • Transaction classification. Artificial intelligence

    - by Alex
    For a project, I have to classify a list of banking transactions based on their description. Suppose I have 2 categories: health and entertainment. Initially, the transactions will have basic information: date and time, amount and a description given by the user. For example:

      Transaction 1: 09/17/2012 12:23:02 pm - 45.32$ - "medicine payments"
      Transaction 2: 09/18/2012 1:56:54 pm - 8.99$ - "movie ticket"
      Transaction 3: 09/18/2012 7:46:37 pm - 299.45$ - "dentist appointment"
      Transaction 4: 09/19/2012 6:50:17 am - 45.32$ - "videogame shopping"

    The idea is to use the description to classify the transaction: 1 and 3 would go to the "health" category, while 2 and 4 would go to "entertainment". I want to use the Google Prediction API to do this. In reality, I have 7 different categories and, for each one, a lot of keywords related to that category. I would use some for training and some for testing. Is this even possible? I mean, to determine the category given a few words? Plus, the number of words is not necessarily the same in every transaction. Thanks for any help or guidance! Very appreciated. Possible solution: https://developers.google.com/prediction/docs/hello_world?hl=es#theproblem
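
    For the Prediction API route, the training data is just a CSV of label-then-features rows, as in the linked hello_world example; a sketch using the transactions above, with the description alone as the feature (whether descriptions this short separate 7 categories well is an open question):

      "health","medicine payments"
      "entertainment","movie ticket"
      "health","dentist appointment"
      "entertainment","videogame shopping"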

    Read the article
