Search Results

Search found 17616 results on 705 pages for 'uls log'.

Page 4/705

  • How to completely disable apache access log? [closed]

    - by Miljenko Barbir
    I'm running WAMP server on Windows Server 2003, Apache 2.2, and I would like to completely disable writing into the access log. It would be neat if I could do the following, but I'm on Windows: CustomLog "|/dev/null" common All I get in the error log is "piped log program '/dev/null' failed unexpectedly", although I kinda expected this... Is there a Windows alternative to this or any other way to just disable writing the access log?

    Read the article

  • Odd log entries when starting up PostgreSQL

    - by Shadow
    When restarting pgSQL, I get the following log entries: 2010-02-10 16:08:05 EST LOG: received smart shutdown request 2010-02-10 16:08:05 EST LOG: autovacuum launcher shutting down 2010-02-10 16:08:05 EST LOG: shutting down 2010-02-10 16:08:05 EST LOG: database system is shut down 2010-02-10 16:08:07 EST LOG: database system was shut down at 2010-02-10 16:08:05 EST 2010-02-10 16:08:07 EST LOG: autovacuum launcher started 2010-02-10 16:08:07 EST LOG: database system is ready to accept connections 2010-02-10 16:08:07 EST LOG: connection received: host=[local] 2010-02-10 16:08:07 EST LOG: incomplete startup packet 2010-02-10 16:08:07 EST LOG: connection received: host=[local] 2010-02-10 16:08:07 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:08 EST LOG: connection received: host=[local] 2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:08 EST LOG: connection received: host=[local] 2010-02-10 16:08:08 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:09 EST LOG: connection received: host=[local] 2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:09 EST LOG: connection received: host=[local] 2010-02-10 16:08:09 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:10 EST LOG: connection received: host=[local] 2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:10 EST LOG: connection received: host=[local] 2010-02-10 16:08:10 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:11 EST LOG: connection received: host=[local] 2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:11 EST LOG: connection received: host=[local] 2010-02-10 16:08:11 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:12 EST LOG: connection received: host=[local] 2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:12 EST LOG: connection received: host=[local] 2010-02-10 16:08:12 EST FATAL: password authentication failed for user "postgres" 2010-02-10 16:08:12 EST LOG: connection received: host=[local] 2010-02-10 16:08:12 EST LOG: incomplete startup packet My question regarding a potential consequence of this is posted here: http://stackoverflow.com/questions/2238954/mdb2-says-connection-failed-db-logs-say-otherwise , but I didn't realize this was happening when I asked that question, and I figured this [part of the] problem is for SF. Edit: I can connect to the database and manipulate things normally with the psql CLI and the postgres user.

    Read the article

  • Weird stuff in my /var/log/auth.log

    - by xXx
    I just checked the logs on my dedicated server and spotted some weird entries in auth.log: Jun 17 22:27:01 mutualab CRON[16249]: pam_unix(cron:session): session opened for user user by (uid=0) Jun 17 22:27:01 mutualab CRON[16249]: pam_unix(cron:session): session closed for user user Jun 17 22:28:01 mutualab CRON[16253]: pam_unix(cron:session): session opened for user user by (uid=0) Jun 17 22:28:01 mutualab CRON[16253]: pam_unix(cron:session): session closed for user alain Jun 17 22:29:01 mutualab CRON[16257]: pam_unix(cron:session): session opened for user user by (uid=0) Jun 17 22:29:01 mutualab CRON[16257]: pam_unix(cron:session): session closed for user user Looks like somebody tries to log in - and succeeds? - but logs out instantly? I've been getting the same entries for hours now... Do you know what is happening? N.B.: it's a 10.04 Ubuntu server

    Read the article

  • Refreshing user's group membership in Active Directory without log-off/log-on

    - by Serge
    So, when a user logs in to their workstation, they receive the SIDs of the groups they are members of, and these are used for the length of the session, until they log off. Is there a way to refresh the membership SID information without actually having to log off and log on again? I've added myself to a group, but can't log off without interrupting a running process that requires these permissions. I don't want to have to go through these steps again...

    Read the article

  • New event log nowhere to be found after creating in PowerShell

    - by Mega Matt
    Through PowerShell, I am attempting to create a new event log and write a test entry to it, but it is not showing up in the Event Viewer. This is the command I'm using to create a new event log: new-eventlog -logname TestLog -source TestLog And to write a new event to it: write-eventlog TestLog -source TestLog -eventid 12345 -message "Test message" After running the first command, there is no "TestLog" log in the event viewer anywhere, and I would expect it to show up in the Applications and Services Logs section. After running the second command, same result. However, I am seeing a registry key for the log at HKLM\SYSTEM\services\eventlog\TestLog. Just not seeing anything in the event viewer. So, 2 questions: When should I be seeing the event log? After it gets created or after I write the first event to it? And, more importantly, why am I not seeing it at all? I'm using Windows Server 2008 R2, and am logged in and running the PS as an administrator. Thanks.

    Read the article

  • Very large log files in IIS 7.x

    - by Neal
    Hello, I had a site stop working today and when I RDP'd into the server I saw a warning about low disk space. The first thing I checked was the inetpub folder where the log files are stored and sure enough it was huge, 40 GB huge. I do clean the files monthly but what is causing a day's worth of logging on a medium activity site (www.vbdotnetforums.com) to create 300-500 MB log files? I do have everything being logged so my SmarterStats software gives me the most info, but are there specific things I should/can turn off that is causing the most growth in these log files? Also, sure would be nice if Microsoft someday had some sort of log file management such as deleting log files after they exceed a certain size (total), X days, etc. We all have to come up with some solution to delete the old ones manually. Thanks Neal
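
    A rough Python sketch of the sort of cleanup that currently has to be done by hand, assuming the default IIS log location and a 30-day retention window (both the path and the age are assumptions to adjust):

        import os, time

        # Assumed IIS log folder and retention window - adjust for your site
        LOG_DIR = r"C:\inetpub\logs\LogFiles\W3SVC1"
        MAX_AGE_DAYS = 30

        cutoff = time.time() - MAX_AGE_DAYS * 24 * 3600
        for name in os.listdir(LOG_DIR):
            path = os.path.join(LOG_DIR, name)
            # Only touch plain .log files older than the cutoff
            if name.lower().endswith(".log") and os.path.getmtime(path) < cutoff:
                os.remove(path)
                print("deleted", path)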

    Read the article

  • Bash Script to Compress / Transfer / Remove Log Files

    - by Jason
    I am currently using chronolog to set log file names for Apache with date. They are in the following format: /WEB/LOGS/APACHE_ACCESS_YYYY-MM-DD.log /WEB/LOGS/APACHE_ERROR_YYYY-MM-DD.log I would like to have a script that runs on the first of every month and compresses the log files from the previous month, transfers them to another host (via SCP) and then deletes the compressed file. find . -name '*.log' -mtime +1 -type f I've found several examples like the one above that allow you to select files x days old, but I need all files from the previous month. I am the first to admit my bash scripting skills are weak so would really appreciate any help and guidance.
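
    A rough Python sketch of that monthly job, assuming the chronolog file names above; the remote host and path are placeholders, and scp is assumed to work non-interactively (e.g. via SSH keys):

        import glob, gzip, os, shutil, subprocess
        from datetime import date

        LOG_DIR = "/WEB/LOGS"
        REMOTE = "backup@archive.example.com:/ARCHIVE/LOGS/"   # placeholder host/path

        today = date.today()
        # First day of the previous month (run this on the 1st of each month)
        prev = date(today.year - 1, 12, 1) if today.month == 1 else date(today.year, today.month - 1, 1)
        prefix = prev.strftime("%Y-%m")

        for path in glob.glob(os.path.join(LOG_DIR, "APACHE_*_%s-*.log" % prefix)):
            gz_path = path + ".gz"
            with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
                shutil.copyfileobj(src, dst)                 # compress
            subprocess.check_call(["scp", gz_path, REMOTE])  # transfer
            os.remove(gz_path)                               # delete the compressed copy
            # the original .log files are left in place; remove them here too if desired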

    Read the article

  • Thoughts on Apache log file sizes?

    - by Nathan Long
    Do you place any limits on the size of Apache log files - access.log and error.log? Specifically, can you give:
    Reasons to limit log file sizes - disk space; any other?
    Reasons NOT to limit log file sizes - research into performance issues or security breaches; any other?
    Methods of doing so - a cron job that periodically deletes the file, or the first N lines; any other?
    Anything you might salvage before deleting - for example, grep out how many times a file was downloaded before deleting the access logs (a rough sketch of this follows below).
    I'd like to get the thoughts of experienced sysadmins before I do anything. (Marking as community wiki since this may be a matter of opinion.)
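
    For the "salvage before deleting" item, a small Python sketch that counts how often one file was downloaded, assuming the common/combined log format (the log path and target path are examples):

        import re

        TARGET = "/downloads/setup.exe"           # example file to count
        count = 0
        with open("/var/log/apache2/access.log") as log:
            for line in log:
                # in common/combined format the request field is "METHOD /path HTTP/1.x"
                match = re.search(r'"[A-Z]+ (\S+) HTTP/', line)
                if match and match.group(1) == TARGET:
                    count += 1
        print(TARGET, "was requested", count, "times")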

    Read the article

  • Accessing large log files on a unix machine with textpad

    - by Jason
    Hi, I'm interested in accessing large log files on a Unix server with TextPad (TextPad for historical reasons; I personally prefer less, awk, grep, etc.), but I have many personnel who would rather use TextPad - they have years of experience with it and can tweak it to do whatever they want. The problem is that if I connect, for example, with WinSCP to get the log files into TextPad, it first fetches the full log, so the user has to wait and it bloats, etc. I would rather have TextPad somehow access the Unix machine and get only the relevant segment of the log file (large log files could be gigabytes in size). Does anyone know how this can be achieved?
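
    One way to avoid pulling the whole file is to run tail on the Unix side and only copy the segment back for TextPad to open; a hedged Python sketch of that idea (the host, log path, and 1 MB window are placeholders, and an ssh client is assumed to be on the PATH):

        import subprocess

        HOST = "user@unixserver.example.com"     # placeholder
        REMOTE_LOG = "/var/log/app/app.log"      # placeholder
        NUM_BYTES = 1024 * 1024                  # fetch only the last 1 MB

        # tail runs on the remote side, so only the segment travels over the network
        segment = subprocess.check_output(
            ["ssh", HOST, "tail", "-c", str(NUM_BYTES), REMOTE_LOG]
        )
        with open("app_segment.log", "wb") as out:
            out.write(segment)
        # app_segment.log can now be opened in TextPad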

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I’ll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install, and play along. But first, let’s describe some disaster recovery scenarios where it’s useful.
    About SQL Server disaster recovery
    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one’s immune. What you can do is take all actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it’s necessary to have a list of steps that will be executed when a disaster occurs and to test them before a disaster. This way, you’ll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don’t meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require new recovery plan testing, as these changes will most probably require changing and tweaking the recovery plan.
    What is a good SQL Server disaster recovery plan?
    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and a differential database backup every night until the next Friday, when you create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.
    Why are transaction logs important?
    Transaction log backups contain transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll back or forward the database to a point in time. In SQL Server disaster recovery situations, transaction logs enable you to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing. These are the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it’s not necessary to re-create the database from its last backup. The database might still be online and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore to a point in time feature is available in SQL Server, but for large databases, it is very time-consuming, as SQL Server first restores a full database backup, and then restores transaction log backups, one after another, up to the recovery point.
    During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it’s important that you haven’t manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.
    How to read a SQL Server transaction log?
    SQL Server doesn’t provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it’s necessary to match them to values in the MDF file. When you finally read the transactions executed, you still have to create a script for them.
    How to easily repair a SQL database?
    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the read transactions. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available. First, restore the last full database backup on SQL Server using SQL Server Management Studio. I’ll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don’t have transaction log backups, but the LDF file hasn’t been lost during the SQL Server disaster, add it using Add. To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That’s why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transactions, operation type (e.g. to read only data inserts), table name, SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I’ll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired. The data repair for modified tables will fail. In the Tables tab, I’ll make sure all tables are selected. I will uncheck the Show operations on dropped tables option, to reduce the number of transactions. Click Next. ApexSQL Log offers three options.
    Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions and in a critical situation, when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be directly scripted into a redo file, without showing them in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script against the restored database, using an integrated development environment tool such as SQL Server Management Studio or any other. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered or restored, or transactions rolled back. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Split big Apache log to folder structure

    - by Dough
    I just changed my Apache log behavior because it was leaving me with very BIG files... So I now use cronolog to split my logs into log/httpd/2012/11/access_2012.11.30.log for example, pattern: %Y/%m/access_%Y.%m.%d.log I now want to split my old 42GB file into the same structure but really don't know how to do that efficiently. I tried some simple commands with cat, egrep, awk... but really don't know how to handle all that in a more powerful script. Here is what the log looks like: x.x.237.134 - - [08/Apr/2011:14:43:09 +0200] "GET... x.x.50.15 - - [08/Apr/2011:14:43:09 +0200] "GET... [...] x.x.254.19 - - [28/Feb/2012:15:24:48 +0100] "GET... So for each line I need to get: the year %Y (e.g. 2012), the month %m (e.g. 11), and the day %d, and then push the entire line out to: %Y/%m/access_%Y.%m.%d.log Can someone give me clues to get that working? Thanks a lot for your interest.
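
    A hedged Python sketch of that split, assuming every line carries the [dd/Mon/yyyy:...] timestamp in the position shown above (the input file name is a placeholder):

        import os, re

        MONTHS = {"Jan": "01", "Feb": "02", "Mar": "03", "Apr": "04", "May": "05", "Jun": "06",
                  "Jul": "07", "Aug": "08", "Sep": "09", "Oct": "10", "Nov": "11", "Dec": "12"}
        stamp = re.compile(r"\[(\d{2})/([A-Za-z]{3})/(\d{4}):")
        handles = {}   # one open handle per target file (roughly one per day of logs)

        with open("old_access.log") as big:          # the old 42GB file
            for line in big:
                m = stamp.search(line)
                if not m:
                    continue                         # skip lines without a timestamp
                day, mon, year = m.group(1), MONTHS[m.group(2)], m.group(3)
                target = "%s/%s/access_%s.%s.%s.log" % (year, mon, year, mon, day)
                if target not in handles:
                    os.makedirs(os.path.dirname(target), exist_ok=True)
                    handles[target] = open(target, "a")
                handles[target].write(line)

        for f in handles.values():
            f.close()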

    Read the article

  • Log backups "stalling" on SQL 2008?

    - by MattK
    I have inherited a box running SQL Server 2008 and Windows 2003, and have had a few events where largish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35GB (on a DB with about 17GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, thus the source database files and destination backup files are on the same volume. However, I cannot determine if another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
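
    While a backup sits in that state, one thing worth capturing over time is what sys.dm_exec_requests reports for the backup session; a rough Python/pyodbc sketch (the driver name and server are assumptions for your environment):

        import pyodbc

        # Placeholder connection details - adjust driver/server/credentials
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes")
        cur = conn.cursor()
        cur.execute("""
            SELECT session_id, command, status, wait_type, wait_time, percent_complete
            FROM sys.dm_exec_requests
            WHERE command LIKE 'BACKUP%'
        """)
        for row in cur.fetchall():
            # A healthy backup shows percent_complete moving; a stalled one does not
            print(row)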

    Read the article

  • Polynomial fitting with log log plot

    - by viral parekh
    I have a simple problem: fitting a straight line on a log-log scale. My code is, data=loadtxt(filename) xdata=data[:,0] ydata=data[:,1] polycoeffs = scipy.polyfit(xdata, ydata, 1) yfit = scipy.polyval(polycoeffs, xdata) pylab.plot(xdata, ydata, 'k.') pylab.plot(xdata, yfit, 'r-') Now I need to plot the fit line on a log scale, so I just change the x and y axes, ax.set_yscale('log') ax.set_xscale('log') but then it's not plotting the correct fit line. So how can I change the fit function (to log scale) so that it plots a proper fit line on a log-log scale? Thanks -Viral
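
    One way to get a straight fit line on log-log axes is to fit in log space and transform back; a sketch along the lines of the code above (same two-column file assumed, all values assumed positive, numpy's polyfit used in place of scipy's):

        import numpy as np
        import pylab

        data = np.loadtxt("data.txt")       # same two-column file as above
        xdata, ydata = data[:, 0], data[:, 1]

        # Fit a straight line to log10(y) vs log10(x), i.e. y ~ 10**b * x**a
        logx, logy = np.log10(xdata), np.log10(ydata)
        coeffs = np.polyfit(logx, logy, 1)
        yfit = 10 ** np.polyval(coeffs, logx)

        ax = pylab.gca()
        ax.set_xscale('log')
        ax.set_yscale('log')
        pylab.plot(xdata, ydata, 'k.')
        pylab.plot(xdata, yfit, 'r-')
        pylab.show()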

    Read the article

  • awstats parse of postfix mail log drops all records

    - by accidental admin
    I'm trying to get awstats to parse the postfix mail log, but it drops almost all entries with messages like: Corrupted record (date 20091204042837 lower than 20091211065829-20000): 2009-12-04 04:28:37 root root localhost 127.0.0.1 SMTP - 1 17480 A few more are dropped with an invalid LogFormat: Corrupted record line 24 (record format does not match LogFormat parameter): 2009-11-16 04: 28:22 root root localhost 127.0.0.1 SMTP - 14755 My conf LogFormat="%time2 %email %email_r %host %host_r %method %url %code %bytesd" I believe matches the log format (and besides, it is the log format I've seen everywhere for awstats mail parsing). Besides, it is the same entry format as all the other entries in the mail log. Whatever is left is dropped too: Dropped record (host localhost and 127.0.0.1 not qualified by SkipHosts): 2009-12-07 04:28:36 root root localhost 127.0.0.1 SMTP - 1 17152 I added SkipHosts="" to the .conf file but to no avail. I feel like awstats really has some personal quarrel with me today.

    Read the article

  • SQL transaction log backups conflicting with full backups?

    - by BradC
    On our SQL servers (2000, 2005, and 2008), we run full backups once a day in the evening, and transaction log backups every 2 hrs. We haven't really worried about these two processes conflicting, but lately we've run into some of the following issues: On one server, the trans log backup occasionally blocks the full backup, and must be manually stopped before the full backup can complete. We sometimes end up with a massively-sized trans log backup file (sometimes larger than the full backup!) that seems to occur at the same time the full backup is running. I found a reference that indicates that these are "not allowed" to run at the same time, whatever that means: SQL 2000 Books Online and SQL 2005 Books Online. I'm not sure whether that means that the server will simply prevent them from running simultaneously, or if we ought to be explicitly stopping the log backups while the full backups are running. So are there known conflicts/issues between these? Does the answer differ between SQL versions? Should I have the trans log backup job check to see if the full backup is running before it executes? (and how do I do that...?)
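
    For the last question, one rough way to have the log backup job check first is to look at sys.dm_exec_requests and bail out if a full backup is in flight; a hedged Python/pyodbc sketch (driver, server, and the surrounding job logic are placeholders, and the same query could just as well live in the job step itself):

        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes")
        cur = conn.cursor()

        # Full and differential backups show up as 'BACKUP DATABASE'; log backups as 'BACKUP LOG'
        cur.execute("SELECT COUNT(*) FROM sys.dm_exec_requests "
                    "WHERE command = 'BACKUP DATABASE'")
        if cur.fetchone()[0] > 0:
            print("Full backup in progress - skipping this transaction log backup run")
        else:
            print("No full backup running - safe to start the log backup")
            # ...kick off the normal transaction log backup step here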

    Read the article

  • Get sessions' remote IP from Teamviewer log file

    - by etuardu
    I'd like to know who has logged in to my machine and when. I have two TeamViewer log files: Connections_incoming.txt and TeamViewer7_Logfile.log. The first one is quite plain and lists, as its name says, the incoming connections to the machine, reporting the local name of the remote host, login time, logout time, and some ids. e.g.: 173274362 MYLAPTOP 20-02-2012 17:32:16 20-02-2012 17:50:42 Master RemoteControl {C5AAE483-ED0B-54B8-9235-7AE597CAD342} This is almost all I need, but unfortunately no remote IP address is reported here, so I checked for IPs in TeamViewer7_Logfile.log but it is really messy. It does contain some IP addresses, but I can't tell which ones are tied to the entries in the first log file. Is there a way to correlate the two logs to get what I need? Should I search the second file for some particular text? What do you suggest?
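
    A rough Python sketch of one way to cross-reference the two files: pull the connection IDs out of Connections_incoming.txt, then scan TeamViewer7_Logfile.log for lines mentioning them and collect anything that looks like an IPv4 address. This assumes the main log actually repeats those IDs near the network details, which may not hold for every TeamViewer version:

        import re

        ip_pattern = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

        # First column of Connections_incoming.txt is assumed to be the remote TeamViewer ID
        with open("Connections_incoming.txt") as f:
            session_ids = [line.split()[0] for line in f if line.strip()]

        hits = {}
        with open("TeamViewer7_Logfile.log", errors="ignore") as log:
            for line in log:
                for sid in session_ids:
                    if sid in line:
                        hits.setdefault(sid, set()).update(ip_pattern.findall(line))

        for sid, ips in hits.items():
            print(sid, "->", ", ".join(sorted(ips)) or "no IPs on matching lines")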

    Read the article

  • IIS 7.5 log to: sql server vs file

    - by stacker
    I want to know whether having IIS log directly to SQL Server is resource costly, or whether a better solution might be to generate log files and import them into SQL Server every hour. Is it a VERY big cost to log each request directly to SQL Server? The pages open a connection to the database for each request anyway.

    Read the article

  • Creating a custom view for windows log based on a "Contains {text}" rule

    - by jussinen
    I have a server running Windows Server 2008. I'm using Windows Server auditing to check when and by which user a folder is modified, to determine who is modifying it, as the modifications are causing problems. I can see the audit entries in the System log when a change is made. How do I create a Custom View that will return all events from the System log where a certain text (the folder name) is present? The Create Custom View dialog doesn't seem to have that option. I'm not sure whether it's possible via a custom XML query or whether I'll need to export the System log to CSV and search in Excel. John

    Read the article

  • Event log message size 31885? Windows 2008

    - by testuser
    We recently upgraded our production boxes to Windows 2008 from Windows 2003 servers. Everything works fine except the event logging. We log at most 32000 bytes of data for each message. On the 2008 servers, event logging fails if the number of characters is greater than 31885. Is this a new limit on Windows 2008 R2 servers? Any help appreciated. On the Win 2003 servers, I am able to log 32000 bytes of data for each log entry.

    Read the article

  • Apache log lines contain "..."

    - by mtah
    We have a custom log line format for Apache logs which are analyzed. CustomLog "|/usr/sbin/rotatelogs -l /mnt/var/log/apache2/access-%Y%m%d%H%M%S.log 900" "%a %{%s}t \"%r\"" However, some log lines are mysteriously shortened with "..." for some reason, but how can this be? The shortest length line discovered where this occurs is 317 chars while the longest line is way over 2000 chars. "GET /exposure?sg=&ap=0x0&fv=WIN%2010,0,22,87&si=IH95VDUAVLJ0&pt=Lage%20hjemmelaget%20sengegavl%20-%20Forum%20-%20Diskusjon.no&iv=0&sd=1024x600&ct=680&tz=-120&eu=http%3A//www.diskusjon.no/index.php%3Fshowtopic%3D1011139&l...AS3&an=NO%20-%20180x500%20Pretail%20CPC&wd=1024x483&rf=http%3A//www.google.no/search%3Fhl%3Dno%26source%3Dhp%26q%3Dsengegavl+lage%26meta%3D%26aq%3D2%26aqi%3Dg10%26aql%3D%26oq%3Dsengega%26gs_rfai%3D&ui=3INYF5QAZL10&ws=0x417&ad=180x500&sa= HTTP/1.1"

    Read the article

  • Reading log files from web application

    - by Egorinsk
    Hi! I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem here is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume that log files can be really large, like hundreds of megabytes. I have some ideas:
    Write a shell script that would be run via sudo and tail the last 512 KB of the log into a separate file that can be read by the application - that's inefficient, because it forks a new process and the data has to be read twice (a seek-based version of this is sketched below).
    Add www-data to the adm group (which can read logs) - that's insecure.
    Start a PHP process via cron every minute to read the logs - that's not very good, because it doesn't allow real-time monitoring. Also, this script will run even when I'm not reading logs and consume CPU time (the server is in the cloud, and I'll have to pay for it).
    Create a hardlink to all log files with lowered permissions - I guess that won't work, because logrotate could recreate the log files and their inode numbers would change.
    Start a separate nginx/Apache server under a privileged user that may read the logs.
    Does anyone have a better solution?
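
    A minimal Python sketch of that seek-based variant, assuming the helper that sudo runs is handed the log path (the path and window size here are just the numbers from the first idea):

        import os

        LOG_PATH = "/var/log/syslog"     # example log passed in by the wrapper
        WINDOW = 512 * 1024              # last 512 KB, as in the first idea

        with open(LOG_PATH, "rb") as log:
            size = os.fstat(log.fileno()).st_size
            log.seek(max(0, size - WINDOW))      # jump straight to the tail
            chunk = log.read()

        # Drop the first partial line so the output starts on a line boundary
        print(chunk.split(b"\n", 1)[-1].decode("utf-8", "replace"))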

    Read the article

  • Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied

    - by matt_tm
    Our PHP application is installed as 'root' on a Red Hat 5/CentOS system at /var/www/html/beta/. After disabling SELinux in order to allow these scripts to execute other programs on the system - http://serverfault.com/questions/192951/what-permissions-are-needed-to-run-a-system-command-within-a-php-script-that-wr - I faced an error; the Apache error_log showed this: Cannot write log file 'ffmpeg2pass-0.log' for pass-1 encoding: Permission denied
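
    ffmpeg writes the pass-1 stats file to the current working directory by default, so one workaround sketch is to run it from a directory the Apache user can write to and point -passlogfile there explicitly (a standard ffmpeg option); the paths, codec, and bitrate below are placeholders:

        import os, subprocess, tempfile

        src = "/var/www/html/beta/uploads/in.avi"    # placeholder input
        dst = "/var/www/html/beta/uploads/out.mp4"   # placeholder output
        workdir = tempfile.gettempdir()              # somewhere www-data can write
        passlog = os.path.join(workdir, "ffmpeg2pass")

        common = ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-b:v", "1M",
                  "-passlogfile", passlog]

        # Pass 1: discard the output, only the stats file matters
        subprocess.check_call(common + ["-pass", "1", "-an", "-f", "null", os.devnull],
                              cwd=workdir)
        # Pass 2: produce the real output
        subprocess.check_call(common + ["-pass", "2", dst], cwd=workdir)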

    Read the article

  • Boost.Log - Multiple processes to one log file?

    - by Kevin
    Reading through the doc for Boost.Log, it explains how to "fan out" into multiple files/sinks pretty well from one application, and how to get multiple threads working together to log to one place, but is there any documentation on how to get multiple processes logging to a single log file? What I imagine is that every process would log to its own "private" log file, but in addition, any messages above a certain severity would also go to a "common" log file. Is this possible with Boost.Log? Is there some configuration of the sinks that makes this easy? I understand that I will likely have the same "timestamp out of order" problem described in the FAQ here, but that's OK, as long as the timestamps are correct I can work with that. This is all on one machine, so no remote filesystem problems either.

    Read the article
