Search Results

Search found 5019 results on 201 pages for 'jakarta commons logging'.

Page 13/201 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • What is good third-party SMTP software for custom logging?

    - by Lorcan O'Neill
    I am currently using IIS 6 SMTP to send out some of our mail, and unfortunately I am not finding the logging it performs to be verbose enough, nor am I able to customise it to the level I would find verbose enough. I'm wondering if anyone can recommend a piece of third-party SMTP software that would allow the logging to be customised to the point where I can inject my own columns, or run regular expressions over the outgoing message data or headers.

    Read the article

  • Why do I get empty request from the Jakarta Commons HttpClient?

    - by polyurethan
    I have a problem with the Jakarta Commons HttpClient. Before my self-written HttpServer receives the real request, it receives one request that is completely empty. That's the first problem. The second problem is that the request data sometimes ends after the third or fourth line of the HTTP request:

        POST / HTTP/1.1
        User-Agent: Jakarta Commons-HttpClient/3.1
        Host: 127.0.0.1:4232

    For debugging I am using the Axis TCPMonitor. There everything is fine except for the empty request. This is how I process the stream:

        StringBuffer requestBuffer = new StringBuffer();
        InputStreamReader is = new InputStreamReader(socket.getInputStream(), "UTF-8");
        int byteIn = -1;
        do {
            byteIn = is.read();
            if (byteIn > 0) {
                requestBuffer.append((char) byteIn);
            }
        } while (byteIn != -1 && is.ready());
        String requestData = requestBuffer.toString();

    And this is how I send the request:

        client.getParams().setSoTimeout(30000);
        method = new PostMethod(url.getPath());
        method.getParams().setContentCharset("utf-8");
        method.setRequestHeader("Content-Type", "application/xml; charset=utf-8");
        method.addRequestHeader("Connection", "close");
        method.setFollowRedirects(false);
        byte[] requestXml = getRequestXml();
        method.setRequestEntity(new InputStreamRequestEntity(new ByteArrayInputStream(requestXml)));
        client.executeMethod(method);
        int statusCode = method.getStatusCode();

    Does anyone have an idea how to solve these problems? Alex
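
    On the truncation, a hedged observation: InputStreamReader.ready() only reports whether the next read is guaranteed not to block, so it goes false whenever the client pauses mid-request, and the loop above then exits early. A minimal sketch that instead reads to the end of the headers and then exactly Content-Length body bytes (the class name is made up and the parsing is deliberately simplified):

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.net.Socket;

        class RequestReader {
            // Sketch: read headers up to the blank line, then Content-Length bytes of body.
            static String readRequest(Socket socket) throws IOException {
                InputStream in = socket.getInputStream();
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                int b;
                while ((b = in.read()) != -1) {           // block until data or EOF, never ready()
                    buf.write(b);
                    byte[] a = buf.toByteArray();
                    int n = a.length;
                    if (n >= 4 && a[n - 4] == '\r' && a[n - 3] == '\n'
                               && a[n - 2] == '\r' && a[n - 1] == '\n') {
                        break;                            // end of headers
                    }
                }
                int len = 0;                              // naive Content-Length scan
                for (String line : buf.toString("UTF-8").split("\r\n")) {
                    if (line.toLowerCase().startsWith("content-length:")) {
                        len = Integer.parseInt(line.substring(15).trim());
                    }
                }
                for (int i = 0; i < len && (b = in.read()) != -1; i++) {
                    buf.write(b);                         // body bytes
                }
                return buf.toString("UTF-8");
            }
        }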

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (GoogleLog, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end. Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess), and I have good reasons for looking for a logging library (for example, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:

        - performance
        - ease of use (allows streaming or formatting or something like that)
        - reliability (doesn't leak or crash!)
        - cross-platform support (at least Windows, MacOSX, Linux/Ubuntu)

    Which logging library would you recommend? Currently, I think that boost::log is the most flexible one (you can even log remotely!), but I'm unsure of its performance (update: it aims for high performance, but isn't released yet). Pantheios is often cited, but I don't have comparison points on its performance and usage. I've used my own lib for a long time, but I know it doesn't handle multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting and I just need to test it, but if you have already compared these libs and more, your advice might be of good use. Games are often performance-demanding and complex to debug, so it would be good to know which logging libraries have clear advantages in our specific case.

    Read the article

  • ADF Logging In Deployed Apps

    - by Duncan Mills
    Harking back to my series on using the ADF logger and the related ADF Insider video, I've had a couple of queries this week about using the logger from Enterprise Manager (EM). I've alluded in those previous materials to how EM can be used, but it's evident that folks need a little help. So in this article I'll quickly look at how you can switch logging on from the EM console for an application, and how you can view the output. Before we start, I'm assuming that you have EM up and running; in my case I have a small test install of Fusion Middleware Patchset 5 with an ADF application deployed to a managed server.

    Step 1 - Select your Application. In the EM navigator, select the app you're interested in. At this point you could bring up the context (right mouse click) menu to jump to the logging, but let's do it another way.

    Step 2 - Open the Application Deployment Menu. At the top of the screen, underneath the application name, you'll find a drop-down menu which will take you to the options to view log messages and configure logging.

    Step 3 - Set your Logging Levels. Just like the log configuration within JDeveloper, we can set up transient or permanent (not recommended!) loggers here. In this case I've filtered the class list down to just oracle.demo, and set the log level to config. You can now go away and do stuff in the app to generate log entries.

    Step 4 - View the Output. Again from the Application Deployment menu we can jump to the log viewer screen and, as I have here, start to filter down the logging output to the stuff you're interested in. In this case I've filtered by module name. You'll notice here that you can again look at related log messages. Importantly, you'll also see the name of the log file that holds this message, so if you'd rather analyse the log in more detail offline, through the ODL log analyser in JDeveloper, you can see which log to download.
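
    For context, here is the sort of application code these steps light up. The series itself uses ADFLogger; the sketch below uses the equivalent plain java.util.logging API under the same logger name, and since ADFLogger wraps java.util.logging, the EM level change should govern it the same way (class and method names are illustrative):

        import java.util.logging.Level;
        import java.util.logging.Logger;

        public class DemoService {
            // Logger under the oracle.demo namespace filtered in Step 3.
            private static final Logger LOG = Logger.getLogger("oracle.demo.DemoService");

            public void doStuff() {
                if (LOG.isLoggable(Level.CONFIG)) {   // cheap guard
                    LOG.config("doStuff() invoked");  // visible once oracle.demo is at CONFIG
                }
            }
        }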

    Read the article

  • GlassFish 3: how do you change the (default) logging format?

    - by Kawu
    The question originated from here: http://www.java.net/forum/topic/glassfish/glassfish/configuring-glassfish-logging-format - without an answer. The default GlassFish 3 logging format is very annoying and much too long:

        [#|2012-03-02T09:22:03.165+0100|SEVERE|glassfish3.1.2|javax.enterprise.system.std.com.sun.enterprise.server.logging|_ThreadID=113;_ThreadName=AWT-EventQueue-0;| MESSAGE... ]

    This is just a horrible default IMO. The docs just explain all the fields, but not how to change the format: http://docs.oracle.com/cd/E18930_01/html/821-2416/abluk.html Note that I deploy SLF4J along with my webapp, which should pick up the format as well. How do you change the logging format? FYI: the links here are outdated: Install log formater in glassfish... The question here hasn't been answered: How to configure GlassFish logging to show milliseconds in timestamps?... The posting here resulted in nothing: http://www.java.net/forum/topic/glassfish/glassfish/cant-seem-configure-... It looks like GlassFish logging configuration is an issue of its own. Can anybody help?
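
    One hedged avenue, since GlassFish 3 logging is built on java.util.logging: supply a custom JUL Formatter and wire it to GlassFish's file handler in logging.properties. The exact handler property name is version-specific, so treat the wiring as an assumption to verify; the formatter itself is standard JUL. A minimal single-line formatter:

        import java.text.SimpleDateFormat;
        import java.util.Date;
        import java.util.logging.Formatter;
        import java.util.logging.LogRecord;

        public class TerseFormatter extends Formatter {
            private final SimpleDateFormat stamp = new SimpleDateFormat("HH:mm:ss.SSS");

            @Override
            public synchronized String format(LogRecord r) { // SimpleDateFormat is not thread-safe
                return stamp.format(new Date(r.getMillis())) + " " + r.getLevel() + " "
                        + r.getLoggerName() + " - " + formatMessage(r) + "\n";
            }
        }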

    Read the article

  • What is the likely cause of the Android runtime exception "No suitable Log implementation" related to logging?

    - by M.Bearden
    I am creating an Android app that includes a third-party jar. That third-party jar utilizes internal logging that is failing to initialize when I run the app, with this error: "org.apache.commons.logging.LogConfigurationException: No suitable Log implementation". The third-party jar appears to be using org.apache.commons.logging and to depend on log4j, specifically log4j-1.2.14.jar. I have packaged the log4j jar into the Android app. The third-party jar was packaged with a log4j.xml configuration file, which I have tried packaging into the app as an XML resource (and also as a raw resource). The "No suitable Log implementation" error message is not very descriptive, and I have no immediate familiarity with Java logging. So I am looking for likely causes of the problem (what class or configuration resources might I be missing?) or for some debugging technique that will result in a different error message that is more explicit about the problem. I do not have access to source code for the third-party jar. Here is the exception stack trace. When I run the app, I get the following exception as soon as one of the third-party jar classes attempts to initialize its internal logging:

        DEBUG/AndroidRuntime(15694): Shutting down VM
        WARN/dalvikvm(15694): threadid=3: thread exiting with uncaught exception (group=0x4001b180)
        ERROR/AndroidRuntime(15694): Uncaught handler: thread main exiting due to uncaught exception
        ERROR/AndroidRuntime(15694): java.lang.ExceptionInInitializerError
        ERROR/AndroidRuntime(15694): Caused by: org.apache.commons.logging.LogConfigurationException: No suitable Log implementation
        ERROR/AndroidRuntime(15694): at org.apache.commons.logging.impl.LogFactoryImpl.discoverLogImplementation(LogFactoryImpl.java:842)
        ERROR/AndroidRuntime(15694): at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:601)
        ERROR/AndroidRuntime(15694): at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:333)
        ERROR/AndroidRuntime(15694): at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:307)
        ERROR/AndroidRuntime(15694): at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:645)
        ERROR/AndroidRuntime(15694): at org.apache.commons.configuration.ConfigurationFactory.<clinit>(ConfigurationFactory.java:77)
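
    One hedged workaround, given that commons-logging is failing during its Log discovery: name the implementation explicitly before any third-party class touches LogFactory, so discovery never runs. SimpleLog ships inside commons-logging itself, so no Log4J wiring is needed (whether SimpleLog's System.err output is acceptable on Android is an assumption to verify):

        // Sketch: pin the commons-logging implementation in a custom Application
        // subclass, before the third-party jar initializes.
        public class App extends android.app.Application {
            @Override
            public void onCreate() {
                System.setProperty("org.apache.commons.logging.Log",
                        "org.apache.commons.logging.impl.SimpleLog");
                super.onCreate();
            }
        }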

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive.  Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase.  Transaction logging works by writing transactions to a log store.  The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.  The following information is recorded within a transaction log entry:

        - Sequence ID
        - Username
        - Start Time
        - End Time
        - Request Type

    A request type can be one of the following categories:

        - Calculations, including the default calculation as well as both server and client side calculations
        - Data loads, including data imports as well as data loaded using a load rule
        - Data clears as well as outline resets
        - Locking and sending data from SmartView and the Spreadsheet Add-In.  Changes from Planning web forms are also tracked since a lock and send operation occurs during this process.

    You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging

    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting.  The following is the TRANSACTIONLOGLOCATION syntax:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file.  For example:

        TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application.  As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance.

    Configuring Transaction Log Replay

    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions.  The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file.  Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations

    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar.  Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs (an illustrative MAXL statement appears at the end of this entry). The default when replaying transactions is to use the security settings of the user who originally performed the transaction.  However, if that user no longer exists or that user's username was changed, the replay operation will fail. Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation.  REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation.  REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files

    Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually.  Since these files can consume a considerable amount of space, the files should be removed on a periodic basis. The transaction logs should be removed one database at a time instead of all databases simultaneously.  The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest.  In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations

    For partitioned databases, partition commands such as synchronization commands cannot be replayed.  When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References

    For additional information, please see the Oracle EPM System Backup and Recovery Guide.  For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf
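
    As the illustrative MAXL statement promised above (a sketch only; Sample.Basic is the standard demo application, and the exact grammar and literal formats should be checked against the MAXL reference for your release):

        alter database Sample.Basic replay transactions using sequence_id_range 1 to 20;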

    Read the article

  • CentOS centralised logging, syslogd, rsyslog, syslog-ng, logstash sender?

    - by benbradley
    I'm trying to figure out the best way to set up a central place to store and interrogate server logs: syslog, Apache, MySQL etc. I've found a few different options but I'm not sure what would be best. I'm looking for something that is easy to install and keep updated on many virtual machines. I can add it to a VM template going forward, but I'd also like it to be easy to install to keep the VM complexity down. The options I've found so far are:

        - syslogd
        - syslog-ng
        - rsyslog
        - syslogd/syslog-ng/rsyslog to logstash/ElasticSearch
        - logstash agent in each log "client" to send to Redis/logstash/ElasticSearch

    And all sorts of permutations of the above. What's the most resilient and lightest option from the log "client" perspective? I'd like to avoid the situation where log "clients" hang because they are unable to send their logs to the logging server. I would also still like to keep local logging and the rotation/retention provided by logrotate in place. Any ideas/suggestions or reasons for or against any of the above? Or suggestions of a different structure entirely? Cheers, B
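
    Whichever collector ends up on the server side, the client half can stay plain rsyslog (the default on EL-family distros), which keeps local files and logrotate untouched and adds a single forwarding rule. The disk-assisted queue below is what stops a dead central server from blocking or losing client logs. A sketch using the legacy directive syntax, with a placeholder hostname:

        # /etc/rsyslog.d/forward.conf (sketch)
        $ActionQueueType LinkedList          # asynchronous, in-memory queue
        $ActionQueueFileName fwdq            # spill to disk when memory fills
        $ActionQueueMaxDiskSpace 100m
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1           # retry forever rather than discarding
        *.* @@logs.example.com:514           # @@ = TCP; a single @ would be UDP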

    Read the article

  • Could Python's logging SMTP Handler be freezing my thread for 2 minutes?

    - by Oddthinking
    A rather confusing sequence of events happened, according to my log file, and I am about to put a lot of the blame on the Python logger, which is a bold claim. I thought I should get some second opinions about whether what I am saying could be true. I am trying to explain why there are several large gaps in my log file (around two minutes at a time) during stressful periods for my application, when it is missing deadlines. I am using Python's logging module on a remote server, and have set up, with a configuration file, for all logs of severity ERROR or higher to be emailed to me. Typically, only one error will be sent at a time, but during periods of sustained problems, I might get a dozen in a minute - annoying, but nothing that should stress SMTP. I believe that, after a short spurt of such messages, the Python logging system (or perhaps the SMTP system it is sitting on) is encountering errors or congestion. The call to Python's log is then BLOCKING for two minutes, causing my thread to miss its deadlines. (I was smart enough to move the logging until after the critical path of the application - so I don't care if logging takes me a few seconds, but two minutes is far too long.) This seems like a rather awkward architecture (both for a logging system that can freeze up, and for an SMTP system (Ubuntu, sendmail) that cannot handle dozens of emails in a minute**), so this surprises me, but it exactly fits the symptoms. Has anyone had any experience with this? Can anyone describe how to stop it from blocking? ** EDIT: I actually counted. A little under 4000 short emails in two hours. So far more than I suggested, sorry. But enough to over-fill Sendmail's buffers?

    Read the article

  • How do I use django settings in my logging.ini file?

    - by slypete
    I have a BASE_DIR setting in my settings.py file:

        BASE_DIR = os.path.dirname(os.path.abspath(__file__))

    I need to use this variable in my logging.ini file to set up my file handler paths. The initialization of logging happens in the same file, settings.py, below my BASE_DIR variable:

        LOG_INIT_DONE = False
        if not LOG_INIT_DONE:
            logging.config.fileConfig(LOGGING_INI)
            LOG_INIT_DONE = True

    Thanks, Pete
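
    One route the stock library supports, sketched here: logging.config.fileConfig() accepts a defaults mapping whose entries become ConfigParser interpolation variables inside the ini file. Calling fileConfig(LOGGING_INI, defaults={'BASE_DIR': BASE_DIR}) makes %(BASE_DIR)s available to handler arguments, roughly like this (the handler name and log path below are made up):

        [handler_file]
        class=handlers.RotatingFileHandler
        formatter=default
        args=('%(BASE_DIR)s/logs/app.log', 'a', 10485760, 5)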

    Read the article

  • What are the latest options in Java logging frameworks?

    - by sanity
    This question gets asked periodically, but I've long felt that existing Java logging frameworks were overcomplicated and over-engineered, and I want to see what's new. I have a more critical issue on my current project, as we've standardized on JSON as our human-readable data encoding, and most logging frameworks I've seen require XML. I would really rather avoid using JSON for 95% of my app's configuration and XML for the rest just because of the logging framework (truth be told, I hate XML used for anything other than text markup, its original intended purpose). Are there any hot new Java logging frameworks that are actively maintained, reasonably powerful, have a Maven repo, can be reconfigured without restarting your app, and don't tie you to XML?

    Read the article

  • Problems using dual resolver

    - by user315228
    Hi, I am using a dual resolver and having a problem. The following is what I get when I run through ant in debug and verbose mode:

        [ivy:retrieve] resolved ivy file produced in c:\temp\ivy\[email protected]
        [ivy:retrieve] :: downloading artifacts ::
        [ivy:retrieve] [NOT REQUIRED] config#ego;4.3.1!ego.conf
        [ivy:retrieve] trying [http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] tried [http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] HTTP response status: 404 url=[http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] CLIENT ERROR: Not Found url=[http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] ibiblio: resource not reachable for axis2#axis2;working@commons-lang: res=[http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] WARN: [NOT FOUND ] axis2#axis2;working@commons-lang!axis2.jar (235ms)
        [ivy:retrieve] WARN: ==== commons-lang: tried
        [ivy:retrieve] WARN: ==== ibiblio: tried
        [ivy:retrieve] WARN: [http://repo1.maven.org/maven2/axis2/axis2/working@commons-lang/[email protected]]
        [ivy:retrieve] [NOT REQUIRED] axis#axis-saaj;1.4!axis-saaj.jar
        [ivy:retrieve] [NOT REQUIRED] axis#axis-wsdl4j;1.5.1!axis-wsdl4j.jar

    Can you please tell me what is wrong with my ivysettings file or with my ivy file? The following is an excerpt from ivysettings.xml:

        <dual name="dual4">
            <filesystem name="commons-lang">
                <ivy pattern="${localRepositoryLocation}/[module]/ivy/ivy.xml"/>
            </filesystem>
            <ibiblio name="ibiblio" m2compatible="true" usepoms="false"/>
        </dual>

    The problem may be that for each dependency I defined I have a separate ivy.xml, and just one resolver as above? For example, for axis2.jar I have two dependencies in another ivy.xml; the dependencies are axis-saaj and axis-wsdl4j. Please help. Thanks, Almas

    Read the article

  • Reuse remote ssh connections and reduce command/session logging verbosity?

    - by ewwhite
    I have a number of systems that rely on application-level mirroring to a secondary server. The secondary server pulls data by means of a series of remote SSH commands executed on the primary. The application is a bit of a black box, and I may not be able to make modifications to the scripts that are used. My issue is that the logging in /var/log/secure is absolutely flooded with requests from the service user, admin. These commands occur many times per second and have a corresponding impact on the logs. They rely on passphrase-less key exchange. The OSes involved are EL5 and EL6. An example is below. Is there any way to reduce the amount of logging from these actions (by user? by source?)? Is there a cleaner way for the developers to perform these SSH executions without spawning so many sessions? It seems inefficient. Can I reuse the existing connections? Example log output:

        Jul 24 19:08:54 Cantaloupe sshd[46367]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46446]: Accepted publickey for admin from 172.30.27.32 port 33526 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46446]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46475]: Accepted publickey for admin from 172.30.27.32 port 33527 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46475]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46504]: Accepted publickey for admin from 172.30.27.32 port 33528 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46504]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46583]: Accepted publickey for admin from 172.30.27.32 port 33529 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46583]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:54 Cantaloupe sshd[46612]: Accepted publickey for admin from 172.30.27.32 port 33530 ssh2
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:54 Cantaloupe sshd[46612]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46641]: Accepted publickey for admin from 172.30.27.32 port 33531 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46641]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46720]: Accepted publickey for admin from 172.30.27.32 port 33532 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46720]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46749]: Accepted publickey for admin from 172.30.27.32 port 33533 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46749]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46778]: Accepted publickey for admin from 172.30.27.32 port 33534 ssh2
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session opened for user admin by (uid=0)
        Jul 24 19:08:55 Cantaloupe sshd[46778]: pam_unix(sshd:session): session closed for user admin
        Jul 24 19:08:55 Cantaloupe sshd[46857]: Accepted publickey for admin from 172.30.27.32 port 33535 ssh2
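
    On the reuse question: OpenSSH connection multiplexing lets all those per-command sessions share one TCP connection and one authentication, which also collapses the per-command triplets in /var/log/secure to a single session. A sketch for the pulling host's ~/.ssh/config, with a placeholder hostname; note ControlPersist requires OpenSSH 5.6 or newer, so on stock EL5/EL6 clients you may instead need to hold a master connection open explicitly with ssh -M -N:

        # ~/.ssh/config on the secondary server (sketch)
        Host primary.example.com
            ControlMaster auto
            ControlPath ~/.ssh/cm-%r@%h:%p
            ControlPersist 5m    # OpenSSH 5.6+; keep the shared connection open 5 minutes

    Since the "Accepted publickey" lines are logged by sshd at INFO, lowering LogLevel in sshd_config would also quiet them, at the cost of losing legitimate login records.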

    Read the article

  • Concurrency pattern of logger in multithreaded application

    - by Dipan Mehta
    The context: we are working on a multi-threaded (Linux, C) application that follows a pipeline model. Each module has a private thread and encapsulated objects which do processing of data, and each stage has a standard form of exchanging data with the next unit. The application is free of memory leaks and is thread-safe, using locks at the points where data is exchanged. The total number of threads is about 15, and each thread can have from 1 to 4 objects, making about 25-30 odd objects which all have some critical logging to do. Most discussion I have seen is about different log levels, as in Log4J and its translations. The really big question is how the overall logging should happen. One approach is that all local logging does fprintf to stderr, and stderr is redirected to some file. This approach is very bad when logs become too big. If all objects instantiate their individual loggers (about 30-40 of them), there will be too many files, and unlike above, one loses the true order of events. Timestamping is one possibility, but it is still a mess to collate. If there is a single global logger (singleton pattern), it indirectly blocks so many threads while one is busy writing logs. This is unacceptable when the threads' processing load is heavy. So what should be the ideal way to structure the logging objects? What are some of the best practices in actual large-scale applications? I would also love to learn from some of the real designs of large-scale applications to get inspiration!
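
    To make the trade-off concrete: the usual escape from the blocking singleton is a bounded queue with a single writer thread, so producers pay only for an enqueue, ordering stays global, and one file comes out the other end. A minimal sketch, in Java for brevity since the original is C, where a pthread ring buffer plays the same role (this version drops on overflow rather than stalling the pipeline):

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class AsyncLogger {
            private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(8192);

            public AsyncLogger() {
                Thread writer = new Thread(() -> {
                    try {
                        while (true) {
                            System.out.println(queue.take()); // real impl: buffered file writer
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }, "log-writer");
                writer.setDaemon(true);
                writer.start();
            }

            public void log(String module, String msg) {
                // Timestamp at enqueue so ordering reflects event time, not write time.
                queue.offer(System.nanoTime() + " [" + module + "] " + msg); // drops if full
            }
        }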

    Read the article

  • Logging connection strings

    If you use some of the dynamic features of SSIS, such as package configurations or property expressions, then trying to work out where your connections are pointing can sometimes be a bit confusing. You will work it out in the end, but it can be useful to explicitly log this information so that when things go wrong you can just review the logs. You may wish to develop this idea further and encapsulate such logging into a custom task, but for now let's keep it simple and use the Script Task. The Script Task code below will raise an Information event showing the name and connection string for a connection.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean
                ' Get the connection string, we need to know the name of the connection
                Dim connectionName As String = "My OLE-DB Connection"
                Dim connectionString As String = Dts.Connections(connectionName).ConnectionString
                ' Format the message and log it via an information event
                Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                    connectionName, connectionString)
                Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    Building on that example, it is probably more flexible to log all connections in a package, as shown in the next example.

        Imports System
        Imports Microsoft.SqlServer.Dts.Runtime

        Public Class ScriptMain
            Public Sub Main()
                Dim fireAgain As Boolean
                ' Loop through all connections in the package
                For Each connection As ConnectionManager In Dts.Connections
                    ' Get the connection string and log it via an information event
                    Dim message As String = String.Format("Connection ""{0}"" has a connection string of ""{1}"".", _
                        connection.Name, connection.ConnectionString)
                    Dts.Events.FireInformation(0, "Information", message, Nothing, 0, fireAgain)
                Next
                Dts.TaskResult = Dts.Results.Success
            End Sub
        End Class

    By using the Information event, the output is readily available in the designer, for example in the Visual Studio Output window (Ctrl+Alt+O) or the package designer Execution Results tab, and it also allows you to readily control the logging by choosing which events to log in the normal way. Now before somebody starts commenting that this is a security risk, I would like to highlight good practice for building connection managers. Firstly, the Password property, or any other similar sensitive property, is always defined as write-only; and secondly, the connection string property only uses the public properties to assemble the connection string value when requested. In other words, the connection string will never contain the password. I have seen a couple of cases where this is not true, but that was just bad development by third parties; you won't find anything like that in the box from Microsoft.

    Whilst writing this code it made me wish that there was a custom log entry that you could just turn on that did this for you, but alas connection managers do not even seem to support custom events. It did however remind me of a very useful event that is often overlooked and fits rather well alongside connection string logging: the Execute SQL Task's custom ExecuteSQLExecutingQuery event. To quote the help reference:

        Custom Messages for Logging - Provides information about the execution phases of the SQL statement. Log entries are written when the task acquires connection to the database, when the task starts to prepare the SQL statement, and after the execution of the SQL statement is completed. The log entry for the prepare phase includes the SQL statement that the task uses.

    It is the last part that is so useful: how often have you used an expression to derive a SQL statement and you want to log that to make sure the correct SQL is being returned? You need to turn it on, as by default no custom log events are captured, but I'll refer you to a walkthrough on setting up the logging for ExecuteSQLExecutingQuery by Jamie.

    Read the article

  • How can I avoid logging file not founds commonly caused by vulnerability scanners?

    - by agweber
    My Apache logs are pretty much full of 'admin.php' not found or unable to stat messages, and similar statements for wp-login.php, default.php, and so on, which are often sought after by vulnerability scanners. Can I configure Apache to avoid logging these statements for certain files? I don't want to filter out all file-not-founds, as I'd like to fix bad links that I may have put out over the years that no longer correspond to the same files. I can use a tool like fail2ban or denyhosts, but from previous experience the probes come from so many places that the errors will still pile up, and reducing those error messages is what this question is asking about.
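
    For the access log this is directly supported: tag the scanner probes with SetEnvIf and exclude tagged requests from CustomLog. The 'File does not exist' lines live in the ErrorLog, which has no comparable per-request filter, so this is a partial fix. A sketch, with an illustrative pattern to extend to taste:

        SetEnvIf Request_URI "^/(admin|default|wp-login)\.php$" dontlog
        CustomLog /var/log/httpd/access_log combined env=!dontlog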

    Read the article

  • javax.sql.DataSource.getConnection() locks system

    - by Ryan Elkins
    I'm using the Apache Commons DBCP library for connection pooling in a desktop application. I've done this before and never had a problem, but the latest application has started sometimes locking up on the call to getConnection() on my DataSource. The application just hangs after that call. I'm closing up my resources when I'm done with them. Is there any known reason why this might happen? I'm not even sure where to begin troubleshooting now that I've got it narrowed down to this method. It doesn't always hang - sometimes it happens fairly quickly, sometimes it takes a long time. Sometimes it doesn't happen at all, although lately I can get it to happen within a few minutes.
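
    One pattern worth ruling out first, hedged: with Commons DBCP 1.x, once maxActive connections are checked out, getConnection() blocks indefinitely by default, so a single code path that forgets to close a connection produces exactly this intermittent hang. A sketch that makes exhaustion fail fast and point at the leak (DBCP 1.x BasicDataSource API):

        import org.apache.commons.dbcp.BasicDataSource;

        public class PoolConfig {
            static BasicDataSource newPool(String url) {
                BasicDataSource ds = new BasicDataSource();
                ds.setUrl(url);                   // driver class, user, password elided
                ds.setMaxActive(8);               // cap on checked-out connections
                ds.setMaxWait(5000);              // ms: getConnection() now throws instead of hanging
                ds.setRemoveAbandoned(true);      // reclaim connections never closed
                ds.setRemoveAbandonedTimeout(60); // seconds idle before considered abandoned
                ds.setLogAbandoned(true);         // print the stack trace of the borrowing code
                return ds;
            }
        }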

    Read the article

  • Analyze log files from many languages using a single tool. And recommendations of logging frameworks

    - by Binary255
    We have a system built on lots of languages. The ones we are interested in logging, in order of priority, are:

        - C/C++
        - PHP
        - C#
        - Bash
        - Java

    Wish list:

        - If possible, we would like logging from the above languages to be achieved in such a way that we can use a single log-viewing tool for all of them. Ideally the logs would be in the same format; failing that, in as few formats as possible, readable by as many log file viewers as possible.
        - If possible, logging to a single log file, or a set of log files, would be nice, with the possibility to filter based on the source language being logged.
        - We would like to copy the log files (or should we log to a database and copy that instead?) from multiple servers to a single location, so that we can analyze the log files from many servers at the same time (to see if any of our servers execute a certain piece of legacy code, for example).
        - Being able to change the logging level at runtime would be nice.

    Thank you for reading! It's quite a complex problem; I hope someone has wrestled with it before and has some valuable information!

    Read the article
