Search Results

Search found 18758 results on 751 pages for 'log rotate'.


  • How do I programmatically determine which port a SQL Server is running on?

    - by Ralph Willgoss
    How do I programmatically determine which port a SQL Server is running on?

        /*
        Wrapper script for xp_readerrorlog
        Author: Ralph Willgoss
        Date: 2nd Oct 2012

        This script cycles through all log files, looking for the listening port.
        Normally you have to specify the log files one by one; the script removes the need for that.

        Param ref for xp_readerrorlog:
        1. Value of error log file you want to read: 0 = current, 1 = Archive #1, 2 = Archive #2, etc.
        2. Log file type: 1 or NULL = error log, 2 = SQL Agent log
        3. Search string 1: String one you want to search for
        4. Search string 2: String two you want to search for to further refine the results
        5. Search from start time
        6. Search to end time
        7. Sort order for results: N'asc' = ascending, N'desc' = descending
        */

        USE Master
        GO

        -- Get log count
        DECLARE @logcount int
        DROP TABLE #Result
        CREATE TABLE #Result (ArchiveNo int, Date datetime, Size int)
        INSERT INTO #Result
        EXEC xp_enumerrorlogs
        SET @logcount = (SELECT COUNT(*) FROM #Result)

        -- Search the available logs
        DECLARE @counter int
        SET @counter = 0
        WHILE @counter <= @logcount
        BEGIN
            EXEC xp_readerrorlog @counter, 1, N'Server is listening on', 'any', NULL, NULL, N'asc'
            SET @counter = @counter + 1
        END
        GO

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive. Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase. Transaction logging works by writing transactions to a log store. The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place.

    The following information is recorded within a transaction log entry:

    - Sequence ID
    - Username
    - Start Time
    - End Time
    - Request Type

    A request type can be one of the following categories:

    - Calculations, including the default calculation as well as both server and client side calculations
    - Data loads, including data imports as well as data loaded using a load rule
    - Data clears as well as outline resets
    - Locking and sending data from SmartView and the Spreadsheet Add-In. Changes from Planning web forms are also tracked since a lock and send operation occurs during this process.

    You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging

    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting. The following is the TRANSACTIONLOGLOCATION syntax:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file. For example:

        TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application. As a result, transaction logging will be enabled for all applications except the Sample application.

    A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance.

    Configuring Transaction Log Replay

    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions. The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads.

    To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file. Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations

    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar. Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs.

    The default when replaying transactions is to use the security settings of the user who originally performed the transaction. However, if that user no longer exists or that user's username was changed, the replay operation will fail. Instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation. REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation. REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files

    Transaction logs and archived replay data load and rules files are not automatically removed and are only removed manually. Since these files can consume a considerable amount of space, the files should be removed on a periodic basis.

    The transaction logs should be removed one database at a time instead of all databases simultaneously. The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest. In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations

    For partitioned databases, partition commands such as synchronization commands cannot be replayed. When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References

    For additional information, please see the Oracle EPM System Backup and Recovery Guide. For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf

    Read the article

  • Wise settings for Git

    - by Marko Apfel
    These settings reflect my Git environment. They are the result of reading, and trying out, several ideas and input from others.

    Must-Haves

    Aliases

        [alias]
            ci = commit
            st = status
            co = checkout
            oneline = log --pretty=oneline
            br = branch
            la = log --pretty=\"format:%ad %h (%an): %s\" --date=short
            df = diff
            dc = diff --cached
            lg = log -p
            lol = log --graph --decorate --pretty=oneline --abbrev-commit
            lola = log --graph --decorate --pretty=oneline --abbrev-commit --all
            ls = ls-files
            ign = ls-files -o -i --exclude-standard

    Colors

        [color]
            ui = auto
        [color "branch"]
            current = yellow reverse
            local = yellow
            remote = green
        [color "diff"]
            meta = yellow bold
            frag = magenta bold
            old = red bold
            new = green bold
            whitespace = red reverse
        [color "status"]
            added = green
            changed = red
            untracked = cyan

    Core

        [core]
            autocrlf = true
            excludesfile = c:/Users/<user>/.gitignore
            editor = 'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin

    Nice to have

    Merge and Diff

        [merge]
            tool = kdiff3
        [mergetool "kdiff3"]
            path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [mergetool "p4merge"]
            path = c:/Program Files (x86)/Perforce Merge/p4merge.exe
            cmd = p4merge \"$BASE\" \"$LOCAL\" \"$REMOTE\" \"$MERGED\"
            keepTemporaries = false
            trustExitCode = false
            keepBackup = false
        [diff]
            guitool = kdiff3
        [difftool "kdiff3"]
            path = c:/Program Files (x86)/KDiff3/kdiff3.exe
        [difftool "p4merge"]
            path = C:/Users/<user>/My Applications/Perforce Merge/p4merge.exe
            cmd = \"p4merge.exe $LOCAL $REMOTE\"

    Read the article

  • What is the runtime of this program

    - by grebwerd
    I was wondering what the run time of this small program would be?

        #include <stdio.h>
        #include <stdlib.h>

        int main(int argc, char* argv[])
        {
            int i;
            int j;
            int inputSize;
            int sum = 0;

            if (argc == 1)
                inputSize = 16;
            else
                inputSize = atoi(argv[1]);

            for (i = 1; i <= inputSize; i++) {
                for (j = i; j < inputSize; j *= 2) {
                    printf("The value of sum is %d\n", ++sum);
                }
            }
        }

    I figured that the total would be

        \sum_{i=1}^{n} \left\lfloor \log n - \log (n-i) \right\rfloor = \; ?

    and that each term of the summation would be the floor of log(n) - log(n-i). Would the run time be n log n?
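    For reference, a rough way to bound the total number of inner-loop iterations, assuming the inner loop runs about log2(n/i) times for a given i (j doubles from i until it reaches n) and using Stirling's approximation for log(n!):

        \sum_{i=1}^{n} \log_2 \frac{n}{i}
          = n \log_2 n - \log_2 n!
          = n \log_2 n - \left( n \log_2 n - \frac{n}{\ln 2} + O(\log n) \right)
          = \Theta(n)

    Under that assumption, n log n is a safe upper bound, but the number of printf calls actually grows only linearly in n.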

    Read the article

  • run .profile function as cron job

    - by Don
    In the .profile file of the root user I have defined a function, e.g.

        function printDate() {
            date
        }

    I want to run this function every minute and append the output to cron.log. I tried adding the following crontab entry:

        * * * * * printDate > $HOME/cron.log 2>&1

    But it doesn't work. The cron.log file gets created, but it's empty. I guess this is because the .profile isn't read by cron, so any functions/aliases defined therein are unavailable to it. So I tried changing the crontab entry to:

        * * * * * source $HOME/.profile;printDate >> $HOME/cron.log 2>&1

    But this doesn't work either. It seems cron still doesn't have access to the printDate function because I see the following in cron.log

        /bin/sh: printDate: not found

    Read the article

  • Error when trying to activate FGLRX graphic proprietary control ATI/AMD

    - by Gastón
    I'm new to Ubuntu and I still don't understand a lot of its features. As far as I know, I can install the ATI drivers through Additional Drivers. When I press 'Activate' on the FGLRX proprietary ATI/AMD graphics driver, it comes up with an error:

        Lo sentimos, la instalación de este controlador falló.
        Revise el archivo de registro para ver más detalles: /var/log/jockey.log

    English translation: Sorry, this driver installation failed. Check the log file for more details: /var/log/jockey.log

    The problem is that I'm trying to make Compiz work, and since I have no video card drivers I can't modify any of the interface features (like the 3D cube and that stuff).

    Read the article

  • nagios3 Error: Could not read object configuration data!

    - by user1493730
    I have a brand new install of nagios3 on ubuntu 12.04. After I log in to the web interface and click any link I get the error:

        Error: Could not read object configuration data!

        Here are some things you should check in order to resolve this error:
        Verify configuration options using the -v command-line option to check for errors.
        Check the Nagios log file for messages relating to startup or status data errors.

    I ran it with the -v option and it reported no errors:

        Total Warnings: 0
        Total Errors: 0
        Things look okay - No serious problems were detected during the pre-flight check

    The nagios log and apache error log and debug log all have nothing regarding this. Does anyone know how to turn on logging that will give me some kind of useful error? Or if anyone knows how to fix this specific problem without additional logging, I guess that's okay too. Thanks!

    Read the article

  • How do I allow mysqld to use more than 24.9% of my cpu?

    - by Joseph Yancey
    I have a Web server running on RHEL that is running Apache and MySQL. It has a Quad core 3.2Ghz Xeon CPU and 8 Gigs of RAM Most of the time, we don't have any issues at all. Our web application is very database intensive. When our usage gets pretty heavy MySQL will peg out at using 24.9% of the cpu. Most of the time, it hangs around below 5%. I have speculated that it is only using one core of the CPU and it is pegging out that core but TOP shows me in the cpu column that mysqld changes cores even while the usage stays at 24.9%. When it does this MySQL gets painfully slow as it is queuing up queries Is there some magic configuration that will tell mysql to use more cpu when it needs to? Also, any other advice on my configuration would be helpful. We run two applications on this server. One that runs Innodb but doesn't get much usage (it has been replaced by the other app), and one that runs MyIsam and gets lots of use. Overall, our whole mysql data directory is something like 13Gigs if that matters at all. Here is my config: [root@ProductionLinux root]# cat /etc/my.cnf [mysqld] server-id = 71 log-bin = /var/log/mysql/mysql-bin.log binlog-do-db = oldapplication binlog-do-db = newapplication binlog-do-db = support thread_cache_size = 30 key_buffer_size = 256M table_cache = 256 sort_buffer_size = 4M read_buffer_size = 1M skip-name-resolve innodb_data_home_dir = /usr/local/mysql/data/ innodb_data_file_path = InnoDB:100M:autoextend set-variable = innodb_buffer_pool_size=70M set-variable = innodb_additional_mem_pool_size=10M set-variable = max_connections=500 innodb_log_group_home_dir = /usr/local/mysql/data innodb_log_arch_dir = /usr/local/mysql/data set-variable = innodb_log_file_size=20M set-variable = innodb_log_buffer_size=8M innodb_flush_log_at_trx_commit = 1 log-queries-not-using-indexes log-error = /var/log/mysql/mysql-error.log mysql show variables; +---------------------------------+-----------------------------------------------------------------------------+ | Variable_name | Value | +---------------------------------+-----------------------------------------------------------------------------+ | auto_increment_increment | 1 | | auto_increment_offset | 1 | | automatic_sp_privileges | ON | | back_log | 50 | | basedir | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/ | | binlog_cache_size | 32768 | | bulk_insert_buffer_size | 8388608 | | character_set_client | latin1 | | character_set_connection | latin1 | | character_set_database | latin1 | | character_set_results | latin1 | | character_set_server | latin1 | | character_set_system | utf8 | | character_sets_dir | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/share/mysql/charsets/ | | collation_connection | latin1_swedish_ci | | collation_database | latin1_swedish_ci | | collation_server | latin1_swedish_ci | | completion_type | 0 | | concurrent_insert | 1 | | connect_timeout | 5 | | datadir | /usr/local/mysql/data/ | | date_format | %Y-%m-%d | | datetime_format | %Y-%m-%d %H:%i:%s | | default_week_format | 0 | | delay_key_write | ON | | delayed_insert_limit | 100 | | delayed_insert_timeout | 300 | | delayed_queue_size | 1000 | | div_precision_increment | 4 | | engine_condition_pushdown | OFF | | expire_logs_days | 0 | | flush | OFF | | flush_time | 0 | | | ft_max_word_len | 84 | | ft_min_word_len | 4 | | ft_query_expansion_limit | 20 | | ft_stopword_file | (built-in) | | group_concat_max_len | 1024 | | have_archive | YES | | have_bdb | NO | | have_blackhole_engine | NO | | have_compress | YES | | have_crypt | YES | 
| have_csv | NO | | have_example_engine | NO | | have_federated_engine | NO | | have_geometry | YES | | have_innodb | YES | | have_isam | NO | | have_ndbcluster | NO | | have_openssl | NO | | have_query_cache | YES | | have_raid | NO | | have_rtree_keys | YES | | have_symlink | YES | | init_connect | | | init_file | | | init_slave | | | innodb_additional_mem_pool_size | 10485760 | | innodb_autoextend_increment | 8 | | innodb_buffer_pool_awe_mem_mb | 0 | | innodb_buffer_pool_size | 73400320 | | innodb_checksums | ON | | innodb_commit_concurrency | 0 | | innodb_concurrency_tickets | 500 | | innodb_data_file_path | InnoDB:100M:autoextend | | innodb_data_home_dir | /usr/local/mysql/data/ | | innodb_doublewrite | ON | | innodb_fast_shutdown | 1 | | innodb_file_io_threads | 4 | | innodb_file_per_table | OFF | | innodb_flush_log_at_trx_commit | 1 | | innodb_flush_method | | | innodb_force_recovery | 0 | | innodb_lock_wait_timeout | 50 | | innodb_locks_unsafe_for_binlog | OFF | | innodb_log_arch_dir | /usr/local/mysql/data | | innodb_log_archive | OFF | | innodb_log_buffer_size | 8388608 | | innodb_log_file_size | 20971520 | | innodb_log_files_in_group | 2 | | innodb_log_group_home_dir | /usr/local/mysql/data | | innodb_max_dirty_pages_pct | 90 | | innodb_max_purge_lag | 0 | | innodb_mirrored_log_groups | 1 | | innodb_open_files | 300 | | innodb_support_xa | ON | | innodb_sync_spin_loops | 20 | | innodb_table_locks | ON | | innodb_thread_concurrency | 20 | | innodb_thread_sleep_delay | 10000 | | interactive_timeout | 28800 | | join_buffer_size | 131072 | | key_buffer_size | 268435456 | | key_cache_age_threshold | 300 | | key_cache_block_size | 1024 | | key_cache_division_limit | 100 | | language | /usr/local/mysql-standard-5.0.18-linux-x86_64-glibc23/share/mysql/english/ | | large_files_support | ON | | large_page_size | 0 | | large_pages | OFF | | license | GPL | | local_infile | ON | | locked_in_memory | OFF | | log | OFF | | log_bin | ON | | log_bin_trust_function_creators | OFF | | log_error | /var/log/mysql/mysql-error.log | | log_slave_updates | OFF | | log_slow_queries | OFF | | log_warnings | 1 | | long_query_time | 10 | | low_priority_updates | OFF | | lower_case_file_system | OFF | | lower_case_table_names | 0 | | max_allowed_packet | 1048576 | | max_binlog_cache_size | 18446744073709551615 | | max_binlog_size | 1073741824 | | max_connect_errors | 10 | | max_connections | 500 | | max_delayed_threads | 20 | | max_error_count | 64 | | max_heap_table_size | 16777216 | | max_insert_delayed_threads | 20 | | max_join_size | 18446744073709551615 | | max_length_for_sort_data | 1024 | | max_relay_log_size | 0 | | max_seeks_for_key | 18446744073709551615 | | max_sort_length | 1024 | | max_sp_recursion_depth | 0 | | max_tmp_tables | 32 | | max_user_connections | 0 | | max_write_lock_count | 18446744073709551615 | | multi_range_count | 256 | | myisam_data_pointer_size | 6 | | myisam_max_sort_file_size | 9223372036854775807 | | myisam_recover_options | OFF | | myisam_repair_threads | 1 | | myisam_sort_buffer_size | 8388608 | | myisam_stats_method | nulls_unequal | | net_buffer_length | 16384 | | net_read_timeout | 30 | | net_retry_count | 10 | | net_write_timeout | 60 | | new | OFF | | old_passwords | OFF | | open_files_limit | 2510 | | optimizer_prune_level | 1 | | optimizer_search_depth | 62 | | pid_file | /usr/local/mysql/data/ProductionLinux.pid | | port | 3306 | | preload_buffer_size | 32768 | | protocol_version | 10 | | query_alloc_block_size | 8192 | | query_cache_limit | 1048576 | | 
query_cache_min_res_unit | 4096 | | query_cache_size | 0 | | query_cache_type | ON | | query_cache_wlock_invalidate | OFF | | query_prealloc_size | 8192 | | range_alloc_block_size | 2048 | | read_buffer_size | 1044480 | | read_only | OFF | | read_rnd_buffer_size | 262144 | | relay_log_purge | ON | | relay_log_space_limit | 0 | | rpl_recovery_rank | 0 | | secure_auth | OFF | | server_id | 71 | | skip_external_locking | ON | | skip_networking | OFF | | skip_show_database | OFF | | slave_compressed_protocol | OFF | | slave_load_tmpdir | /tmp/ | | slave_net_timeout | 3600 | | slave_skip_errors | OFF | | slave_transaction_retries | 10 | | slow_launch_time | 2 | | socket | /tmp/mysql.sock | | sort_buffer_size | 4194296 | | sql_mode | | | sql_notes | ON | | sql_warnings | ON | | storage_engine | MyISAM | | sync_binlog | 0 | | sync_frm | ON | | sync_replication | 0 | | sync_replication_slave_id | 0 | | sync_replication_timeout | 10 | | system_time_zone | CST | | table_cache | 256 | | table_lock_wait_timeout | 50 | | table_type | MyISAM | | thread_cache_size | 30 | | thread_stack | 262144 | | time_format | %H:%i:%s | | time_zone | SYSTEM | | timed_mutexes | OFF | | tmp_table_size | 33554432 | | tmpdir | | | transaction_alloc_block_size | 8192 | | transaction_prealloc_size | 4096 | | tx_isolation | REPEATABLE-READ | | updatable_views_with_limit | YES | | version | 5.0.18-standard-log | | version_comment | MySQL Community Edition - Standard (GPL) | | version_compile_machine | x86_64 | | version_compile_os | unknown-linux-gnu | | wait_timeout | 28800 | +---------------------------------+-----------------------------------------------------------------------------+ 210 rows in set (0.00 sec)

    Read the article

  • From the Coalface - 2 - Work as hard as you can to be as lazy as you can

    - by TATWORTH
    On one project, the standard build involved building the database from scratch, producing a build log that was about 100,000 lines long. Manually paging through this log took hours, and it was very easy to miss errors.

    The solution was to write a filter that read the log file and output a summary file, with each output line prepended with its line number in the original file. Successive iterations eliminated more noise lines. Diagnostic displays required escaping. The final output was just a few hundred lines long, and the errors were easy to spot.

    The filter program took less time to write than one manual pass reading the log file. The utility digested the log file in about 1 second. Now database build errors could be identified quickly instead of after hours of manual examination or lengthy system testing. A few minutes' consideration of the task led to a tremendous improvement in quality.
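    A minimal sketch of such a filter in Python (the noise patterns and file names here are invented for illustration; the original utility was not published):

        import re
        import sys

        # Patterns treated as noise; in practice this list grows over successive iterations.
        NOISE = [re.compile(p) for p in (r"^\s*$", r"^Creating table ", r"^Copyright ")]

        def summarise(log_path, summary_path):
            """Copy non-noise lines, prefixing each with its line number in the original log."""
            with open(log_path) as src, open(summary_path, "w") as dst:
                for number, line in enumerate(src, start=1):
                    if any(p.search(line) for p in NOISE):
                        continue
                    dst.write("%d: %s" % (number, line))

        if __name__ == "__main__":
            summarise(sys.argv[1], sys.argv[2])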

    Read the article

  • BIND no longer responds to AXFR Requests

    - by djsumdog
    Recently we moved our primary external DNS server. It has three caching DNS slaves in front of it provided by our ISP. They've told us they've started getting access denied requests when doing zone transfers (AXFR). If I add in my own IPs to the allow-transfer list, I also get a transfer failed when using dig with the AXFR argument. Here is what my bind configuration looks like:

        options {
            directory "/var/lib/named";
            dump-file "/var/log/named_dump.db";
            zone-statistics yes;
            statistics-file "/var/log/named.stats";
            listen-on-v6 { any; };
            notify-source 10.19.0.68 port 53;
            querylog yes;
            notify yes;
            allow-transfer {
                127.0.0.1; //localhost
                1.1.1.1;   //public dns slave 1
                2.2.2.2;   //public dns slave 2
                3.3.3.3;   //public dns slave 3
            };
            also-notify {
                1.1.1.1;   //public dns slave 1
                2.2.2.2;   //public dns slave 2
                3.3.3.3;   //public dns slave 3
            };
            include "/etc/named.d/forwarders.conf";
        };

        logging {
            channel simple_log {
                file "/var/log/bind.log" versions 10 size 3m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { simple_log; };
            channel log_zone_transfers {
                file "/var/log/axfr.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category xfer-out { log_zone_transfers; };
            channel log_notify {
                file "/var/log/notify.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category notify { log_notify; };
            channel queries {
                file "/var/log/queries.log" versions 10 size 30m;
                print-time yes;
                severity info;
                print-category yes;
                print-severity yes;
            };
            category queries { queries; };
        };

        zone "." in {
            type hint;
            file "root.hint";
        };

        zone "localhost" in {
            type master;
            file "localhost.zone";
        };

        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "127.0.0.zone";
        };

        include "/etc/named.conf.include";

        zone "example.net " {
            type master;
            file "/var/lib/named/master/example.net.hosts";
        };

        zone "example.com " {
            type master;
            file "/var/lib/named/master/example.com.hosts";
        };

        ## -- other master files --

    And the errors in the xfer log look like the following:

        29-Oct-2012 14:20:02.806 xfer-out: info: client 1.1.1.1#59069: bad zone transfer request: 'example.com./IN': non-authoritative zone (NOTAUTH)

    I've tried adding allow-transfer parameters directly on the zone files and still get failed transfers. Any idea what I'm doing wrong?

    Read the article

  • GI ????

    - by Allen Gao
    Normal 0 7.8 ? 0 2 false false false MicrosoftInternetExplorer4 classid="clsid:38481807-CA0E-42D2-BF39-B33AF135CC4D" id=ieooui st1\:*{behavior:url(#ieooui) } /* Style Definitions */ table.MsoNormalTable {mso-style-name:????; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 0cm 5.4pt; mso-para-margin:0cm; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:10.0pt; font-family:"Times New Roman"; mso-ansi-language:#0400; mso-fareast-language:#0400; mso-bidi-language:#0400;} ??????????11gR2 GI ?????????,??????GI????????????????????? ????????GI???????3???,ohasd??,??????,??????? ??,ohasd??? 1. /etc/inittab?????? h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null ???,??????? root 4865 1 0 Dec02 ? 00:01:01 /bin/sh /etc/init.d/init.ohasd run ??????????????,???? +init.ohasd ????????? + os????????? + ??S* ohasd????, ??S96ohasd + GI????????(crsctl enable crs) ??,ohasd.bin ??????,????OLR????,??,??ohasd.bin??????,?????OLR??????????????OLR???$GRID_HOME/cdata/${HOSTNAME}.olr 2. ohasd.bin????????agents(orarootagent, oraagent, cssdagnet ? cssdmonitor) ???????????????,?????agent??????,??????????$GRID_HOME/bin ???????????,??,?????????,??corruption. ???,??????? 1. Mdnsd ??????(Multicast)???????????????????,??????????????????????????? 2. Gpnpd ????,??????????bootstrap ??,??????????????gpnp profile???,?????mdnsd??????,???????????,?????????????,??gpnp profile (<gi_home>/gpnp/profiles/peer/profile.xml)?????????? 3. Gipcd ????,????????????????(cluster interconnect)?????,???????gpnpd???,??,??????????,?????gpnpd ??????? 4. Ocssd.bin ?????????????gpnp profile?????????(Voting Disk),????gpnpd ??????????,?????????????,??ocssd.bin ??????,?????????? + gpnp profile ?????????? + gpnpd ??????? + ??????asm disk ??????????? + ??????????? 5. ??????????:ora.ctssd, ora.asm, ora.cluster_interconnect.haip, ora.crf, ora.crsd ?? ??:????????????????ocssd.bin, gpnpd.bin ? gipcd.bin ????,??gpnpd.bin????,ocssd.bin ? gipcd.bin ?????????,?gpnpd.bin????????,ocssd.bin ? gipcd.bin ????????gpnp profile?????????? ??,????????????,?????crsd????????? 1. Crsd?????????????OCR,????OCR????ASM?,???? ASM??????,??OCR???ASM??????????OCR???????,???????????????? 2. Crsd ?????agents(orarootagent, oraagent_<rdbms_owner>, oraagent_<gi_owner> )???agent????,??????????$GRID_HOME/bin ???????????,??,?????????,??corruption. 3. ????????  ora.net1.network : ????,?????????????,scanvip, vip, listener?????????????,??????????,vip, scanvip ?listener ??offline,?????????????? ora.<scan_name>.vip:scan???vip??,?????3?? ora.<node_name>.vip : ?????vip ?? ora.<listener_name>.lsnr: ???????????????,?11gR2??,listener.ora???????,????????? ora.LISTENER_SCAN<n>.lsnr: scan ????? ora.<????>.dg: ASM ????????????????mount???,dismount???? ora.<????>.db: ???????11gR2????????????,??????????rac ????????,??????????,???????“USR_ORA_INST_NAME@SERVERNAME(<node name> )”???????,??????????ASM???,???????????????????,??dependency?????????,??????????????????,???dependancy???????,??????(crsctl modify res ……)? ora.<???>.svc:?????????11gR2 ??,?????????,???10gR2??,???????????,srv ?cs ????? ora.cvu :?????11.2.0.2???,???????cluvfy??,???????????????? ora.ons : ONS??,????????,????? ??,?????GI??????????????????? $GRID_HOME/log/<node_name>/ocssd <== ocssd.bin ?? $GRID_HOME/log/<node_name>/gpnpd <== gpnpd.bin ?? $GRID_HOME/log/<node_name>/gipcd <== gipcd.bin ?? $GRID_HOME/log/<node_name>/agent/crsd <== crsd.bin ?? $GRID_HOME/log/<node_name>/agent/ohasd <== ohasd.bin ?? 
$GRID_HOME/log/<node_name>/mdnsd <== mdnsd.bin ?? $GRID_HOME/log/<node_name>/client <== ????GI ??(ocrdump, crsctl, ocrcheck, gpnptool??)??????????? $GRID_HOME/log/<node_name>/ctssd <== ctssd.bin ?? $GRID_HOME/log/<node_name>/crsd <== crsd.bin ?? $GRID_HOME/log/<node_name>/cvu <== cluvfy ????????? $GRID_HOME/bin/diagcollection.sh <== ????????????????? ??,????????(/var/tmp/.oracle ? /tmp/.oracle),??????????????????ipc???,??,?????????????????????,???GI?????????????????????,??????????GI??????????????

    Read the article

  • 11gR2 ??????????

    - by Allen Gao
    ???????11gR2 GI????????????????????,?10g????,???????GI?????????????1.Ocssd.bin:????????10g??????????,???????(Node Monitoring)????(Group Management)?????????????“??????????”????????2.Cssdagent.bin/Cssdmonitor.bin:?2????11gR2??????????????ocssd.bin??????(Local HeartBeat),??????1??????????????????ocssd.bin???????????,????????ocssd.bin????????,??????,???????????10g??oclsomon/oclsvmon(?????????????)?oprocd????,????11gR2???????—rebootless restart,?????????11.2.0.2????????????,????????????(????????)??????ocssd.bin?????,??????????????,??????????GI stack?????,??GI stack??????????(short disk I/O timeout)??graceful shutdown,????????,??,????????????????????????11gR2 ??????????????1.Ocssd.log2.Cssdagent ? cssdmonitor logs<GI_home>/log/<node_name>/agent/ohasd/oracssdagent_root/oracssdagent_root.log<GI_home>/log/<node_name>/agent/ohasd/oracssdmonitor_root_root/oracssdmonitor_root.log3.Cluster alert log<GI_home>/log/<node_name>/alert<node_name>.log4.OS log5.OSW ?? CHM ????,??????????????????1.???????????????????????????????,??????10g???????????????????????????GI alert log ??,?????node2?2012-08-15 16:30:06.554 [cssd(11011) ]CRS-1612:Network communication with node node1 (1) missing for 50% of timeout interval.  Removal of this node from cluster in 14.510 seconds2012-08-15 16:30:13.586 [cssd(11011) ]CRS-1611:Network communication with node node1 (1) missing for 75% of timeout interval.  Removal of this node from cluster in 7.470 seconds2012-08-15 16:30:18.606 [cssd(11011) ]CRS-1610:Network communication with node node1 (1) missing for 90% of timeout interval.  Removal of this node from cluster in 2.450 seconds2012-08-15 16:30:21.073 [cssd(11011) ]CRS-1632:Node node1 is being removed from the cluster in cluster incarnation 2363798322012-08-15 16:30:21.086 [cssd(11011) ]CRS-1601:CSSD Reconfiguration complete. Active nodes are node2 .?????????????node1?????????????????,???????, node2?? node1 ?????????node1 ???,???node1 ???????????????(????os log ??OSW ????),??node1 ???????node2??node1?????????,????node1??????????,???reconfiguration,????????????,????????????,?11.2.0.2??,??rebootless restart???,node eviction ????????GI stack??,????????????,???node2?node1?????????,node1?ocssd.bin??????(????ocssd.log??)??node1???????????????,??node1??????GI node eviction????2.???????????????,?????10g???????,???????????3.??ocssd.bin ????Cssdagent/Cssdmonitor.bin????????????,??????,????,????oracssdagent_root.log ?oracssdmonitor_root.log ????????2012-07-23 14:09:58.506: [ USRTHRD][1095805248] (:CLSN00111: )clsnproc_needreboot: Impending reboot at 75% of limit 28030; disk timeout 28030, network timeout 26380, last heartbeat from CSSD at epoch seconds 1343023777.410, 21091 milliseconds ago based on invariant clock 269251595; now polling at 100 ms……2012-07-23 14:10:02.704: [ USRTHRD][1095805248] (:CLSN00111: )clsnproc_needreboot: Impending reboot at 90% of limit 28030; disk timeout 28030, network timeout 26380, last heartbeat from CSSD at epoch seconds 1343023777.410, 25291 milliseconds ago based on invariant clock 269251595; now polling at 100 ms……???????????????timeout???28 ???(misscount – reboot time)?4.?????????????????? ??????????????????????,????ocssd.bin??????,?????????????,?????????????ocssd.bin??,????????os???????????OSW??,???? ??????? 
cpu ???Linux OSWbb v5.0 node1SNAP_INTERVAL 30CPU_COUNT 8OSWBB_ARCHIVE_DEST /osw/archiveprocs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa……zzz ***Mon Aug 30 17:55:21 CST 2012158  6 4200956 923940   7664 19088464    0    0  1296  3574 11153 231579  0 100  0  0  0zzz ***Mon Aug 30 17:55:53 CST 2012135  4 4200956 923760   7812 19089344    0    0     4    45  570 14563  0 100  0  0  0zzz ***Mon Aug 30 17:56:53 CST 2012126  2 4200956 923784   8396 19083620    0    0   196  1121  651 15941  2 98  0  0  0?????????????,????10g??????11gR2????????????????,??????,????????Note 1050693.1 : Troubleshooting 11.2 Clusterware Node Evictions (Reboots)

    Read the article

  • A plugin is preventing Eclipse from starting up

    - by Mahmoud Hossam
    It just gives me a blank window, and the splash screen doesn't go away. I tried running it in a terminal, turns out it's a problematic plugin. Is there a way to disable that plugin without the GUI? There's the error log: [org.eclipse.contribution.weaving.jdt] error at org/eclipse/contribution/jdt/IsWovenTester.aj::0 class 'org.eclipse.contribution.jdt.IsWovenTester' is already woven and has not been built in reweavable mode [org.eclipse.contribution.weaving.jdt] error at org/eclipse/contribution/jdt/IsWovenTester.aj::0 class 'org.eclipse.contribution.jdt.IsWovenTester$WeavingMarker' is already woven and has not been built in reweavable mode [org.eclipse.jdt.core] warning at org/eclipse/contribution/jdt/sourceprovider/SourceTransformerAspect.aj:106::0 does not match because declaring type is org.eclipse.jdt.core.IOpenable, if match desired use target(org.eclipse.jdt.core.ICompilationUnit) [Xlint:unmatchedSuperTypeInCall] see also: org/eclipse/jdt/internal/core/SourceRefElement.java:198::0 [org.eclipse.jdt.ui] warning at org/eclipse/contribution/jdt/sourceprovider/SourceTransformerAspect.aj:106::0 does not match because declaring type is org.eclipse.jdt.core.ITypeRoot, if match desired use target(org.eclipse.jdt.core.ICompilationUnit) [Xlint:unmatchedSuperTypeInCall] see also: org/eclipse/jdt/internal/ui/javaeditor/ASTProvider.java:572::0 [org.eclipse.contribution.weaving.jdt] error at org/eclipse/contribution/jdt/sourceprovider/SourceTransformerAspect.aj::0 class 'org.eclipse.contribution.jdt.sourceprovider.SourceTransformerAspect' is already woven and has not been built in reweavable mode [org.eclipse.contribution.weaving.jdt] error at org/eclipse/contribution/jdt/cuprovider/CompilationUnitProviderAspect.aj::0 class 'org.eclipse.contribution.jdt.cuprovider.CompilationUnitProviderAspect' is already woven and has not been built in reweavable mode [ScalaPlugin] [scalaLibBundle] Found 0 bundles: LogFilter.isLoggable threw a non-fatal unchecked exception as follows: java.lang.NullPointerException at org.eclipse.core.internal.runtime.Log.isLoggable(Log.java:101) at org.eclipse.equinox.log.internal.ExtendedLogReaderServiceFactory.safeIsLoggable(ExtendedLogReaderServiceFactory.java:59) at org.eclipse.equinox.log.internal.ExtendedLogReaderServiceFactory.logPrivileged(ExtendedLogReaderServiceFactory.java:164) at org.eclipse.equinox.log.internal.ExtendedLogReaderServiceFactory.log(ExtendedLogReaderServiceFactory.java:150) at org.eclipse.equinox.log.internal.ExtendedLogServiceFactory.log(ExtendedLogServiceFactory.java:65) at org.eclipse.equinox.log.internal.ExtendedLogServiceImpl.log(ExtendedLogServiceImpl.java:87) at org.eclipse.equinox.log.internal.LoggerImpl.log(LoggerImpl.java:54) at org.eclipse.core.internal.runtime.Log.log(Log.java:60) at scala.tools.eclipse.util.DefaultLogger.warning(DefaultLogger.scala:46) at scala.tools.eclipse.ScalaPlugin$$anonfun$3.apply(ScalaPlugin.scala:131) at scala.tools.eclipse.ScalaPlugin$$anonfun$3.apply(ScalaPlugin.scala:130) at scala.Option.getOrElse(Option.scala:108) at scala.tools.eclipse.ScalaPlugin.<init>(ScalaPlugin.scala:130) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:525) at java.lang.Class.newInstance0(Class.java:372) at java.lang.Class.newInstance(Class.java:325) at 
org.eclipse.osgi.framework.internal.core.AbstractBundle.loadBundleActivator(AbstractBundle.java:166) at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:679) at org.eclipse.osgi.framework.internal.core.BundleHost.startWorker(BundleHost.java:381) at org.eclipse.osgi.framework.internal.core.AbstractBundle.start(AbstractBundle.java:299) at org.eclipse.osgi.framework.util.SecureAction.start(SecureAction.java:440) at org.eclipse.osgi.internal.loader.BundleLoader.setLazyTrigger(BundleLoader.java:268) at org.eclipse.core.runtime.internal.adaptor.EclipseLazyStarter.postFindLocalClass(EclipseLazyStarter.java:107) at org.eclipse.osgi.baseadaptor.loader.ClasspathManager.findLocalClass(ClasspathManager.java:462) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.findLocalClass(DefaultClassLoader.java:216) at org.eclipse.osgi.internal.loader.BundleLoader.findLocalClass(BundleLoader.java:400) at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:476) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:429) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:417) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107) at java.lang.ClassLoader.loadClass(ClassLoader.java:356) at org.eclipse.osgi.internal.loader.BundleLoader.loadClass(BundleLoader.java:345) at org.eclipse.osgi.framework.internal.core.BundleHost.loadClass(BundleHost.java:229) at org.eclipse.osgi.framework.internal.core.AbstractBundle.loadClass(AbstractBundle.java:1207) at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.createExecutableExtension(RegistryStrategyOSGI.java:174) at org.eclipse.core.internal.registry.ExtensionRegistry.createExecutableExtension(ExtensionRegistry.java:905) at org.eclipse.core.internal.registry.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:243) at org.eclipse.core.internal.registry.ConfigurationElementHandle.createExecutableExtension(ConfigurationElementHandle.java:55) at org.eclipse.contribution.jdt.cuprovider.CompilationUnitProviderRegistry.registerProviders(CompilationUnitProviderRegistry.java:69) at org.eclipse.contribution.jdt.cuprovider.CompilationUnitProviderRegistry.getProvider(CompilationUnitProviderRegistry.java:46) at org.eclipse.contribution.jdt.cuprovider.CompilationUnitProviderAspect.ajc$inlineAccessMethod$org_eclipse_contribution_jdt_cuprovider_CompilationUnitProviderAspect$org_eclipse_contribution_jdt_cuprovider_CompilationUnitProviderRegistry$getProvider(CompilationUnitProviderAspect.aj:1) at org.eclipse.jdt.internal.core.PackageFragment.init$_aroundBody7$advice(PackageFragment.java:47) at org.eclipse.jdt.internal.core.PackageFragment.getCompilationUnit(PackageFragment.java:216) at org.eclipse.jdt.internal.core.JavaModelManager.createCompilationUnitFrom(JavaModelManager.java:962) at org.eclipse.jdt.internal.core.JavaModelManager.create(JavaModelManager.java:871) at org.eclipse.jdt.core.JavaCore.create(JavaCore.java:2622) at org.eclipse.jdt.internal.ui.javaeditor.CompilationUnitDocumentProvider.createCompilationUnit(CompilationUnitDocumentProvider.java:941) at org.eclipse.jdt.internal.ui.javaeditor.CompilationUnitDocumentProvider.createFileInfo(CompilationUnitDocumentProvider.java:974) at org.eclipse.ui.editors.text.TextFileDocumentProvider.connect(TextFileDocumentProvider.java:478) at 
org.eclipse.jdt.internal.ui.javaeditor.CompilationUnitDocumentProvider.connect(CompilationUnitDocumentProvider.java:1243) at org.eclipse.ui.texteditor.AbstractTextEditor.doSetInput(AbstractTextEditor.java:4213) at org.eclipse.ui.texteditor.StatusTextEditor.doSetInput(StatusTextEditor.java:237) at org.eclipse.ui.texteditor.AbstractDecoratedTextEditor.doSetInput(AbstractDecoratedTextEditor.java:1451) at org.eclipse.jdt.internal.ui.javaeditor.JavaEditor.internalDoSetInput(JavaEditor.java:2563) at org.eclipse.jdt.internal.ui.javaeditor.JavaEditor.doSetInput(JavaEditor.java:2536) at org.eclipse.jdt.internal.ui.javaeditor.CompilationUnitEditor.doSetInput(CompilationUnitEditor.java:1395) at org.eclipse.ui.texteditor.AbstractTextEditor$19.run(AbstractTextEditor.java:3200) at org.eclipse.jface.operation.ModalContext.runInCurrentThread(ModalContext.java:464) at org.eclipse.jface.operation.ModalContext.run(ModalContext.java:372) at org.eclipse.jface.window.ApplicationWindow$1.run(ApplicationWindow.java:759) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.jface.window.ApplicationWindow.run(ApplicationWindow.java:756) at org.eclipse.ui.internal.WorkbenchWindow.run(WorkbenchWindow.java:2642) at org.eclipse.ui.texteditor.AbstractTextEditor.internalInit(AbstractTextEditor.java:3218) at org.eclipse.ui.texteditor.AbstractTextEditor.init(AbstractTextEditor.java:3245) at org.eclipse.ui.internal.EditorManager.createSite(EditorManager.java:828) at org.eclipse.ui.internal.EditorReference.createPartHelper(EditorReference.java:647) at org.eclipse.ui.internal.EditorReference.createPart(EditorReference.java:465) at org.eclipse.ui.internal.WorkbenchPartReference.getPart(WorkbenchPartReference.java:595) at org.eclipse.ui.internal.EditorAreaHelper.setVisibleEditor(EditorAreaHelper.java:271) at org.eclipse.ui.internal.EditorManager.setVisibleEditor(EditorManager.java:1459) at org.eclipse.ui.internal.EditorManager$5.runWithException(EditorManager.java:972) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:3563) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3212) at org.eclipse.ui.application.WorkbenchAdvisor.openWindows(WorkbenchAdvisor.java:803) at org.eclipse.ui.internal.Workbench$33.runWithException(Workbench.java:1595) at org.eclipse.ui.internal.StartupThreading$StartupRunnable.run(StartupThreading.java:31) at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35) at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:135) at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:3563) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3212) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2604) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2494) at org.eclipse.ui.internal.Workbench$7.run(Workbench.java:674) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:667) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:123) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at 
org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:344) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:622) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:577) at org.eclipse.equinox.launcher.Main.run(Main.java:1410) at org.eclipse.equinox.launcher.Main.main(Main.java:1386) [StartupDiagnostics$] startup diagnostics: previous version = 2.0.0.rc01-2_09-201111091447-ce49e0a [StartupDiagnostics$] startup diagnostics: CURRENT version = 2.0.0.rc01-2_09-201111091447-ce49e0a [ScalaPlugin] Scala compiler bundle: reference:file:plugins/org.scala-ide.scala.compiler_2.9.2.r25964-b20111108034957.jar [org.eclipse.jdt.core] warning at org/eclipse/contribution/jdt/sourceprovider/SourceTransformerAspect.aj:106::0 does not match because declaring type is org.eclipse.jdt.core.IOpenable, if match desired use target(org.eclipse.jdt.core.ICompilationUnit) [Xlint:unmatchedSuperTypeInCall] see also: org/eclipse/jdt/internal/core/LocalVariable.java:363::0 [org.eclipse.contribution.weaving.jdt] error at org/eclipse/contribution/jdt/imagedescriptor/ImageDescriptorSelectorAspect.aj::0 class 'org.eclipse.contribution.jdt.imagedescriptor.ImageDescriptorSelectorAspect' is already woven and has not been built in reweavable mode [org.eclipse.jdt.ui] warning at org/eclipse/contribution/jdt/sourceprovider/SourceTransformerAspect.aj:106::0 does not match because declaring type is org.eclipse.jdt.core.IOpenable, if match desired use target(org.eclipse.jdt.core.ICompilationUnit) [Xlint:unmatchedSuperTypeInCall] see also: org/eclipse/jdt/internal/ui/text/java/hover/JavadocHover.java:630::0 [org.eclipse.contribution.weaving.jdt] error at org/eclipse/contribution/jdt/itdawareness/ITDAwarenessAspect.aj::0 class 'org.eclipse.contribution.jdt.itdawareness.ITDAwarenessAspect' is already woven and has not been built in reweavable mode [ScalaPlugin] open Ride.java

    Read the article

  • Embedded Glassfish - logging

    - by Walter White
    Hi all,

    I have migrated from log4j to logback and am also transitioning from Jetty to Glassfish. I haven't updated my logback configuration from what I had used with Jetty, and consequently I am not seeing any logs being written. What logging provider should I use? Should I just do my configuration with the Glassfish loggers in domain.xml?

        <access-log rotation-interval-in-minutes="15" rotation-suffix="yyyy-MM-dd"/>
        <log-service file="${com.sun.aas.instanceRoot}/logs/server.log" log-rotation-limit-in-bytes="2000000">
            <module-log-levels/>
        </log-service>

    These are the defaults in domain.xml. I'd like to split the logs up into several files as well as control the log level for each package. I think I can figure out how to configure them, but should I use Glassfish logging or can I use logback?

    Walter

    Read the article

  • Log4j Grouping application logs

    - by mhanda
    Hi, I am trying to group the logs of multiple related applications into a single log file. For example, I have 3 applications: A1.esb, A2.esb and A3.esb. I want all the logs from these 3 applications to get logged to a single log file called A.log. Similarly, I want B.log for B1.esb, B2.esb and B3.esb. I am using log4j in the JBoss application server. I have tried to use TCLFilter, but I only succeeded in getting individual applications logging to individual log files; that is, A1.esb logging to A1.log, A2.esb logging to A2.log and so on. I couldn't figure out a way of grouping the logging.

    Read the article

  • Javascript: Pass array as optional method args

    - by Dave Paroulek
    console.log takes a string and replaces tokens with values, for example:

        console.log("My name is %s, and I like %s", 'Dave', 'Javascript')

    would print:

        My name is Dave, and I like Javascript

    I'd like to wrap this inside a method like so:

        function log(msg, values) {
            if (config.log == true) {
                console.log(msg, values);
            }
        }

    The 'values' arg might be a single value or several optional args. How can I accomplish this? If I call it like so:

        log("My name is %s, and I like %s", "Dave", "Javascript");

    I get this (it doesn't recognize "Javascript" as a 3rd argument):

        My name is Dave, and I like %s

    If I call this:

        log("My name is %s, and I like %s", ["Dave", "Javascript"]);

    then it treats the second arg as an array (it doesn't expand to multiple args). What trick am I missing to get it to expand the optional args?

    Read the article

  • Performance Tricks for C# Logging

    - by Charles
    I am looking into C# logging and I do not want my log messages to spend any time processing if the message is below the logging threshold. The best I can see log4net does is a threshold check AFTER evaluating the log parameters. Example:

        _logger.Debug( "My complicated log message " + thisFunctionTakesALongTime() + " will take a long time" )

    Even if the threshold is above Debug, thisFunctionTakesALongTime will still be evaluated. In log4net you are supposed to use _logger.IsDebugEnabled, so you end up with

        if( _logger.IsDebugEnabled )
            _logger.Debug( "Much faster" )

    I want to know if there is a better solution for .NET logging that does not involve a check each time I want to log. In C++ I am allowed to do

        LOG_DEBUG( "My complicated log message " + thisFunctionTakesALongTime() + " will take no time" )

    since my LOG_DEBUG macro does the log level check itself. This frees me to have a 1-line log message throughout my app, which I greatly prefer. Anyone know of a way to replicate this behavior in C#?
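    For comparison, the same guard idea sketched in Python's logging module, where isEnabledFor plays the role of IsDebugEnabled; expensive_summary() is a made-up stand-in for thisFunctionTakesALongTime():

        import logging

        logger = logging.getLogger("app")

        def expensive_summary():
            # Stand-in for a slow call whose result only matters at DEBUG level.
            return "lots of detail"

        # The level check keeps expensive_summary() from running at all when
        # DEBUG is filtered out, instead of evaluating it and discarding the message.
        if logger.isEnabledFor(logging.DEBUG):
            logger.debug("My complicated log message %s will take no time", expensive_summary())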

    Read the article

  • B-Tree Revision

    - by stan
    Hi,

    If we are looking for line intersections (horizontal and vertical lines only) and we have n lines, with half of them vertical and no intersections, then:

    - Sorting the list of line end points on y value will take N log N using mergesort
    - Each insert, delete and search of our data structure (assuming it's a B-tree) will be < log n
    - So the total search time will be N log N

    What am I missing here? If the sort using mergesort takes a time of N log N, and insert and delete take a time of < log n, are we dropping the constant factor to give an overall time of N log N? If not, then how come the < log n goes missing in the total O-notation run time?

    Thanks
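    One hedged way to do the accounting, assuming n end points and treating each B-tree operation as O(log n):

        T(n) = \underbrace{O(n \log n)}_{\text{mergesort}}
             + \underbrace{n \cdot O(\log n)}_{n\ \text{tree operations}}
             = O(n \log n)

    The per-operation log n is not dropped; it is multiplied by the n operations, and that product has the same O(n log n) order as the sort, so the two terms merge into a single O(n log n) bound.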

    Read the article

  • Getting Revisions from CVS repository

    - by Rob
    Hi,

    I am trying to somehow get the whole revision log of a particular file, but I seem to be too stupid to do that :(

    To check out a module I do the following:

        CVSROOT="/home/projects/stuff/" cvs co myworkingdir

    Within myworkingdir I have a test file called paper.tex, and from this I want to try to get the revisions, but the following attempts don't work:

        CVSROOT="/home/projects/stuff/" cvs log paper.tex
        cvs log: cannot open CVS/Entries for reading: No such file or directory
        cvs log: nothing known about paper.tex

        -bash-3.2$ CVSROOT="/home/projects/stuff/" cvs log myworkingdir/paper.tex
        cvs [log aborted]: no such directory `myworkingdir'

    Anyone have an idea how I could get the log of the revisions of the paper.tex file in the myworkingdir module? Many thanks for your help!

    Claus

    Read the article

  • LINQ: How to Use RemoveAll without using For loop with Array

    - by CrimsonX
    I currently have a log object I'd like to remove objects from, based on a LINQ query. I would like to remove all records in the log if the sum of the versions within a program is greater than 60. Currently I'm pretty confident that this will work, but it seems kludgy:

        for (int index = 0; index < 4; index++)
        {
            Log.RemoveAll(log => (log.Program[index].Version[0].Value +
                                  log.Program[index].Version[1].Value +
                                  log.Program[index].Version[2].Value) > 60);
        }

    Program is an array of 4 values and Version is an array of 3 values. Is there a simpler way to do this RemoveAll in LINQ without using the for loop? Thanks for any help in advance!

    Read the article

  • How do I configure the Python logging module in Django?

    - by mipadi
    I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:

        import logging
        import logging.handlers
        import os

        date_fmt = '%m/%d/%Y %H:%M:%S'
        log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt)
        log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
        log_name = os.path.join(log_dir, "nyrb.log")
        bytes = 1024 * 1024 # 1 MB

        if not os.path.exists(log_dir):
            os.makedirs(log_dir)

        handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7)
        handler.setFormatter(log_formatter)
        handler.setLevel(logging.DEBUG)
        logging.getLogger().setLevel(logging.DEBUG)
        logging.getLogger().addHandler(handler)

        logging.getLogger(__name__).info("Initialized logging subsystem")

    At startup, I get a couple Django-related messages, as well as the "Initialized logging subsystem", in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache), and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
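    With the root handler attached as in the settings above, a module-level logger should inherit it through propagation; a minimal usage sketch, with the view name and message purely hypothetical:

        import logging

        logger = logging.getLogger(__name__)  # e.g. "myapp.views" in a hypothetical views module

        def my_view(request):
            # Propagates up to the root logger and its RotatingFileHandler configured in settings.py.
            logger.debug("Handling request for %s", request.path)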

    Read the article

  • python file manipulation

    - by lakshmipathi
    I have a directory /tmp/dir with two types of file names:

        /tmp/dir/abc-something-server.log
        /tmp/dir/xyz-something-server.log
        ..
        ..

    and

        /tmp/dir/something-client.log

    I need to append a few lines (these lines are constant) to the files ending with "client.log":

        line 1
        line 2
        line 3
        line 4

    Append these four lines to the files ending with "client.log". For the files ending with "server.log", I need to append after a keyword, say "After-this". A "server.log" file has multiple entries of "After-this"; I need to find the first entry of "After-this", append the above four lines there, and keep the rest of the file as it is. Any help will be greatly appreciated :) Thanks in advance.
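    A minimal sketch of one way to do this in Python; the four lines and the /tmp/dir paths come from the question, while everything else (glob patterns, insert-right-after-the-keyword behaviour) is an assumption:

        import glob

        lines_to_add = ["line 1\n", "line 2\n", "line 3\n", "line 4\n"]

        # Append the four constant lines to every *client.log file.
        for path in glob.glob("/tmp/dir/*client.log"):
            with open(path, "a") as f:
                f.writelines(lines_to_add)

        # Insert the four lines right after the first line containing "After-this"
        # in every *server.log file, leaving the rest of the file untouched.
        for path in glob.glob("/tmp/dir/*server.log"):
            with open(path) as f:
                contents = f.readlines()
            for i, line in enumerate(contents):
                if "After-this" in line:
                    contents[i + 1:i + 1] = lines_to_add
                    break
            with open(path, "w") as f:
                f.writelines(contents)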

    Read the article

  • How can I generate a random human-readable colour from a seed? C#

    - by SLC
    Got a logfile, and it has all kinds of text in it. Currently it is just displayed as one colour, and each entry says something like:

        Log from section 1: Some text here
        Log from section 125: Some text here
        Log from section 17: Some text here
        Log from section 1: Some text here
        Log from section 125: Some text here
        Log from section 1: Some text here
        Log from section 17: Some text here

    Now the logfile is displayed in real time, and it would be nice to make the rows with the same section number the same colour. However, there could potentially be quite a large range of numbers. What I want to do is create a method that will take a number, and randomly generate a unique colour. The colour must be readable against a black background though, so #000000 is no good, nor is #101010 or anything too dark to read. Ideally two similar numbers will not produce the same colour, because in the above examples the numbers 1 and 17 might be too similar, and some numbers might be in the 10,000 range. Any ideas on this?
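    One common approach, sketched here in Python rather than C# purely to illustrate the idea (the hashing step and the 0.75/0.95 saturation/value constants are arbitrary choices, not from the question): hash the section number so nearby numbers land on very different hues, then convert a high-saturation, high-value HSV colour to RGB so it stays bright against a black background.

        import colorsys
        import hashlib

        def colour_for_section(section_number):
            """Map a section number to a stable hex colour that reads well on black."""
            # Hashing spreads nearby section numbers (e.g. 1 vs 17) across the hue wheel.
            digest = hashlib.md5(str(section_number).encode()).digest()
            hue = digest[0] / 255.0
            # High saturation and value keep the colour bright enough for a black background.
            r, g, b = colorsys.hsv_to_rgb(hue, 0.75, 0.95)
            return "#{:02X}{:02X}{:02X}".format(int(r * 255), int(g * 255), int(b * 255))

        print(colour_for_section(1), colour_for_section(17), colour_for_section(125))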

    Read the article

  • javascript - switch not working

    - by Fernando SBS
        function FM_log(level, text) {
            // if it is not full logging, choose what gets logged
            var log = false;
            switch (level) {
                case "addtoprio()": log = true;
                case "alternaTropas()": log = false;
                case "sendtroops()": log = false;
                defalt: log = false;
            }
            if ((logTotal == false) && (log == true))
                GM_log(horaAtual() + " - " + level + ", " + text);
            else if (logTotal == true)
                GM_log(horaAtual() + " - " + level + ", " + text);
        }

    How do I fix this switch so that it works the way I intend?

    Read the article
