Search Results

Search found 3184 results on 128 pages for 'trace'.


  • Relay Access Denied (State 13) Postfix + Dovecot + Mysql

    - by Pierre Jeptha
    So we have been scratching our heads for quite some time over this relay issue that has presented itself since we re-built our mail-server after a failed Webmin update. We are running Debian Karmic with postfix 2.6.5 and Dovecot 1.1.11, sourcing from a Mysql database and authenticating with SASL2 and PAM. Here are the symptoms of our problem: 1) When users are on our local network they can send and receive 100% perfectly fine. 2) When users are off our local network and try to send to domains not of this mail server (ie. gmail) they get the "Relay Access Denied" error. However users can send to domains of this mail server when off the local network fine. 3) We host several virtual domains on this mailserver, the primary domain being airnet.ca. The rest of our virtual domains (ex. jeptha.ca) cannot receive email from domains not hosted by this mailserver (ie. gmail and such cannot send to them). They receive bounce backs of "Relay Access Denied (State 13)". This is regardless of whether they are on our local network or not, which is why it is so urgent for us to get this solved. Here is our main.cf from postfix: myhostname = mail.airnet.ca mydomain = airnet.ca smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no smtpd_sasl_type = dovecot queue_directory = /var/spool/postfix smtpd_sasl_path = private/auth smtpd_sender_restrictions = permit_mynetworks permit_sasl_authenticated smtp_sasl_auth_enable = yes smtpd_sasl_auth_enable = yes append_dot_mydomain = no readme_directory = no smtp_tls_security_level = may smtpd_tls_security_level = may smtp_tls_note_starttls_offer = yes smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes smtpd_tls_auth_only = no alias_maps = proxy:mysql:/etc/postfix/mysql/alias.cf hash:/etc/aliases alias_database = hash:/etc/aliases mydestination = mail.airnet.ca, airnet.ca, localhost.$mydomain mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 recipient_delimiter = + local_recipient_maps = $alias_maps $virtual_mailbox_maps proxy:unix:passwd.byname home_mailbox = /var/virtual/ mail_spool_directory = /var/spool/mail mailbox_transport = maildrop smtpd_helo_required = yes disable_vrfy_command = yes smtpd_etrn_restrictions = reject smtpd_data_restrictions = reject_unauth_pipelining, permit show_user_unknown_table_name = no proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $relay_domains $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $transport_maps $mynetworks $virtual_mailbox_limit_maps $virtual_uid_maps $virtual_gid_maps virtual_alias_domains = message_size_limit = 20971520 transport_maps = proxy:mysql:/etc/postfix/mysql/vdomain.cf virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql/vmailbox.cf virtual_alias_maps = proxy:mysql:/etc/postfix/mysql/alias.cf hash:/etc/mailman/aliases virtual_uid_maps = proxy:mysql:/etc/postfix/mysql/vuid.cf virtual_gid_maps = proxy:mysql:/etc/postfix/mysql/vgid.cf virtual_mailbox_base = / virtual_mailbox_limit = 209715200 virtual_mailbox_extended = yes virtual_create_maildirsize = yes virtual_mailbox_limit_maps = proxy:mysql:/etc/postfix/mysql/vmlimit.cf virtual_mailbox_limit_override = yes virtual_mailbox_limit_inbox = no virtual_overquote_bounce = yes virtual_minimum_uid = 1 maximal_queue_lifetime = 1d bounce_queue_lifetime = 4h 
delay_warning_time = 1h append_dot_mydomain = no qmgr_message_active_limit = 500 broken_sasl_auth_clients = yes smtpd_sasl_path = private/auth smtpd_sasl_local_domain = $myhostname smtpd_sasl_security_options = noanonymous smtpd_sasl_authenticated_header = yes smtp_bind_address = 142.46.193.6 relay_domains = $mydestination mynetworks = 127.0.0.0, 142.46.193.0/25 inet_interfaces = all inet_protocols = all And here is the master.cf from postfix: # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - - - - smtpd #submission inet n - - - - smtpd # -o smtpd_tls_security_level=encrypt # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #smtps inet n - - - - smtpd # -o smtpd_tls_wrappermode=yes # -o smtpd_sasl_auth_enable=yes # -o smtpd_client_restrictions=permit_sasl_authenticated,reject # -o milter_macro_daemon_name=ORIGINATING #628 inet n - - - - qmqpd pickup fifo n - - 60 1 pickup cleanup unix n - - - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - - 300 1 oqmgr tlsmgr unix - - - 1000? 1 tlsmgr rewrite unix - - - - - trivial-rewrite bounce unix - - - - 0 bounce defer unix - - - - 0 bounce trace unix - - - - 0 bounce verify unix - - - - 1 verify flush unix n - - 1000? 0 flush proxymap unix - - n - - proxymap proxywrite unix - - n - 1 proxymap smtp unix - - - - - smtp # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - - - - smtp -o smtp_fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - - - - showq error unix - - - - - error retry unix - - - - - error discard unix - - - - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - - - - lmtp anvil unix - - - - 1 anvil scache unix - - - - 1 scache maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient} # # See the Postfix UUCP_README file for configuration details. # uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) # # Other external delivery methods. # ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient scalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension} mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user} spfpolicy unix - n n - - spawn user=nobody argv=/usr/bin/perl /usr/sbin/postfix-policyd-spf-perl smtp-amavis unix - - n - 4 smtp -o smtp_data_done_timeout=1200 -o smtp_send_xforward_command=yes -o disable_dns_lookups=yes #127.0.0.1:10025 inet n - n - - smtpd dovecot unix - n n - - pipe flags=DRhu user=dovecot:21pever1lcha0s argv=/usr/lib/dovecot/deliver -d ${recipient Here is Dovecot.conf protocols = imap imaps pop3 pop3s disable_plaintext_auth = no log_path = /etc/dovecot/logs/err info_log_path = /etc/dovecot/logs/info log_timestamp = "%Y-%m-%d %H:%M:%S ". 
syslog_facility = mail ssl_listen = 142.46.193.6 ssl_disable = no ssl_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem ssl_key_file = /etc/ssl/private/ssl-cert-snakeoil.key mail_location = mbox:~/mail:INBOX=/var/virtual/%d/mail/%u mail_privileged_group = mail mail_debug = yes protocol imap { login_executable = /usr/lib/dovecot/imap-login mail_executable = /usr/lib/dovecot/rawlog /usr/lib/dovecot/imap mail_executable = /usr/lib/dovecot/gdbhelper /usr/lib/dovecot/imap mail_executable = /usr/lib/dovecot/imap imap_max_line_length = 65536 mail_max_userip_connections = 20 mail_plugin_dir = /usr/lib/dovecot/modules/imap login_greeting_capability = yes } protocol pop3 { login_executable = /usr/lib/dovecot/pop3-login mail_executable = /usr/lib/dovecot/pop3 pop3_enable_last = no pop3_uidl_format = %08Xu%08Xv mail_max_userip_connections = 10 mail_plugin_dir = /usr/lib/dovecot/modules/pop3 } protocol managesieve { sieve=~/.dovecot.sieve sieve_storage=~/sieve } mail_plugin_dir = /usr/lib/dovecot/modules/lda auth_executable = /usr/lib/dovecot/dovecot-auth auth_process_size = 256 auth_cache_ttl = 3600 auth_cache_negative_ttl = 3600 auth_username_chars = abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890.-_@ auth_verbose = yes auth_debug = yes auth_debug_passwords = yes auth_worker_max_count = 60 auth_failure_delay = 2 auth default { mechanisms = plain login passdb sql { args = /etc/dovecot/dovecot-sql.conf } userdb sql { args = /etc/dovecot/dovecot-sql.conf } socket listen { client { path = /var/spool/postfix/private/auth mode = 0660 user = postfix group = postfix } master { path = /var/run/dovecot/auth-master mode = 0600 } } } Please, if you require anything do not hesistate, I will post it ASAP. Any help or suggestions are greatly appreciated! Thanks, Pierre
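
    One thing that stands out in the main.cf above is that smtpd_recipient_restrictions is not set at all (so Postfix falls back to its default of permit_mynetworks, reject_unauth_destination) and none of the extra virtual domains appear in mydestination, relay_domains or a virtual_mailbox_domains setting. A minimal sketch of the stanza that usually clears both symptoms is below; the vdomain.cf file name is an assumption based on the existing transport_maps entry and should point at whatever MySQL query lists the hosted domains:

        # main.cf sketch, not a drop-in replacement
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination
        # let Postfix accept mail for every hosted virtual domain
        virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql/vdomain.cf

    After a postfix reload, a test message from an external address to one of the affected domains, plus a send attempt from a roaming SASL-authenticated client, should confirm whether the relay checks now pass.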

    Read the article

  • Connection drops while transferring large files to one server on a network

    - by Charlotte
    My company has two sites, each with their own LAN, using site to site VPN tunnel to connect the two sites. When transferring files (especially larger files) from site1 to site2 server1, the file transfer fails. I don't think this can be a VPN issue because transferring the same files to site2 server2 which is on the same network as server1 works fine. Pings to server1 and server2 at site2 from site1 are about the same, mostly 19/20ms with the odd one up to 50ms. As server1 is DB server with a high load I thought the NIC maybe overloaded, but a transfer from site2 server1 to site2 server2 works fine, and that uses the same NIC on server1 as transfers from site1 to site2 server1. The servers are both Windows Server 2003 VMs with VMXNET 3 NICs. Site2 Server1 route print: IPv4 Route Table =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x10003 ...00 50 56 99 28 9b ...... vmxnet3 Ethernet Adapter #2 0x10004 ...00 50 56 99 18 97 ...... vmxnet3 Ethernet Adapter =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.20.10.1 172.20.10.18 10 10.10.10.0 255.255.255.0 10.10.10.70 10.10.10.70 10 10.10.10.70 255.255.255.255 127.0.0.1 127.0.0.1 10 10.255.255.255 255.255.255.255 10.10.10.70 10.10.10.70 10 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 172.20.10.0 255.255.255.0 172.20.10.18 172.20.10.18 10 172.20.10.18 255.255.255.255 127.0.0.1 127.0.0.1 10 172.20.255.255 255.255.255.255 172.20.10.18 172.20.10.18 10 224.0.0.0 240.0.0.0 10.10.10.70 10.10.10.70 10 224.0.0.0 240.0.0.0 172.20.10.18 172.20.10.18 10 255.255.255.255 255.255.255.255 10.10.10.70 10.10.10.70 1 255.255.255.255 255.255.255.255 172.20.10.18 172.20.10.18 1 Default Gateway: 172.20.10.1 =========================================================================== Persistent Routes: None Site2 Server2 route print IPv4 Route Table =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x10003 ...00 50 56 99 15 00 ...... 
vmxnet3 Ethernet Adapter =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 172.20.10.1 172.20.10.114 10 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 172.20.10.0 255.255.255.0 172.20.10.114 172.20.10.114 10 172.20.10.114 255.255.255.255 127.0.0.1 127.0.0.1 10 172.20.255.255 255.255.255.255 172.20.10.114 172.20.10.114 10 224.0.0.0 240.0.0.0 172.20.10.114 172.20.10.114 10 255.255.255.255 255.255.255.255 172.20.10.114 172.20.10.114 1 Default Gateway: 172.20.10.1 =========================================================================== Persistent Routes: None Site1 Server route print: =========================================================================== Interface List 14...00 50 56 93 00 0b ......vmxnet3 Ethernet Adapter #2 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.168.1 192.168.168.118 261 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.168.0 255.255.255.0 On-link 192.168.168.118 261 192.168.168.118 255.255.255.255 On-link 192.168.168.118 261 192.168.168.255 255.255.255.255 On-link 192.168.168.118 261 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.168.118 261 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.168.118 261 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 0.0.0.0 0.0.0.0 192.168.168.1 Default =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 1 306 ::1/128 On-link 14 261 fe80::/64 On-link 14 261 fe80::3c6b:996f:ef36:ee76/128 On-link 1 306 ff00::/8 On-link 14 261 ff00::/8 On-link =========================================================================== Persistent Routes: None tracert from site1 to site2 server1: Tracing route to server1 [172.20.10.18] over a maximum of 30 hops: 1 19 ms 19 ms 19 ms server1 [172.20.10.18] Trace complete. tracert from site2 server1 to site1: When this was run it went to the external IP of site2, then to a couple of external ips of the isp, then times out. Can anyone suggest any troubleshooting steps? Thanks, Charlotte.
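
    A pattern of small packets (pings) passing while large transfers die across a site-to-site tunnel is a classic MTU/fragmentation symptom rather than a routing fault, so it is worth probing the path MTU from site1 towards server1 before digging further. On Windows this can be done with don't-fragment pings; the sizes below are examples (1472 = 1500 minus 28 bytes of IP/ICMP header, and VPN encapsulation usually pushes the usable size lower):

        ping -f -l 1472 172.20.10.18
        ping -f -l 1400 172.20.10.18
        ping -f -l 1350 172.20.10.18

    If the larger sizes report "Packet needs to be fragmented but DF set" while smaller ones succeed, clamping the TCP MSS on the VPN endpoints (or lowering the MTU on server1's NIC) is the usual fix. The failed tracert from server1 back to site1 is also worth chasing: it suggests return traffic leaves via the default gateway instead of the tunnel, so check the site2 VPN device's route for 192.168.168.0/24.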

    Read the article

  • Can't connect to smtp (postfix, dovecot) after making a change and trying to change it back

    - by UberBrainChild
    I am using postfix and dovecot along with zpanel and I tried enabling SSL and then turned it off as I did not have SSL configured yet and I realized it was a bit stupid at the time. I am using CentOS 6.4. I get the following error in the mail log. (I changed my host name to "myhostname" and my domain to "mydomain.com") Oct 20 01:49:06 myhostname postfix/smtpd[4714]: connect from mydomain.com[127.0.0.1] Oct 20 01:49:16 myhostname postfix/smtpd[4714]: fatal: no SASL authentication mechanisms Oct 20 01:49:17 myhostname postfix/master[4708]: warning: process /usr/libexec/postfix/smtpd pid 4714 exit status 1 Oct 20 01:49:17 amyhostname postfix/master[4708]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling Reading on forums and similar questions I figured it was just a service that was not running or installed. However I can see that saslauthd is currently up and running on my system and restarting it does not help. Here is my postfix master.cf # # Postfix master process configuration file. For details on the format # of the file, see the Postfix master(5) manual page. # # ***** Unused items removed ***** # ========================================================================== # service type private unpriv chroot wakeup maxproc command + args # (yes) (yes) (yes) (never) (100) # ========================================================================== smtp inet n - n - - smtpd # -o content_filter=smtp-amavis:127.0.0.1:10024 # -o receive_override_options=no_address_mappings pickup fifo n - n 60 1 pickup submission inet n - - - - smtpd -o content_filter= -o receive_override_options=no_header_body_checks cleanup unix n - n - 0 cleanup qmgr fifo n - n 300 1 qmgr #qmgr fifo n - n 300 1 oqmgr tlsmgr unix - - n 1000? 1 tlsmgr rewrite unix - - n - - trivial-rewrite bounce unix - - n - 0 bounce defer unix - - n - 0 bounce trace unix - - n - 0 bounce verify unix - - n - 1 verify flush unix n - n 1000? 0 flush proxymap unix - - n - - proxymap smtp unix - - n - - smtp smtps inet n - - - - smtpd # When relaying mail as backup MX, disable fallback_relay to avoid MX loops relay unix - - n - - smtp -o fallback_relay= # -o smtp_helo_timeout=5 -o smtp_connect_timeout=5 showq unix n - n - - showq error unix - - n - - error discard unix - - n - - discard local unix - n n - - local virtual unix - n n - - virtual lmtp unix - - n - - lmtp anvil unix - - n - 1 anvil scache unix - - n - 1 scache # # ==================================================================== # Interfaces to non-Postfix software. Be sure to examine the manual # pages of the non-Postfix software to find out what options it wants. # ==================================================================== maildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/local/bin/maildrop -d ${recipient} uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient) ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient) bsmtp unix - n n - - pipe flags=Fq. 
user=foo argv=/usr/local/sbin/bsmtp -f $sender $nexthop $recipient # # spam/virus section # smtp-amavis unix - - y - 2 smtp -o smtp_data_done_timeout=1200 -o disable_dns_lookups=yes -o smtp_send_xforward_command=yes 127.0.0.1:10025 inet n - y - - smtpd -o content_filter= -o smtpd_helo_restrictions= -o smtpd_sender_restrictions= -o smtpd_recipient_restrictions=permit_mynetworks,reject -o mynetworks=127.0.0.0/8 -o smtpd_error_sleep_time=0 -o smtpd_soft_error_limit=1001 -o smtpd_hard_error_limit=1000 -o receive_override_options=no_header_body_checks -o smtpd_bind_address=127.0.0.1 -o smtpd_helo_required=no -o smtpd_client_restrictions= -o smtpd_restriction_classes= -o disable_vrfy_command=no -o strict_rfc821_envelopes=yes # # Dovecot LDA dovecot unix - n n - - pipe flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient} # # Vacation mail vacation unix - n n - - pipe flags=Rq user=vacation argv=/var/spool/vacation/vacation.pl -f ${sender} -- ${recipient} And here is dovecot ## ## Dovecot config file ## listen = * disable_plaintext_auth = no protocols = imap pop3 lmtp sieve auth_mechanisms = plain login passdb { driver = sql args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf } userdb { driver = sql } userdb { driver = sql args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf } mail_location = maildir:/var/zpanel/vmail/%d/%n first_valid_uid = 101 #last_valid_uid = 0 first_valid_gid = 12 #last_valid_gid = 0 #mail_plugins = mailbox_idle_check_interval = 30 secs maildir_copy_with_hardlinks = yes service imap-login { inet_listener imap { port = 143 } } service pop3-login { inet_listener pop3 { port = 110 } } service lmtp { unix_listener lmtp { #mode = 0666 } } service imap { vsz_limit = 256M } service pop3 { } service auth { unix_listener auth-userdb { mode = 0666 user = vmail group = mail } # Postfix smtp-auth unix_listener /var/spool/postfix/private/auth { mode = 0666 user = postfix group = postfix } } service auth-worker { } service dict { unix_listener dict { mode = 0666 user = vmail group = mail } } service managesieve-login { inet_listener sieve { port = 4190 } service_count = 1 process_min_avail = 0 vsz_limit = 64M } service managesieve { } lda_mailbox_autocreate = yes lda_mailbox_autosubscribe = yes protocol lda { mail_plugins = quota sieve postmaster_address = [email protected] } protocol imap { mail_plugins = quota imap_quota trash imap_client_workarounds = delay-newmail } lmtp_save_to_detail_mailbox = yes protocol lmtp { mail_plugins = quota sieve } protocol pop3 { mail_plugins = quota pop3_client_workarounds = outlook-no-nuls oe-ns-eoh } protocol sieve { managesieve_max_line_length = 65536 managesieve_implementation_string = Dovecot Pigeonhole managesieve_max_compile_errors = 5 } dict { quotadict = mysql:/etc/zpanel/configs/dovecot2/dovecot-dict-quota.conf } plugin { # quota = dict:User quota::proxy::quotadict quota = maildir:User quota acl = vfile:/etc/dovecot/acls trash = /etc/zpanel/configs/dovecot2/dovecot-trash.conf sieve_global_path = /var/zpanel/sieve/globalfilter.sieve sieve = ~/dovecot.sieve sieve_dir = ~/sieve sieve_global_dir = /var/zpanel/sieve/ #sieve_extensions = +notify +imapflags sieve_max_script_size = 1M #sieve_max_actions = 32 #sieve_max_redirects = 4 } log_path = /var/log/dovecot.log info_log_path = /var/log/dovecot-info.log debug_log_path = /var/log/dovecot-debug.log mail_debug=yes ssl = no Does anyone have any ideas or tips on what I can try to get this working? 
Thanks for all the help EDIT: Output of postconf -n alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases broken_sasl_auth_clients = yes command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix debug_peer_level = 2 delay_warning_time = 4 disable_vrfy_command = yes html_directory = no inet_interfaces = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = localhost.$mydomain, localhost mydomain = control.yourdomain.com myhostname = control.yourdomain.com mynetworks = all newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.2.2/README_FILES recipient_delimiter = + relay_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-relay_domains_maps.cf sample_directory = /usr/share/doc/postfix-2.2.2/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop smtp_use_tls = no smtpd_client_restrictions = smtpd_data_restrictions = reject_unauth_pipelining smtpd_helo_required = yes smtpd_helo_restrictions = smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_recipient_domain smtpd_sasl_auth_enable = yes smtpd_sasl_local_domain = $myhostname smtpd_sasl_path = private/auth smtpd_sasl_security_options = noanonymous smtpd_sasl_type = dovecot smtpd_sender_restrictions = smtpd_use_tls = no soft_bounce = yes transport_maps = hash:/etc/postfix/transport unknown_local_recipient_reject_code = 550 virtual_alias_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_alias_maps.cf, regexp:/etc/zpanel/configs/postfix/virtual_regexp virtual_gid_maps = static:12 virtual_mailbox_base = /var/zpanel/vmail virtual_mailbox_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_domains_maps.cf virtual_mailbox_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_mailbox_maps.cf virtual_minimum_uid = 101 virtual_transport = dovecot virtual_uid_maps = static:101
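
    "fatal: no SASL authentication mechanisms" generally means smtpd could not obtain a usable mechanism list from its SASL backend, i.e. the Dovecot auth socket is unreachable or Dovecot is not offering any mechanisms. A quick check sequence, with paths taken from the configs above, might be:

        # what postfix thinks it should use
        postconf smtpd_sasl_type smtpd_sasl_path
        # does the socket exist under the postfix queue directory?
        ls -l /var/spool/postfix/private/auth
        # restart both daemons so the listener is recreated, then watch the log
        service dovecot restart && service postfix restart
        tail -f /var/log/maillog

    If the socket is missing, the service auth { unix_listener /var/spool/postfix/private/auth ... } block shown above is the place to look; if it exists but the error persists, run dovecot -n to make sure Dovecot actually started cleanly after the SSL change, since a Dovecot config error leaves Postfix with no auth backend at all.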

    Read the article

  • Can't launch Oneiric x64 instance on Eucalyptus

    - by Bruno Reis
    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. More details in the end. I didn't manage to fix it, and I will file a bug. EDIT 2: I managed to fix it, it apparently works. I have a 4-machine cluster running Ubuntu Server Natty (11.04) x64. I've installed "Ubuntu Enterprise Cloud" from the installtion CD (then updated it) on each of these machines. The cloud seems to work fine, I have lots of virtual machines running Natty servers on them. Now I'd like to run Oneiric in a virtual machine, but somehow I can't. I downloaded Oneiric's (x64) image from http://cloud-images.ubuntu.com/oneiric/current/, published it (uec-publish-tarball oneiric-server-cloudimg-amd64.tar.gz oneiric-server-cloudimg-amd64) exactly as I did with Natty, then tried to launch an instance (euca-run-instances -n 1 -k my-key -t m1.small -z my-cloud emi-XXXXXXXX) using Oneiric's image, but the instance is not able to boot. With euca-get-console-output I get the following: [ 0.461269] VFS: Cannot open root device "sda1" or unknown-block(0,0) [ 0.462388] Please append a correct "root=" boot option; here are the available partitions: [ 0.463855] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) [ 0.465331] Pid: 1, comm: swapper Not tainted 3.0.0-13-generic #22-Ubuntu [ 0.466526] Call Trace: [ 0.466989] [<ffffffff815d3ee5>] panic+0x91/0x194 [ 0.467860] [<ffffffff81ad1031>] mount_block_root+0xdc/0x18e [ 0.468891] [<ffffffff81ad126a>] mount_root+0x54/0x59 [ 0.469829] [<ffffffff81ad13dc>] prepare_namespace+0x16d/0x1a7 [ 0.470883] [<ffffffff81ad0d76>] kernel_init+0x140/0x145 [ 0.471837] [<ffffffff815f38e4>] kernel_thread_helper+0x4/0x10 [ 0.472889] [<ffffffff81ad0c36>] ? start_kernel+0x3df/0x3df [ 0.473884] [<ffffffff815f38e0>] ? gs_change+0x13/0x13 The filesystem is labeled "cloudimg-rootfs", inside the image both /etc/fstab and /boot/grub/grub.cfg always refer to the image by the label, everything seems to be correct, yet the kernel says it can't find the root file system. I've spent many hours googling, but nothing came out. I've asked on #ubuntu-server, but nobody knew what to do. I've asked on #eucalyptus but got no answer at all. Any ideas on why this is happening and how to solve it? Thanks EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. The first problem is that the Kernel in the image is a -generic kernel, while I suppose it should be a -virtual one. I chrooted into the image, removed the -generic packages, replaced it with the -virtual ones. Then I extracted the new kernel (and replaced the original one (-generic) that came with the tarball) because I need it when I publish and launch an image with Eucalyptus. The problem described above was solved. But then, the console started showing this: mount: mount point ext4 does not exist If you check the /etc/fstab file in the image, it says: LABEL=cloudimg-rootfs ext4 defaults 0 1 Damnt, where's my mount point? Note that it is missing /proc as well. Well, when you think it is over, you will notice that your instance will have no network connectivity. Let's check /etc/network/interface: # interfaces(5) file used by ifup(8) and ifdown(8) auto lo iface lo inet loopback Oh my! It is missing eth0... here I stopped. I can't take no more. I give up. Looks like Canonical has just forgotten to properly set up this image. 
At first, I thought: "have I downloaded a server image by mistake?", but no, I double checked. It really is the cloud image; it even has "cloud-init" installed (which is not, by default, on server images). They just forgot to prepare it. I will file a bug (and reference it here once this is done), and hope they fix it soon! EDIT 2: it looks like the network configuration was the last thing missing. I decided to test it with the fixes above, and it booted properly! However, I haven't got the slightest idea if the image is now good to go...
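
    For anyone hitting the same "Unable to mount root fs" panic, a condensed sketch of the workaround described above (swapping the -generic kernel inside the image for a -virtual one) could look like the following. The image file name is an assumption, the kernel package version is taken from the panic message, and apt-get inside the chroot needs working DNS (or copy the packages in by hand):

        mkdir mnt_tmp
        sudo mount -o loop oneiric-server-cloudimg-amd64.img mnt_tmp
        sudo mount --bind /proc mnt_tmp/proc
        sudo mount --bind /dev mnt_tmp/dev
        sudo chroot mnt_tmp apt-get update
        sudo chroot mnt_tmp apt-get install -y linux-virtual
        sudo chroot mnt_tmp apt-get purge -y linux-image-3.0.0-13-generic
        # pull the PV-friendly kernel and initrd out for registration with
        # Eucalyptus as the kernel (EKI) and ramdisk (ERI)
        cp mnt_tmp/boot/vmlinuz-*-virtual mnt_tmp/boot/initrd.img-*-virtual .
        sudo umount mnt_tmp/dev mnt_tmp/proc mnt_tmp

    The same chroot is also the convenient place to repair the broken /etc/fstab and /etc/network/interfaces entries mentioned above before republishing the tarball.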

    Read the article

  • Deployment Workbench no longer available after PXE boot

    - by Patrick
    Our build process revolves around windows Deployment Workbench. Unfortunately this was setup by someone who is no longer with the company, and no-one has ever dared/needed to make any changes. The other day it stopped working. It turns out that one of our build guys started thinking about changing some stuff in it, clicked something and now it no longer works (He is saying now that he right clicked on the 'LAB' entry in 'Deployment points' and hit 'Update', which took some time to run through apparently). The job has fallen on me to resolve and frankly I'm not sure what I'm doing. I was wondering if someone with more experience than me can provide some pointers as to troubleshooting cos I'm feeling quite a lot in the dark here. On the server I have Deployment Workbench up and running (MMC snapin) version 3.0. There is a WDS service that appears to be running ok, as does the tFTPd service. Nothing specific to this in event logs. From the client side; PXE boot works and gets you to the Win PE launch, and it has the correct company logo as the background (proving to me that its loading win PE from the network). WPEINIT runs, and asks for domain credentials, here the team simply put User/Pass/Domain in the boxes and click ok. Normally the build would kick off. Instead they get an error message saying that the \NATBLU01\Distribution$ share isn't available. Checking \NATBLU01\Distribution$ shows that its there and accessible over the network. Security/permissions seem ok, even 'ANONYMOUS LOGON' has read access to that share so I don't see that being a problem. Digging the trace files from C:\MININT\SMSOSD\OSDLOGS\ after an attempt to run the build I can see an error saying much the same - <![LOG[Validating connection to \\NATBLU01\Distribution$]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch"> <![LOG[FindFile: The file OSDConnectToUNC.exe could not be found in any standard locations.]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch"> <![LOG[The network location cannot be reached. For information about network troubleshooting, see Windows Help.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch"> <![LOG[ERROR - Unable to map a network drive to \\NATBLU01\Distribution$.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch"> BDD.LOG shows much the same. Full copies of the .LOG files can both files be found here : BDD.LOG LITETOUCH.LOG I can get to a command prompt from the Win PE that boots from PXE, however there isn't any network stuff there. IPCONFIG returns nothing so none of the tests I would usually run resolve anything. I'm at a loss frankly. I did wonder if I could perhaps start a new build process but if the change to the DeploymentWorkbench has knocked it offline I don't think I'm going to be able to create a new deployment. Failing that; we do have a deployment point labeled type 'Media' which appears to be a DVD ISO image of one of the builds, but its dated 2008, is it possible to export the network build to .ISO and build from DVD? We are looking at new hardware to run this from anyway (for the impending Windows 7 roll out) so a temporary work round isn't going to be too much of a problem. All assistance is appreciated! EDIT : OK. Got it working again. Solution was close to Newmanth's idea. 
The problem was that our PE image didn't appear to be connecting to the network. I had an older copy of the PE boot.WIM on a stick that I had been using for other purposes. I booted that and correctly got a network connection: it showed a correct internal IP and could ping out, etc. However I was still getting the same errors in all the logs and when wpeinit was running. What I did separately was to update the PE image that Deployment Workbench was pushing out to display a different background. I wanted to prove that I was working in the correct place. Turns out that I wasn't. I went and looked at the other deployment stuff we had on this machine: Windows Deployment Services was installed, and although all the install images were offline, the boot image was online, so I uploaded the copy from my stick to that. Booted straight off. And fixed. Working. Yay! For anyone stumbling across this in the future, you may find that although your deployment images are located in the Deployment Workbench, the Win PE boot image you are launching from is located in the associated Windows Deployment Services images.
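
    For anyone debugging a similar LiteTouch failure, the quickest checks from the Win PE command prompt are to confirm the network is actually up and that the deployment share can be mapped by hand; the drive letter and account below are placeholders:

        wpeinit
        ipconfig /all
        net use Z: \\NATBLU01\Distribution$ /user:DOMAIN\builduser *
        dir Z:\

    If ipconfig shows no adapter at all (as it did here), the boot image either lacks a NIC driver or is simply a stale/wrong boot.wim, and replacing the boot image in Windows Deployment Services, as described above, or injecting the correct driver into it, is the fix.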

    Read the article

  • Server hung with "blocked for more than 120 seconds", diskless

    - by alterpub
    I have server which hung every 2-5 days. dmesg show following situation: kernel: [490894.231753] INFO: task munin-html:10187 blocked for more than 120 seconds. kernel: [490894.231799] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kernel: [490894.231843] munin-html D 0000000000000000 0 10187 9796 0x00000000 kernel: [490894.231878] ffff88063174b968 0000000000000082 ffff88063174b8d8 0000000000015b80 kernel: [490894.231930] ffff88063174bfd8 0000000000015b80 ffff88063174bfd8 ffff88062fe644d0 kernel: [490894.231982] 0000000000015b80 0000000000015b80 ffff88063174bfd8 0000000000015b80 kernel: [490894.232033] Call Trace: kernel: [490894.232059] [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50 kernel: [490894.232089] [<ffffffff815a1f13>] io_schedule+0x73/0xc0 kernel: [490894.232115] [<ffffffff8117fd25>] sync_buffer+0x45/0x50 kernel: [490894.232143] [<ffffffff815a258f>] __wait_on_bit+0x5f/0x90 kernel: [490894.232170] [<ffffffff8117fce0>] ? sync_buffer+0x0/0x50 kernel: [490894.232197] [<ffffffff815a2638>] out_of_line_wait_on_bit+0x78/0x90 kernel: [490894.232227] [<ffffffff81080250>] ? wake_bit_function+0x0/0x40 kernel: [490894.232255] [<ffffffff8117fcd6>] __wait_on_buffer+0x26/0x30 kernel: [490894.232288] [<ffffffffa00131be>] squashfs_read_data+0x1be/0x520 [squashfs] kernel: [490894.232320] [<ffffffff8114f0f1>] ? __mem_cgroup_try_charge+0x71/0x450 kernel: [490894.232350] [<ffffffffa0013963>] squashfs_cache_get+0x1c3/0x320 [squashfs] kernel: [490894.232381] [<ffffffffa00136eb>] ? squashfs_copy_data+0x10b/0x130 [squashfs] kernel: [490894.232426] [<ffffffff815a3dbe>] ? _raw_spin_lock+0xe/0x20 kernel: [490894.232454] [<ffffffffa0013b68>] ? squashfs_read_metadata+0x48/0xf0 [squashfs] kernel: [490894.232499] [<ffffffffa0013ae1>] squashfs_get_datablock+0x21/0x30 [squashfs] kernel: [490894.232544] [<ffffffffa0015026>] squashfs_readpage+0x436/0x4a0 [squashfs] kernel: [490894.232575] [<ffffffff8111a375>] ? __inc_zone_page_state+0x35/0x40 kernel: [490894.232606] [<ffffffff8110d072>] __do_page_cache_readahead+0x172/0x210 kernel: [490894.232636] [<ffffffff8110d131>] ra_submit+0x21/0x30 kernel: [490894.232662] [<ffffffff811045f3>] filemap_fault+0x3f3/0x450 kernel: [490894.232691] [<ffffffff812bd156>] ? prio_tree_insert+0x256/0x2b0 kernel: [490894.232726] [<ffffffffa009225d>] aufs_fault+0x11d/0x170 [aufs] kernel: [490894.232755] [<ffffffff8111f6d4>] __do_fault+0x54/0x560 kernel: [490894.232782] [<ffffffff81122f39>] handle_mm_fault+0x1b9/0x440 kernel: [490894.232811] [<ffffffff811286f5>] ? do_mmap_pgoff+0x335/0x380 kernel: [490894.232840] [<ffffffff815a7af5>] do_page_fault+0x125/0x350 kernel: [490894.232867] [<ffffffff815a4675>] page_fault+0x25/0x30 Os info cat /etc/issue.net Ubuntu 10.04.2 LTS uname -a Linux Shard1Host3 2.6.35-32-server #68~lucid1-Ubuntu SMP Wed Mar 28 18:33:00 UTC 2012 x86_64 GNU/Linux This system load via ltsp and hasn't harddrives also it has a lot of memory 24Gb. 
free -m total used free shared buffers cached Mem: 24152 17090 7061 0 50 494 -/+ buffers/cache: 16545 7607 Swap: 0 0 0 I put vmstat info here but I think it won't give results after reboot vmstat 1 30 procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 7231156 52196 506400 0 0 0 0 172 143 7 0 92 0 1 0 0 7231024 52196 506400 0 0 0 0 7859 16233 5 0 94 0 0 0 0 7231024 52196 506400 0 0 0 0 7870 16446 2 0 98 0 0 0 0 7230900 52196 506400 0 0 0 0 7308 15661 5 0 95 0 0 0 0 7231100 52196 506400 0 0 0 0 7960 16543 6 0 94 0 0 0 0 7231100 52196 506400 0 0 0 0 7542 16047 5 1 94 0 3 0 0 7231100 52196 506400 0 0 0 0 7709 16621 3 0 96 0 0 0 0 7231220 52196 506400 0 0 0 0 7857 16552 4 0 96 0 0 0 0 7231220 52196 506400 0 0 0 0 7192 15491 6 0 94 0 0 0 0 7231220 52196 506400 0 0 0 0 7423 15792 5 1 94 0 1 0 0 7231260 52196 506404 0 0 0 0 7686 16296 2 0 98 0 0 0 0 7231260 52196 506404 0 0 0 0 6976 15183 5 0 95 0 0 0 0 7231260 52196 506404 0 0 0 0 7303 15600 4 0 95 0 0 0 0 7231320 52196 506404 0 0 0 0 7967 16241 1 0 98 0 0 0 0 7231444 52196 506404 0 0 0 0 6948 15113 6 0 94 0 0 0 0 7231444 52196 506404 0 0 0 0 7931 16181 6 0 94 0 1 0 0 7231516 52196 506404 0 0 0 0 7715 15829 6 0 94 0 0 0 0 7231516 52196 506404 0 0 0 0 7771 16036 2 0 97 0 0 0 0 7231268 52196 506404 0 0 0 0 7782 16202 6 0 94 0 1 0 0 7231212 52196 506404 0 0 0 0 7457 15622 4 0 96 0 0 0 0 7231212 52196 506404 0 0 0 0 7573 16045 2 0 98 0 2 0 0 7231216 52196 506404 0 0 0 0 7689 16076 6 0 94 0 0 0 0 7231424 52196 506404 0 0 0 0 7429 15650 4 0 95 0 3 0 0 7231424 52196 506404 0 0 0 0 7534 16168 3 0 97 0 1 0 0 7230548 52196 506404 0 0 0 0 8559 15926 7 1 92 0 0 0 0 7230672 52196 506404 0 0 0 0 7720 15905 2 0 98 0 0 0 0 7230548 52196 506404 0 0 0 0 7677 16313 5 0 95 0 1 0 0 7230676 52196 506404 0 0 0 0 7209 15432 5 0 95 0 0 0 0 7230800 52196 506404 0 0 0 0 7522 15861 2 0 98 0 0 0 0 7230552 52196 506404 0 0 0 0 7760 16661 5 0 95 0 In the munin(monitoring system) graphs I see(before server hung): Disk(nbd0) IOs per device: read: 289m but avg by week 2.09m Disk(nbd0) throughput per device: read: 4.73k but avg by week 108.76 Disk(nbd0) utilization per device: 100% but avg by week 1.2% Eth0 traffic was low: in/out only 2Mbps Number of threads increased to 566 usually 392 Fork rate 1.08 but usually 2.82 VMStat(processes state) Increased to 17.77(from 0 as far as I could see in the graph) CPU usage iowait 880.46% when usually 6.67% It'll be great if somebody help me to understand what's up.
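
    The trace above shows munin-html stuck in io_schedule/sync_buffer while reading squashfs through aufs, i.e. the network-served root filesystem stopped answering, which is why the memory graphs look perfectly healthy. Assuming magic-sysrq is enabled, a few things worth capturing the next time it wedges:

        # dump all blocked (D-state) tasks into the kernel log
        echo w > /proc/sysrq-trigger
        dmesg | tail -n 100
        # list processes currently stuck in uninterruptible sleep
        ps axo pid,stat,wchan:30,cmd | awk '$2 ~ /D/'

    If every hang points at the same squashfs/nbd path, the next place to look is the serving side (load on the NBD/LTSP server, or network drops between it and this client) rather than anything local to this machine.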

    Read the article

  • Twitter 2 for Android crashes every time I try uploading multiple photos [closed]

    - by Hazz
    Hello, I'm using the new Twitter 2 on Android 2.1. Whenever I hit the button which enables me to upload multiple photos in a single tweet, I always get the error "The application Camera (process com.sonyericsson.camera) has stopped unexpectidly. Please try again". However, uploading a single photo using the camera button in Twitter have no problem, it works. My phone is Sony Ericsson x10 mini pro. I tried signing out and back in, same result. Anything I can do to fix this? This is the log info I got using Log Collector: 02-23 15:05:57.328 I/ActivityManager( 1240): Starting activity: Intent { act=com.twitter.android.post.status cmp=com.twitter.android/.PostActivity } 02-23 15:05:57.338 D/PhoneWindow(15095): couldn't save which view has focus because the focused view com.android.internal.policy.impl.PhoneWindow$DecorView@45726938 has no id. 02-23 15:05:57.688 I/ActivityManager( 1240): Displayed activity com.twitter.android/.PostActivity: 340 ms (total 340 ms) 02-23 15:05:59.018 I/ActivityManager( 1240): Starting activity: Intent { act=android.intent.action.PICK typ=vnd.android.cursor.dir/image cmp=com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity } 02-23 15:05:59.038 I/ActivityManager( 1240): Start proc com.sonyericsson.camera for activity com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity: pid=15113 uid=10057 gids={1006, 1015, 3003} 02-23 15:05:59.128 I/dalvikvm(15113): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38) 02-23 15:05:59.158 I/dalvikvm(15113): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=50) 02-23 15:05:59.448 I/ActivityManager( 1240): Displayed activity com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity: 423 ms (total 423 ms) 02-23 15:05:59.458 W/dalvikvm(15113): threadid=15: thread exiting with uncaught exception (group=0x4001e160) 02-23 15:05:59.458 E/AndroidRuntime(15113): Uncaught handler: thread AsyncTask #1 exiting due to uncaught exception 02-23 15:05:59.468 E/AndroidRuntime(15113): java.lang.RuntimeException: An error occured while executing doInBackground() 02-23 15:05:59.468 E/AndroidRuntime(15113): at android.os.AsyncTask$3.done(AsyncTask.java:200) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask$Sync.innerSetException(FutureTask.java:273) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask.setException(FutureTask.java:124) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:307) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask.run(FutureTask.java:137) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1068) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:561) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.lang.Thread.run(Thread.java:1096) 02-23 15:05:59.468 E/AndroidRuntime(15113): Caused by: java.lang.IllegalArgumentException: Unsupported MIME type. 
02-23 15:05:59.468 E/AndroidRuntime(15113): at com.sonyericsson.album.grid.GridActivity$AlbumTask.doInBackground(GridActivity.java:202) 02-23 15:05:59.468 E/AndroidRuntime(15113): at com.sonyericsson.album.grid.GridActivity$AlbumTask.doInBackground(GridActivity.java:124) 02-23 15:05:59.468 E/AndroidRuntime(15113): at android.os.AsyncTask$2.call(AsyncTask.java:185) 02-23 15:05:59.468 E/AndroidRuntime(15113): at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:305) 02-23 15:05:59.468 E/AndroidRuntime(15113): ... 4 more 02-23 15:05:59.628 E/SemcCheckin(15113): Get crash dump level : java.io.FileNotFoundException: /data/semc-checkin/crashdump 02-23 15:05:59.628 W/ActivityManager( 1240): Unable to start service Intent { act=com.sonyericsson.android.jcrashcatcher.action.BUGREPORT_AUTO cmp=com.sonyericsson.android.jcrashcatcher/.JCrashCatcherService (has extras) }: not found 02-23 15:05:59.648 I/Process ( 1240): Sending signal. PID: 15113 SIG: 3 02-23 15:05:59.648 I/dalvikvm(15113): threadid=7: reacting to signal 3 02-23 15:05:59.778 I/dalvikvm(15113): Wrote stack trace to '/data/anr/traces.txt' 02-23 15:06:00.388 E/SemcCheckin( 1673): Get Crash Level : java.io.FileNotFoundException: /data/semc-checkin/crashdump 02-23 15:06:01.708 I/DumpStateReceiver( 1240): Added state dump to 1 crashes 02-23 15:06:02.008 D/iddd-events( 1117): Registering event com.sonyericsson.idd.probe.android.devicemonitor::ApplicationCrash with 4314 bytes payload. 02-23 15:06:06.968 D/dalvikvm( 1673): GC freed 661 objects / 126704 bytes in 124ms 02-23 15:06:11.928 D/dalvikvm( 1379): GC freed 19753 objects / 858832 bytes in 84ms 02-23 15:06:13.038 I/Process (15113): Sending signal. PID: 15113 SIG: 9 02-23 15:06:13.048 I/WindowManager( 1240): WIN DEATH: Window{4596ecc0 com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity paused=false} 02-23 15:06:13.048 I/ActivityManager( 1240): Process com.sonyericsson.camera (pid 15113) has died. 02-23 15:06:13.048 I/WindowManager( 1240): WIN DEATH: Window{459db5e8 com.sonyericsson.camera/com.sonyericsson.album.grid.GridActivity paused=false} 02-23 15:06:13.078 I/UsageStats( 1240): Unexpected resume of com.twitter.android while already resumed in com.sonyericsson.camera 02-23 15:06:13.098 W/InputManagerService( 1240): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy@456e7168 02-23 15:06:21.278 D/dalvikvm( 1745): GC freed 2032 objects / 410848 bytes in 60ms
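
    The crash is inside Sony Ericsson's album application (process com.sonyericsson.camera), which receives an ACTION_PICK intent with the MIME type vnd.android.cursor.dir/image and throws "Unsupported MIME type", so there is little Twitter itself can do about it. With adb available on a PC, a cleaner crash capture to attach to a bug report against the Sony Ericsson gallery would be:

        adb logcat -c
        adb logcat AndroidRuntime:E ActivityManager:I *:S

    Reproduce the crash while the second command is running. In the meantime, installing a third-party gallery that registers for the same picker intent, or waiting for a firmware update to the stock album app, are the realistic workarounds.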

    Read the article

  • setting up Ubuntu 10.10 as paravirtualized guest in Xen on RHEL5 host - what kernel?

    - by kostmo
    I've discovered the tool ubuntu-vm-builder, which I've installed and then invoked on an Ubuntu workstation as: sudo vmbuilder xen ubuntu --suite maverick --flavour virtual --arch amd64 --mem=512 --rootsize 8192 This workstation is not the intended target host of the virtual machine, however; I would like to host the guest on a Red Hat Enterprise Linux 5 machine that is running Xen 3.0.3. The output of this command appears to be a folder named ubuntu-xen containing three files: tmpXXXXXX, a very large file which I assume is the root partition image tmpYYYYYY, a somewhat large file which I assume is the swap partition image xen.conf, a text file I have copied the xen.conf file to the RHEL server's /etc/xen directory under the new name newvm, adjusting the paths of tempXXXXXX and tempYYYYYYin the file after also copying them from my local workstation to the RHEL server. When I launch the Virtual Machine Manager virt-manager, I can see the newvm virtual machine listed underneath the Dom0 machine. When I try to start newvm, I get the error: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: None') Indeed, there exists an entry kernel = 'None' in the xen.conf file. How do I find out what the path of the kernel should be? Is this path supposed to be to a kernel stored on the local filesystem of the RHEL5 host, or is it supposed to be a path inside the guest image? I see that the vmbuilder command provides for a --xen-kernel option, along with a --xen-ramdisk option, but I'm not sure what to use for either. I think I should be able to get this to work, since Ubuntu is said to be supported as a Xen guest, even though the Xen 4.0.1 docs state support for only a limited set of distributions, Ubuntu excluded. Update 1 When running vmbuilder on my local workstation, I did observe an output line saying: Calling hook: install_kernel and later, output lines saying: update-initramfs: Generating /boot/initrd.img-2.6.35-23-virtual [...] run-parts: executing /etc/kernel/postinst.d/initramfs-tools 2.6.35-23-virtual /boot/vmlinuz-2.6.35-23-virtual So in the xen.conf file, I tried setting the lines: kernel = '/boot/vmlinuz-2.6.35-23-virtual' ramdisk = '/boot/initrd.img-2.6.35-23-virtual' When trying to start the VM, I got an error similar to last time: Error starting domain: virDomainCreate() failed POST operation failed: (xend.err 'Error creating domain: Kernel image does not exist: /boot/vmlinuz-2.6.35-23-virtual') This makes me think that the RHEL5 machine is looking for local files, rather than a file within the binary guest disk image. After running sudo updatedb on my workstation, neither of those files were found. If the vmbuilder tool had tried to install them, it must have failed. Update 2 I was able to extract the kernel and initrd images from the guest disk binary by mounting it: mkdir mnt_tmp sudo mount ubuntu-xen/tmpXXXXXX mnt_tmp/ -o loop cp mnt_tmp/boot/vmlinuz-2.6.35-23-virtual virtual_kernel_ubuntu cp mnt_tmp/boot/initrd.img-2.6.35-23-virtual virtual_initrd_ubuntu These two files I copied to the RHEL5 server, and edited the xen.conf file to point to them as kernel and ramdisk. With this done, I could "run" the newvm virtual machine from within virt-manager, but was met with the message Console Not Configured For Guest when I double clicked the entry to open the Virtual Machine Console. 
As suggested by a forum, I then added the line vfb = [ 'type=vnc' ] to the configuration file, recreated the virtual machine (a ~10 min process), and this time got the message: Connecting to console for guest This remained indefinitely; after selecting View - Serial Console, I found a kernel panic: [5442621.272173] Kernel panic - not syncing: Attempted to kill the idle task! [5442621.272179] Pid: 0, comm: swapper Tainted: G D 2.6.35-23-virtual #41-Ubuntu [5442621.272184] Call Trace: [5442621.272191] [<ffffffff815a1b81>] panic+0x90/0x111 [5442621.272199] [<ffffffff810652ee>] do_exit+0x3be/0x3f0 [5442621.272204] [<ffffffff815a5e20>] oops_end+0xb0/0xf0 [5442621.272211] [<ffffffff8100ddeb>] die+0x5b/0x90 [5442621.272216] [<ffffffff815a56c4>] do_trap+0xc4/0x170 [5442621.272221] [<ffffffff8100ba35>] do_invalid_op+0x95/0xb0 [5442621.272227] [<ffffffff8130851c>] ? intel_idle+0xac/0x180 [5442621.272232] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1 [5442621.272239] [<ffffffff815a48fe>] ? _raw_spin_unlock_irqrestore+0x1e/0x30 [5442621.272247] [<ffffffff8108dfb7>] ? tick_broadcast_oneshot_control+0xc7/0x120 [5442621.272253] [<ffffffff8100ad5b>] invalid_op+0x1b/0x20 [5442621.272259] [<ffffffff8130851c>] ? intel_idle+0xac/0x180 [5442621.272264] [<ffffffff813084e0>] ? intel_idle+0x70/0x180 [5442621.272269] [<ffffffff810072bf>] ? xen_restore_fl_direct_end+0x0/0x1 [5442621.272275] [<ffffffff8148a147>] cpuidle_idle_call+0xa7/0x140 [5442621.272281] [<ffffffff81008d93>] cpu_idle+0xb3/0x110 [5442621.272286] [<ffffffff815873aa>] rest_init+0x8a/0x90 [5442621.272291] [<ffffffff81b04c9d>] start_kernel+0x387/0x390 [5442621.272297] [<ffffffff81b04341>] x86_64_start_reservations+0x12c/0x130 [5442621.272303] [<ffffffff81b08002>] xen_start_kernel+0x55d/0x561 Update 3 I tried an i386 architecture instead of amd64, but got the same kernel panic. Also, it seems the Virtual Machine Manager pays attention to the format of the filename of the kernel; for the same kernel binary, I tried simply naming it vmlinuz-virtual, which threw out an error box about an invalid kernel. When I named it vmlinuz-2.6.35-23-virtual, it did not throw the error, but it did still result in the kernel panic shortly thereafter.
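
    For reference, once the kernel and initrd have been pulled out of the guest image, the /etc/xen config for a file-backed PV guest on a RHEL5/Xen 3.0.3 host typically ends up looking roughly like the sketch below; the kernel/ramdisk locations, disk paths, device names and bridge name are assumptions and should match the files extracted above and whatever vmbuilder wrote into its xen.conf:

        name    = "newvm"
        memory  = 512
        kernel  = "/boot/vmlinuz-2.6.35-23-virtual"
        ramdisk = "/boot/initrd.img-2.6.35-23-virtual"
        disk    = ['tap:aio:/var/lib/xen/images/tmpXXXXXX,xvda1,w',
                   'tap:aio:/var/lib/xen/images/tmpYYYYYY,xvda2,w']
        root    = "/dev/xvda1 ro"
        extra   = "console=hvc0"
        vif     = ['bridge=xenbr0']
        vfb     = ['type=vnc']

    That said, the panic in intel_idle during early boot suggests the 2.6.35 pvops guest kernel and the very old 3.0.3 hypervisor simply do not cooperate; pinning an older domU kernel known to work on Xen 3.0.x, or upgrading the host's Xen, is likely to be more productive than further config tweaks.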

    Read the article

  • Xen domUs crash or become unavailable

    - by Rush
    I've xen server with 8 domU. Server is Xeon E31270 with 16gb ram. I think it is enough for 8 machines. Sometimes domU's crashes and i can't figure out the reason. After crash i can connect to console and there is somthing like this: Oct 8 22:20:49 server kernel: [30892.320780] lowmem_reserve[]: 0 0 0 0 Oct 8 22:20:49 server kernel: [30892.320790] Node 0 DMA: 10*4kB 3*8kB 13*16kB 10*32kB 7*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 2*2048kB 0*4096kB = 8080kB Oct 8 22:20:49 server kernel: [30892.320817] Node 0 DMA32: 648*4kB 2*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 5760kB Oct 8 22:20:49 server kernel: [30892.320842] 1491 total pagecache pages Oct 8 22:20:49 server kernel: [30892.320847] 0 pages in swap cache Oct 8 22:20:49 server kernel: [30892.320852] Swap cache stats: add 0, delete 0, find 0/0 Oct 8 22:20:49 server kernel: [30892.320858] Free swap = 0kB Oct 8 22:20:49 server kernel: [30892.320862] Total swap = 0kB Oct 8 22:20:49 server kernel: [30892.324024] 524288 pages RAM Oct 8 22:20:49 server kernel: [30892.324024] 11010 pages reserved Oct 8 22:20:49 server kernel: [30892.324024] 424467 pages shared Oct 8 22:20:49 server kernel: [30892.324024] 503538 pages non-shared Oct 8 22:20:49 server kernel: [30892.330308] apache2 invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0 Oct 8 22:20:49 server kernel: [30892.330322] apache2 cpuset=/ mems_allowed=0 Oct 8 22:20:49 server kernel: [30892.330330] Pid: 23938, comm: apache2 Not tainted 2.6.32-5-xen-amd64 #1 Oct 8 22:20:49 server kernel: [30892.330337] Call Trace: Oct 8 22:20:49 server kernel: [30892.330349] [<ffffffff810b7180>] ? oom_kill_process+0x7f/0x23f Oct 8 22:20:49 server kernel: [30892.330358] [<ffffffff810b76a4>] ? __out_of_memory+0x12a/0x141 Oct 8 22:20:49 server kernel: [30892.330367] [<ffffffff810b77fb>] ? out_of_memory+0x140/0x172 Oct 8 22:20:49 server kernel: [30892.330376] [<ffffffff810bb59c>] ? __alloc_pages_nodemask+0x4e5/0x5f5 Oct 8 22:20:49 server kernel: [30892.330385] [<ffffffff810cc224>] ? do_wp_page+0x386/0x707 Oct 8 22:20:49 server kernel: [30892.330395] [<ffffffff8100c3a5>] ? __raw_callee_save_xen_pud_val+0x11/0x1e Oct 8 22:20:49 server kernel: [30892.330404] [<ffffffff8100c369>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e Oct 8 22:20:49 server kernel: [30892.330412] [<ffffffff810cdfc7>] ? handle_mm_fault+0x7aa/0x80f Oct 8 22:20:49 server kernel: [30892.330422] [<ffffffff8130f906>] ? do_page_fault+0x2e0/0x2fc Oct 8 22:20:49 server kernel: [30892.330433] [<ffffffff8130d7a5>] ? 
page_fault+0x25/0x30 Oct 8 22:20:49 server kernel: [30892.330439] Mem-Info: Oct 8 22:20:49 server kernel: [30892.330443] Node 0 DMA per-cpu: Oct 8 22:20:49 server kernel: [30892.330450] CPU 0: hi: 0, btch: 1 usd: 0 Oct 8 22:20:49 server kernel: [30892.330463] CPU 1: hi: 0, btch: 1 usd: 0 Oct 8 22:20:49 server kernel: [30892.330466] Node 0 DMA32 per-cpu: Oct 8 22:20:49 server kernel: [30892.330469] CPU 0: hi: 186, btch: 31 usd: 0 Oct 8 22:20:49 server kernel: [30892.330472] CPU 1: hi: 186, btch: 31 usd: 60 Oct 8 22:20:49 server kernel: [30892.330476] active_anon:342076 inactive_anon:115398 isolated_anon:0 Oct 8 22:20:49 server kernel: [30892.330477] active_file:268 inactive_file:481 isolated_file:0 Oct 8 22:20:49 server kernel: [30892.330477] unevictable:1125 dirty:2 writeback:13 unstable:0 Oct 8 22:20:49 server kernel: [30892.330478] free:3410 slab_reclaimable:1718 slab_unreclaimable:6946 Oct 8 22:20:49 server kernel: [30892.330478] mapped:899 shmem:113 pagetables:35697 bounce:0 Oct 8 22:20:49 server kernel: [30892.330502] Node 0 DMA free:8036kB min:32kB low:40kB high:48kB active_anon:1144kB inactive_anon:1268kB active_file:8kB inactive_file:8kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:11792kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:224kB kernel_stack:16kB pagetables:1228kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Oct 8 22:20:49 server kernel: [30892.330518] lowmem_reserve[]: 0 2004 2004 2004 Oct 8 22:20:49 server kernel: [30892.330523] Node 0 DMA32 free:5604kB min:5708kB low:7132kB high:8560kB active_anon:1367160kB inactive_anon:460324kB active_file:1064kB inactive_file:1916kB unevictable:4500kB isolated(anon):0kB isolated(file):0kB present:2052320kB mlocked:4500kB dirty:8kB writeback:52kB mapped:3600kB shmem:452kB slab_reclaimable:6872kB slab_unreclaimable:27560kB kernel_stack:3528kB pagetables:141560kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:992 all_unreclaimable? no Oct 8 22:20:49 server kernel: [30892.330539] lowmem_reserve[]: 0 0 0 0 Oct 8 22:20:49 server kernel: [30892.330544] Node 0 DMA: 1*4kB 2*8kB 13*16kB 10*32kB 7*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 2*2048kB 0*4096kB = 8036kB Oct 8 22:20:49 server kernel: [30892.330579] Node 0 DMA32: 609*4kB 2*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 5604kB Oct 8 22:20:49 server kernel: [30892.330605] 1522 total pagecache pages Oct 8 22:20:49 server kernel: [30892.330610] 0 pages in swap cache Oct 8 22:20:49 server kernel: [30892.330615] Swap cache stats: add 0, delete 0, find 0/0 Oct 8 22:20:49 server kernel: [30892.330621] Free swap = 0kB Oct 8 22:20:49 server kernel: [30892.330625] Total swap = 0kB Oct 8 22:20:49 server kernel: [30892.333018] 524288 pages RAM Oct 8 22:20:49 server kernel: [30892.333018] 11010 pages reserved Oct 8 22:20:49 server kernel: [30892.333018] 424367 pages shared Oct 8 22:20:49 server kernel: [30892.333018] 503658 pages non-shared Seems like there isn't enough memory for this domU. But there is no any memory problems reported in munin monitoring: As you see system uses around 0.2G and 1G is available. So my question is: Is it xen specific problem, that real memory usage and memory usage that shows munin are different (I've never seen such problems oh real hardware machines)? Or maybe it is just monitoring problem, that can't catch moment when there is unusual high load and domU go down? And how I can to defeat this problem? 
It is really annoying to get e-mail alerts that a domU went down. Btw, the same thing happened when the domU had 2G of memory.
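
    The traces show the OOM killer firing with "Free swap = 0kB", so the domU genuinely ran out of memory during a spike that a five-minute munin sample will never show (the graph is an average, not a peak), which explains the mismatch. Giving the affected domUs some swap and a finer-grained recorder both softens the spikes and leaves evidence behind; the size and path below are only examples:

        # inside the domU: add a 2 GB swap file
        dd if=/dev/zero of=/swapfile bs=1M count=2048
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile
        echo '/swapfile none swap sw 0 0' >> /etc/fstab
        # record memory usage history so the offending process is visible afterwards
        apt-get install atop sysstat

    With swap in place the apache2 workers slow down instead of being killed outright, and the atop/sar history will show what ballooned right before each incident.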

    Read the article

  • Page allocation failures on iSCSI storage

    - by Dave
    We have a CentOS 6.3 iscsi server (16GB RAM) running on Infiniband bus (ipoib). When the load is high I can see multiple errors: Sep 3 23:22:20 stor4 kernel: tgtd: page allocation failure. order:2, mode:0x20 Sep 3 23:22:20 stor4 kernel: Pid: 3637, comm: tgtd Not tainted 2.6.32 #1 Sep 3 23:22:20 stor4 kernel: Call Trace: Sep 3 23:22:20 stor4 kernel: [] ? __alloc_pages_nodemask+0x77f/0x940 Sep 3 23:22:20 stor4 kernel: [] ? kmem_getpages+0x62/0x170 Sep 3 23:22:20 stor4 kernel: [] ? fallback_alloc+0x1ba/0x270 Sep 3 23:22:20 stor4 kernel: [] ? cache_grow+0x2cf/0x320 Sep 3 23:22:20 stor4 kernel: [] ? ____cache_alloc_node+0x99/0x160 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __kmalloc+0x189/0x220 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __pskb_pull_tail+0x2aa/0x360 Sep 3 23:22:20 stor4 kernel: [] ? tcp_init_tso_segs+0x37/0x50 Sep 3 23:22:20 stor4 kernel: [] ? dev_queue_xmit+0x4bb/0x6f0 Sep 3 23:22:20 stor4 kernel: [] ? neigh_connected_output+0xbd/0x100 Sep 3 23:22:20 stor4 kernel: [] ? ip_finish_output+0x237/0x310 Sep 3 23:22:20 stor4 kernel: [] ? ip_output+0xb8/0xc0 Sep 3 23:22:20 stor4 kernel: [] ? __ip_local_out+0x9f/0xb0 Sep 3 23:22:20 stor4 kernel: [] ? ip_local_out+0x25/0x30 Sep 3 23:22:20 stor4 kernel: [] ? ip_queue_xmit+0x190/0x420 Sep 3 23:22:20 stor4 kernel: [] ? sock_aio_write+0x167/0x180 Sep 3 23:22:20 stor4 kernel: [] ? tcp_transmit_skb+0x3fe/0x7b0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_write_xmit+0x1fb/0xa20 Sep 3 23:22:20 stor4 kernel: [] ? __tcp_push_pending_frames+0x30/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_push_pending_frames+0x33/0x40 Sep 3 23:22:20 stor4 kernel: [] ? do_tcp_setsockopt+0x3d6/0x480 Sep 3 23:22:20 stor4 kernel: [] ? tcp_setsockopt+0x2a/0x30 Sep 3 23:22:20 stor4 kernel: [] ? sock_common_setsockopt+0x14/0x20 Sep 3 23:22:20 stor4 kernel: [] ? sys_setsockopt+0x7f/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? 
system_call_fastpath+0x16/0x1b Sep 3 23:22:20 stor4 kernel: Mem-Info: Sep 3 23:22:20 stor4 kernel: Node 0 DMA per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 23 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 181 Sep 3 23:22:20 stor4 kernel: Node 0 Normal per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 171 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 29 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: active_anon:1875 inactive_anon:2473 isolated_anon:0 Sep 3 23:22:20 stor4 kernel: active_file:1243637 inactive_file:2505055 isolated_file:0 Sep 3 23:22:20 stor4 kernel: unevictable:0 dirty:268338 writeback:0 unstable:0 Sep 3 23:22:20 stor4 kernel: free:86050 slab_reclaimable:132377 slab_unreclaimable:23744 Sep 3 23:22:20 stor4 kernel: mapped:1293 shmem:222 pagetables:720 bounce:0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA free:15732kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15332kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 2172 16060 16060 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 free:107544kB min:18268kB low:22832kB high:27400kB active_anon:468kB inactive_anon:2364kB active_file:566208kB inactive_file:976112kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2224900kB mlocked:0kB dirty:96816kB writeback:0kB mapped:908kB shmem:12kB slab_reclaimable:176940kB slab_unreclaimable:968kB kernel_stack:64kB pagetables:192kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 13887 13887 Sep 3 23:22:20 stor4 kernel: Node 0 Normal free:220924kB min:116772kB low:145964kB high:175156kB active_anon:7032kB inactive_anon:7528kB active_file:4408340kB inactive_file:9044108kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:14220800kB mlocked:0kB dirty:976536kB writeback:0kB mapped:4264kB shmem:876kB slab_reclaimable:352568kB slab_unreclaimable:94008kB kernel_stack:2048kB pagetables:2688kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 0 0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA: 1*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15732kB Sep 3 23:22:20 stor4 kernel: Node 0 DMA32: 16305*4kB 4381*8kB 353*16kB 8*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 107900kB Sep 3 23:22:20 stor4 kernel: Node 0 Normal: 14548*4kB 14808*8kB 2420*16kB 31*32kB 5*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 220784kB Sep 3 23:22:20 stor4 kernel: 3748822 total pagecache pages Sep 3 23:22:20 stor4 kernel: 0 pages in swap cache Sep 3 23:22:20 stor4 kernel: Swap cache stats: add 0, delete 0, find 0/0 Sep 3 23:22:20 stor4 kernel: Free swap = 975864kB Sep 3 23:22:20 stor4 kernel: Total swap = 975864kB Sep 3 23:22:20 stor4 kernel: 4194303 pages RAM Sep 3 23:22:20 stor4 kernel: 126915 pages reserved Sep 3 23:22:20 stor4 kernel: 3753534 pages shared Sep 3 23:22:20 stor4 kernel: 213500 pages non-shared TCP stack and VM config: net.core.rmem_max = 83886080 net.core.wmem_max = 83886080 net.core.rmem_default = 65536 net.core.wmem_default = 65536 net.ipv4.tcp_rmem = 40960 1048560 4194304 net.ipv4.tcp_wmem = 40960 196608 4194304 net.ipv4.tcp_mem = 16388608 16388608 16388608 vm.min_free_kbytes=135168 Additional tweaks: /sbin/blockdev --setra 16384 /dev/sdb echo 2048 /sys/block/sdb/queue/nr_requests Where might the problem be? Thank you.

    Read the article

  • Someone tried to hack my Node.js server, need to understand a GET request in the logs

    - by Akay
    Alright, so I left my Node.js server alone for a while and came back to find some really interesting stuff in the logs. Apparently some moron from China or Poland tried to hack my server using directory traversal and what not, while it seems though he did not succeed I am unable understand few entries in the log. This is the output of a "hohup.out" file. The attack starts, apparently he is trying to find out some console entry in my server. All of which fail and return a 404. [90mGET /../../../../../../../../../../../ [31m500 [90m6ms - 2b[0m [90mGET /<script>alert(53416)</script> [33m404 [90m7ms[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m1ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET /pz3yvy3lyzgja41w2sp [33m404 [90m1ms[0m [90mGET /stylesheets/style.css [33m404 [90m0ms[0m [90mGET /index.html [33m404 [90m1ms[0m [90mGET /index.htm [33m404 [90m0ms[0m [90mGET /default.html [33m404 [90m0ms[0m [90mGET /default.htm [33m404 [90m1ms[0m [90mGET /default.asp [33m404 [90m1ms[0m [90mGET /index.php [33m404 [90m0ms[0m [90mGET /default.php [33m404 [90m1ms[0m [90mGET /index.asp [33m404 [90m0ms[0m [90mGET /index.cgi [33m404 [90m0ms[0m [90mGET /index.jsp [33m404 [90m1ms[0m [90mGET /index.php3 [33m404 [90m0ms[0m [90mGET /index.pl [33m404 [90m0ms[0m [90mGET /default.jsp [33m404 [90m0ms[0m [90mGET /default.php3 [33m404 [90m0ms[0m [90mGET /index.html.en [33m404 [90m0ms[0m [90mGET /web.gif [33m404 [90m34ms[0m [90mGET /header.html [33m404 [90m1ms[0m [90mGET /homepage.nsf [33m404 [90m1ms[0m [90mGET /homepage.htm [33m404 [90m1ms[0m [90mGET /homepage.asp [33m404 [90m1ms[0m [90mGET /home.htm [33m404 [90m0ms[0m [90mGET /home.html [33m404 [90m1ms[0m [90mGET /home.asp [33m404 [90m1ms[0m [90mGET /login.asp [33m404 [90m0ms[0m [90mGET /login.html [33m404 [90m0ms[0m [90mGET /login.htm [33m404 [90m1ms[0m [90mGET /login.php [33m404 [90m0ms[0m [90mGET /index.cfm [33m404 [90m0ms[0m [90mGET /main.php [33m404 [90m1ms[0m [90mGET /main.asp [33m404 [90m1ms[0m [90mGET /main.htm [33m404 [90m1ms[0m [90mGET /main.html [33m404 [90m2ms[0m [90mGET /Welcome.html [33m404 [90m1ms[0m [90mGET /welcome.htm [33m404 [90m1ms[0m [90mGET /start.htm [33m404 [90m1ms[0m [90mGET /fleur.png [33m404 [90m0ms[0m [90mGET /level/99/ [33m404 [90m1ms[0m [90mGET /chl.css [33m404 [90m0ms[0m [90mGET /images/ [33m404 [90m0ms[0m [90mGET /robots.txt [33m404 [90m2ms[0m [90mGET /hb1/presign.asp [33m404 [90m1ms[0m [90mGET /NFuse/ASP/login.htm [33m404 [90m0ms[0m [90mGET /CCMAdmin/main.asp [33m404 [90m1ms[0m [90mGET /TiVoConnect?Command=QueryServer [33m404 [90m1ms[0m [90mGET /admin/images/rn_logo.gif [33m404 [90m1ms[0m [90mGET /vncviewer.jar [33m404 [90m1ms[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m7ms - 240b[0m [90mOPTIONS / [32m200 [90m1ms - 3b[0m [90mTRACE / [33m404 [90m0ms[0m [90mPROPFIND / [33m404 [90m0ms[0m [90mGET /\./ [33m404 [90m1ms[0m But here is when things start getting fishy. 
[90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m1ms - 240b[0m [90mGET /robots.txt [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m3ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET http://37.28.156.211/sprawdza.php [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m [90mHEAD / [32m200 [90m1ms - 240b[0m [90mGET http://www.daydaydata.com/proxy.txt [33m404 [90m19ms[0m [90mHEAD / [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m2ms[0m [90mGET / [32m200 [90m4ms - 240b[0m [90mGET http://www.google.pl/search?q=wp.pl [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mHEAD / [32m200 [90m2ms - 240b[0m [90mGET http://www.google.pl/search?q=onet.pl [33m404 [90m1ms[0m [90mHEAD / [32m200 [90m2ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET http://www.google.pl/search?q=ostro%C5%82%C4%99ka [33m404 [90m1ms[0m [90mGET http://www.google.pl/search?q=google [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m [90mHEAD / [32m200 [90m2ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET http://www.baidu.com/ [32m200 [90m2ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mPOST /api/login [32m200 [90m1ms - 28b[0m [90mGET /web-console/ServerInfo.jsp [33m404 [90m2ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m10ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://proxyjudge.info [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m2ms - 240b[0m [90mGET / [32m200 [90m1ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m3ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m3ms - 240b[0m [90mGET http://www.baidu.com/ [32m200 [90m1ms - 240b[0m [90mGET /manager/html [33m404 [90m0ms[0m [90mGET /manager/html [33m404 [90m1ms[0m [90mGET http://www.google.com/ [32m200 [90m2ms - 240b[0m [90mHEAD / [32m200 [90m1ms - 240b[0m [90mGET http://www.google.com/ [32m200 [90m1ms - 240b[0m [90mGET http://www.google.com/search?tbo=d&source=hp&num=1&btnG=Search&q=niceman [33m404 [90m2ms[0m So my questions are, how come my server is returning a "200" OK for root level domains? How did the hacker even manage to send a GET request to my server such that "http://www.google.com" shows up in the log while my server is simply an API that works on relative URLs such as "/api/login". 
And while I have looked up the OPTIONS, TRACE and PROPFIND HTTP requests that my server logged, it would be great if someone could explain what exactly the hacker was trying to achieve by using these verbs. Also, what in the world does "[90m [32m [90m1ms - 240b[0m" mean? The "ms" makes sense, probably milliseconds for the request, but the rest I am unable to understand. Thank you!

    Read the article

  • What needs updating when moving a bootable Windows 7 (or Vista) partition?

    - by SuperTempel
    When I move a bootable NTFS partition with Windows on it to a different block offset, what needs updating to make it bootable again? In particular, here's what I tried: I have a disk with several partitions, one of which is the NTFS partition with Windows on it, and the disk uses the plain old MBR block 0 for the partitions layout (no more than 4 partitions). Now I format and partition a new, larger, disk. There I make room for the NTFS partition and copy the contents from the old disk's NTFS Windows partition into. And I make the partition "active". However, when I try to boot from this disk, I get a "read error" message immediately and the booting stops, the exact text is: A disk read error occurred Press Ctrl+Alt+Del to restart I verified that both disks have the same boot sector code in block 0. It seems to me that something else might need updating. I guess that somewhere there's a absolute block reference that I need to update, probably pointing to the next level loader or to the NT kernel. Update: I found this article going quite into the depth of what I want to know. However, it says to modify boot.ini, but I have Windows 7 installed here, where such things appear to have changed: No boot.ini but a folder called System Volume Information with GUID and other data in it that sounds related to my problem. Going to keep digging... Update 2: Thanks to the terrible looking but very informative website by starman, I was able to figure out the first step: The NTFS boot sector has a field for "hidden" sectors. This feld has to contain the sector number of the boot sector. This solves the "read error" message. Now, however, I get a "BOOTMGR is missing" error instead. Looks like there's another place where a block number has to be adjusted, but I can't find anything in the code listing about this. I do find a lot of help sites suggesting Windows tools for fixing this "BOOTMGR is missing" problem, but none seem to know what goes on behind the scenes. Kind of like suggesting to re-install Windows when there's a little problem with it. At least, those fixes seem to work, mostly involving the Bcdedit and Bootrec tools. Now, who knows what they do, especially the latter, in regards to a moved partition? Update 3: After lots of trial-and-error attempts, I believe now that the solution lies in the BCD-Template registry file, residing usually inside \Windows\System32\config. If I get this updated using the "bcdboot" command, Windows starts up from it. I am now in the middle of figuring out what information this registry contains relevant to the above question. Any pointers to the contents of this registry are welcome. Update 4: Turns out that while the BCD-Template file gets rewritten and has different binary contents than its predecessor, the values inside do not change. So it must be something else that bcdboot.exe writes. I had previously already checked if it changes the first 32 boot blocks of the partition, but they appear to remain unchanged. Parititon map doesn't get changed, either. So what is it that bcdboot modifies besides the BCD registry? Any tips on how I can trace that? Are there low level tools that show me what files a program writes to? Update 5: The answer seems to be: c:\Boot\BCD is also changed, and that appears to be the key file for the boot manager's process. I'll investigate this later... 
Update 6: It seems to be an important detail that I had originally two partitions created when I installed Windows 7: A small partition of 204800 sectors which appears to be a bootstrap partition, followed by the actual, large, partition containing the Windows system (drive C:). When I tried to transfer this installation to a new, larger, disk, I had kept the same two partitions intact on the new drive, although they ended up at a different offset. This alone led to the "BOOTMGR is missing" message. Since then, I've used bcdboot.exe only on the Windows partition, which added the \Boot\BCD file on that partition. That file (and folder) did originally only exist on the smaller partition. Hence, this problem may be more complicated in my case as one partition (the boot strapper) referred to another partition (the one containing the OS), whereas other people may only have to deal with one partition containing both, and maybe there the solution is simpler. Update 7: Found one more detail: The \Boot\BCD file records the MBR's serial number. If that number doesn't match, the system won't boot. Next I'll test if there's also an absolute block reference stored in there.

    Read the article

  • SharePoint 2010 - Access denied during ApplyWebConfigModifications()

    - by tcoalson
    I have SharePoint 2010 installed on a Windows Server 2008 R2 machine which is also hosting SQL Sever 2008 R2. I am attempting to deploy a solution that includes web parts in the 2010 environment that is working fine in MOSS 2007. The Web Part feature has a feature receiver that updates the web.config. When I try to activate the feature through the Site Collection Feature GUI, I receive an access denied message. I am logged on to the server and in SharePoint with the APP Pool account which is also a member of the domain administrator group, local administrator group and SharePoint Farm Admin group. This account is also dbo on SQL Server. This same feature activates fine using the stsadm command. I have dug into this issue at length and here is what I have found: Looking at the Microsoft assemblies in reflector, my error is coming from the SPWebApplication.ApplyWebConfigModifications() method. I can see the trace statements from SPWebConfigFileChanges.RemoveModificationsWebConfigXMLDocument and SPWebConfigFileChanges.ApplyModificationsWebConfigXMLDocument. The next line is a Save(str). Below is the output from the SharePoint logs that pertain to this error: Apply web config modifications to web app 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation General 8grn Medium WebConfigModification: Applying web config modifications to web app in server tw-s1-m4400-007 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 88gw Medium WebConfigModification: Applying web config modifications to file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/system.web/httpModules Node name add[@name='JivePageController'] 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/system.web/httpHandlers Node name add[@path='ScriptResource.axd'] 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name [local-name()="dependentAssembly"][/@name="System.Web.Extensions.Design"] 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 887b Medium Removing web config node - Path configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name [local-name()="dependentAssembly"][/@name="System.Web.Extensions"] 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name - [local-name()="dependentAssembly"][/@name="System.Web.Extensions"] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - 
configuration/runtime/*[local-name()="assemblyBinding" and namespace-uri()="urn:schemas-microsoft-com:asm.v1"] Node name - [local-name()="dependentAssembly"][/@name="System.Web.Extensions.Design"] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/system.web/httpHandlers Node name - add[@path='ScriptResource.axd'] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8gp8 Medium WebConfigModification: Adding web config node - Path - configuration/system.web/httpModules Node name - add[@name='JivePageController'] Node value - in web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.09 w3wp.exe (0x15C4) 0x1444 SharePoint Foundation Topology e5mb Medium WcfReceiveRequest: LocalAddress: 'http://tw-s1-m4400-007.jivedemo.local:32843/15702467ece1408f881abeabac3b5077/MetadataWebService.svc' Channel: 'System.ServiceModel.Channels.ServiceChannel' Action: xxx MessageId: 'urn:uuid:4e859532-ed7f-4937-8b88-68d3af43d589' 9f403ede-2c94-490b-a05c-e169cc5fe58d 02/24/2010 16:05:41.10 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology f6kh High WebConfigModification: Save of web.config file C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config for applying modifications to web app SharePoint - 2008 failed. Error message - Access to the path 'C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config' is denied. 5a817a37-7bf6-4d26-be51-207369e38f5b 02/24/2010 16:05:41.10 w3wp.exe (0x0F64) 0x1034 SharePoint Foundation Topology 8j2o High WebConfigModification: Changes not applied to web application SharePoint - 2008 with Url xxx 5a817a37-7bf6-4d26-be51-207369e38f5b Any help would be appreciated!
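
    Since the same feature activates fine from stsadm (running as the console user) but fails from the Site Collection Features page (running inside w3wp), two things are worth checking: NTFS permissions on C:\inetpub\wwwroot\wss\VirtualDirectories\2008\web.config for the identity that actually executes the receiver, and whether the receiver runs under the browsing user's token rather than an elevated one. Below is a hedged sketch of the elevation pattern inside a feature receiver; the owner key, module name and type string are placeholders, not the real Jive assembly.

    using System;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Administration;

    public class JiveWebConfigReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            SPSite site = properties.Feature.Parent as SPSite;   // site-collection scoped feature
            if (site == null)
                return;
            Guid siteId = site.ID;

            // Re-run the change under the application pool identity rather than the
            // interactive user who clicked "Activate" in the Site Collection Features page.
            SPSecurity.RunWithElevatedPrivileges(delegate
            {
                using (SPSite elevatedSite = new SPSite(siteId))
                {
                    SPWebApplication webApp = elevatedSite.WebApplication;

                    SPWebConfigModification mod = new SPWebConfigModification
                    {
                        Path = "configuration/system.web/httpModules",
                        Name = "add[@name='JivePageController']",
                        Owner = "JiveWebParts",                  // placeholder owner key
                        Sequence = 0,
                        Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
                        Value = "<add name='JivePageController' type='Placeholder.Module, Placeholder.Assembly' />"
                    };

                    webApp.WebConfigModifications.Add(mod);
                    webApp.Update();

                    // Applies the registered modifications to the web.config files
                    // of the web applications under the content web service.
                    SPWebService.ContentService.ApplyWebConfigModifications();
                }
            });
        }
    }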

    Read the article

  • How to configure multiple WCF binding configurations for the same scheme

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username authenticated, the account that a client (of my web application) uses to logon ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best or perhaps just certificate based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/> <endpoint contract="Server.IService2" binding="netTcpBinding" address="net.tcp://localhost:8081/Service2.svc"/> </client> The server configuration is this: <bindings> <netTcpBinding> <binding portSharingEnabled="true"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <services> <service name="Service1"> <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/> </service> <service name="Service2"> <endpoint contract="Server.IService2, Library" binding="netTcpBinding" address=""/> </service> </services> <serviceHostingEnvironment> <serviceActivations> <add relativeAddress="Service1.svc" service="Server.Service1"/> <add relativeAddress="Service2.svc" service="Server.Service2"/> </serviceActivations> </serviceHostingEnvironment> The thing is that both bindings don't seem to want live together in my host. When I remove either of them, all's fine but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking into the right direction or is there a better way to solve this?
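
    Note that in the config as posted, neither the client endpoint for Server.IService2 nor the Service2 endpoint on the host carries a bindingConfiguration="public" attribute, so both sides may well be negotiating with the default netTcpBinding settings rather than the named ones. One way to take configuration-name matching out of the picture while diagnosing is to build both bindings in code in a small self-hosted harness. A sketch under that assumption (stub contracts, placeholder ports and certificate; it is not the IIS-hosted deployment itself):

    using System;
    using System.Security.Cryptography.X509Certificates;
    using System.ServiceModel;

    [ServiceContract] public interface IService1 { [OperationContract] string Ping(); }
    [ServiceContract] public interface IService2 { [OperationContract] string Ping(); }
    public class Service1 : IService1 { public string Ping() { return "service1"; } }
    public class Service2 : IService2 { public string Ping() { return "service2"; } }

    class Program
    {
        static void Main()
        {
            // Authenticated services: message-level username credential over a secured transport.
            var userNameBinding = new NetTcpBinding(SecurityMode.TransportWithMessageCredential);
            userNameBinding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

            // Public logon service: transport security with Windows credentials.
            var publicBinding = new NetTcpBinding(SecurityMode.Transport);
            publicBinding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;

            // Separate ports keep the sketch independent of the Net.Tcp Port Sharing Service;
            // with PortSharingEnabled = true on both bindings they could share one port.
            var host1 = new ServiceHost(typeof(Service1), new Uri("net.tcp://localhost:8081/Service1.svc"));
            host1.AddServiceEndpoint(typeof(IService1), userNameBinding, "");
            // The username/message-credential endpoint needs a service certificate (placeholder subject).
            host1.Credentials.ServiceCertificate.SetCertificate(
                StoreLocation.LocalMachine, StoreName.My, X509FindType.FindBySubjectName, "localhost");

            var host2 = new ServiceHost(typeof(Service2), new Uri("net.tcp://localhost:8082/Service2.svc"));
            host2.AddServiceEndpoint(typeof(IService2), publicBinding, "");

            host1.Open();
            host2.Open();
            Console.WriteLine("Both hosts open; press Enter to stop.");
            Console.ReadLine();
            host2.Close();
            host1.Close();
        }
    }

    If the two bindings coexist happily in this harness, the remaining suspect is how the IIS-hosted services resolve their binding configuration names.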

    Read the article

  • Compiling JS-Test-Driver Plugin and Installing it on Eclipse 3.5.1 Galileo?

    - by leeand00
    I downloaded the source of the js-test-driver from: http://js-test-driver.googlecode.com/svn/tags/1.2 It compiles just fine, but one of the unit tests fails: [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 0.012 sec [junit] Test com.google.jstestdriver.eclipse.ui.views.FailureOnlyViewerFilterTest FAILED I am using: - ANT 1.7.1 - javac 1.6.0_12 And I'm trying to install the js-test-driver plugin on Eclipse 3.5.1 Galileo Despite the failed test I installed the plugin into my C:\eclipse\dropins\js-test-driver directory by copying (exporting from svn) the compiled feature and plugins directories there, to see if it would yield any hints to what the problem is. When I started eclipse, added the plugin to the panel using Window-Show View-Other... Other-JsTestDriver The plugin for the panel is added, but it displays the following error instead of the plugin in the panel: Could not create the view: Plugin com.google.jstestdriver.eclipse.ui was unable to load class com.google.jstestdriver.eclipse.ui.views.JsTestDriverView. And then bellow that I get the following stack trace after clicking Details: java.lang.ClassNotFoundException: com.google.jstestdriver.eclipse.ui.views.JsTestDriverView at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:494) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410) at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:398) at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:105) at java.lang.ClassLoader.loadClass(Unknown Source) at org.eclipse.osgi.internal.loader.BundleLoader.loadClass(BundleLoader.java:326) at org.eclipse.osgi.framework.internal.core.BundleHost.loadClass(BundleHost.java:231) at org.eclipse.osgi.framework.internal.core.AbstractBundle.loadClass(AbstractBundle.java:1193) at org.eclipse.core.internal.registry.osgi.RegistryStrategyOSGI.createExecutableExtension(RegistryStrategyOSGI.java:160) at org.eclipse.core.internal.registry.ExtensionRegistry.createExecutableExtension(ExtensionRegistry.java:874) at org.eclipse.core.internal.registry.ConfigurationElement.createExecutableExtension(ConfigurationElement.java:243) at org.eclipse.core.internal.registry.ConfigurationElementHandle.createExecutableExtension(ConfigurationElementHandle.java:51) at org.eclipse.ui.internal.WorkbenchPlugin$1.run(WorkbenchPlugin.java:267) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.WorkbenchPlugin.createExtension(WorkbenchPlugin.java:263) at org.eclipse.ui.internal.registry.ViewDescriptor.createView(ViewDescriptor.java:63) at org.eclipse.ui.internal.ViewReference.createPartHelper(ViewReference.java:324) at org.eclipse.ui.internal.ViewReference.createPart(ViewReference.java:226) at org.eclipse.ui.internal.WorkbenchPartReference.getPart(WorkbenchPartReference.java:595) at org.eclipse.ui.internal.Perspective.showView(Perspective.java:2229) at org.eclipse.ui.internal.WorkbenchPage.busyShowView(WorkbenchPage.java:1067) at org.eclipse.ui.internal.WorkbenchPage$20.run(WorkbenchPage.java:3816) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.WorkbenchPage.showView(WorkbenchPage.java:3813) at org.eclipse.ui.internal.WorkbenchPage.showView(WorkbenchPage.java:3789) at org.eclipse.ui.handlers.ShowViewHandler.openView(ShowViewHandler.java:165) at org.eclipse.ui.handlers.ShowViewHandler.openOther(ShowViewHandler.java:109) at 
org.eclipse.ui.handlers.ShowViewHandler.execute(ShowViewHandler.java:77) at org.eclipse.ui.internal.handlers.HandlerProxy.execute(HandlerProxy.java:294) at org.eclipse.core.commands.Command.executeWithChecks(Command.java:476) at org.eclipse.core.commands.ParameterizedCommand.executeWithChecks(ParameterizedCommand.java:508) at org.eclipse.ui.internal.handlers.HandlerService.executeCommand(HandlerService.java:169) at org.eclipse.ui.internal.handlers.SlaveHandlerService.executeCommand(SlaveHandlerService.java:241) at org.eclipse.ui.internal.ShowViewMenu$3.run(ShowViewMenu.java:141) at org.eclipse.jface.action.Action.runWithEvent(Action.java:498) at org.eclipse.jface.action.ActionContributionItem.handleWidgetSelection(ActionContributionItem.java:584) at org.eclipse.jface.action.ActionContributionItem.access$2(ActionContributionItem.java:501) at org.eclipse.jface.action.ActionContributionItem$5.handleEvent(ActionContributionItem.java:411) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1003) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:3880) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3473) at org.eclipse.ui.internal.Workbench.runEventLoop(Workbench.java:2405) at org.eclipse.ui.internal.Workbench.runUI(Workbench.java:2369) at org.eclipse.ui.internal.Workbench.access$4(Workbench.java:2221) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:500) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:493) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:149) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:113) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:194) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:368) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:179) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:559) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:514) at org.eclipse.equinox.launcher.Main.run(Main.java:1311) Additionally, if I go to the settings in Window-Preferences and try to view the JS Test Driver Preferences, I get the following dialog: Problem Occurred Unable to create the selected preference page. com.google.jstestdriver.eclipse.ui.WorkbenchPreferencePage Thank you, Andrew J. Leer

    Read the article

  • How to intercept raw soap request/response (data) from WCF client

    - by JohnIdol
    This question seems to be pretty close to what I am looking for - I was able to setup tracing and I am looking at the log entries for my calls to the service. However I need to see the raw soap request with the data I am sending to the service and I see no way of doing that from the SvcTraceViewer (only log entries are shown but no data sent to the service) - am I just missing configuration? Here's what I got in my web.config: <system.diagnostics> <sources> <source name="System.ServiceModel" switchValue="Verbose" propagateActivity="true"> <listeners> <add name="sdt" type="System.Diagnostics.XmlWriterTraceListener" initializeData="App_Data/Logs/WCFTrace.svclog" /> </listeners> </source> </sources> </system.diagnostics> Any help appreciated! UPDATE: this is all I see in my trace: <E2ETraceEvent xmlns="http://schemas.microsoft.com/2004/06/E2ETraceEvent"> <System xmlns="http://schemas.microsoft.com/2004/06/windows/eventlog/system"> <EventID>262163</EventID> <Type>3</Type> <SubType Name="Information">0</SubType> <Level>8</Level> <TimeCreated SystemTime="2010-05-10T13:10:46.6713553Z" /> <Source Name="System.ServiceModel" /> <Correlation ActivityID="{00000000-0000-0000-1501-0080000000f6}" /> <Execution ProcessName="w3wp" ProcessID="3492" ThreadID="23" /> <Channel /> <Computer>MY_COMPUTER_NAME</Computer> </System> <ApplicationData> <TraceData> <DataItem> <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Information"> <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.Channels.MessageSent.aspx</TraceIdentifier> <Description>Sent a message over a channel.</Description> <AppDomain>MY_DOMAIN</AppDomain> <Source>System.ServiceModel.Channels.HttpOutput+WebRequestHttpOutput/50416815</Source> <ExtendedData xmlns="http://schemas.microsoft.com/2006/08/ServiceModel/MessageTraceRecord"> <MessageProperties> <Encoder>text/xml; charset=utf-8</Encoder> <AllowOutputBatching>False</AllowOutputBatching> <Via>http://xxx.xx.xxx.xxx:9080/MyWebService/myService</Via> </MessageProperties> <MessageHeaders></MessageHeaders> </ExtendedData> </TraceRecord> </DataItem> </TraceData> </ApplicationData>
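
    For what it's worth, the message bodies only end up in the .svclog if message logging is told to record them: that means a logEntireMessage="true" attribute on the <messageLogging> element plus a listener registered for the "System.ServiceModel.MessageLogging" trace source, neither of which appears in the config above, so that is likely the missing piece. An alternative that skips the trace infrastructure entirely is a client message inspector that dumps the raw envelope; a minimal sketch (where the output goes is arbitrary):

    using System;
    using System.Diagnostics;
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Description;
    using System.ServiceModel.Dispatcher;

    // Dumps the exact SOAP envelope sent and received by a client proxy.
    public class RawSoapInspector : IClientMessageInspector
    {
        public object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            // A Message can only be read once, so buffer it and hand back a fresh copy.
            MessageBuffer buffer = request.CreateBufferedCopy(int.MaxValue);
            request = buffer.CreateMessage();
            Trace.WriteLine("Request: " + buffer.CreateMessage().ToString());
            return null;
        }

        public void AfterReceiveReply(ref Message reply, object correlationState)
        {
            MessageBuffer buffer = reply.CreateBufferedCopy(int.MaxValue);
            reply = buffer.CreateMessage();
            Trace.WriteLine("Reply: " + buffer.CreateMessage().ToString());
        }
    }

    public class RawSoapBehavior : IEndpointBehavior
    {
        public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
        {
            clientRuntime.MessageInspectors.Add(new RawSoapInspector());
        }

        public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
        public void Validate(ServiceEndpoint endpoint) { }
    }

    Attaching it to a generated proxy is then one line: client.Endpoint.Behaviors.Add(new RawSoapBehavior());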

    Read the article

  • "Unable to read data from the transport connection: net_io_connectionclosed." - Windows Vista Busine

    - by John DaCosta
    Unable to test sending email from .NET code in Windows Vista Business. I am writing code which I will migrate to an SSIS Package once it its proven. The code is to send an error message via email to a list of recipients. The code is below, however I am getting an exception when I execute the code. I created a simple class to do the mailing... the design could be better, I am testing functionality before implementing more robust functionality, methods, etc. namespace LabDemos { class Program { static void Main(string[] args) { Mailer m = new Mailer(); m.test(); } } } namespace LabDemos { class MyMailer { List<string> _to = new List<string>(); List<string> _cc = new List<string>(); List<string> _bcc = new List<string>(); String _msgFrom = ""; String _msgSubject = ""; String _msgBody = ""; public void test(){ //create the mail message MailMessage mail = new MailMessage(); //set the addresses mail.From = new MailAddress("[email protected]"); //set the content mail.Subject = "This is an email"; mail.Body = "this is a sample body"; mail.IsBodyHtml = false; //send the message SmtpClient smtp = new SmtpClient(); smtp.Host = "emailservername"; smtp.Port = 25; smtp.UseDefaultCredentials = true; smtp.Send(mail); } } Exception Message Inner Exception {"Unable to read data from the transport connection: net_io_connectionclosed."} Stack Trace " at System.Net.Mail.SmtpReplyReaderFactory.ProcessRead(Byte[] buffer, Int32 offset, Int32 read, Boolean readLine)\r\n at System.Net.Mail.SmtpReplyReaderFactory.ReadLines(SmtpReplyReader caller, Boolean oneLine)\r\n at System.Net.Mail.SmtpReplyReaderFactory.ReadLine(SmtpReplyReader caller)\r\n at System.Net.Mail.SmtpConnection.GetConnection(String host, Int32 port)\r\n at System.Net.Mail.SmtpTransport.GetConnection(String host, Int32 port)\r\n at System.Net.Mail.SmtpClient.GetConnection()\r\n at System.Net.Mail.SmtpClient.Send(MailMessage message)" Outer Exception System.Net.Mail.SmtpException was unhandled Message="Failure sending mail." Source="System" StackTrace: at System.Net.Mail.SmtpClient.Send(MailMessage message) at LabDemos.Mailer.test() in C:\Users\username\Documents\Visual Studio 2008\Projects\LabDemos\LabDemos\Mailer.cs:line 40 at LabDemos.Program.Main(String[] args) in C:\Users\username\Documents\Visual Studio 2008\Projects\LabDemos\LabDemos\Program.cs:line 48 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.nExecuteAssembly(Assembly assembly, String[] args) at System.Runtime.Hosting.ManifestRunner.Run(Boolean checkAptModel) at System.Runtime.Hosting.ManifestRunner.ExecuteAsAssembly() at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext, String[] activationCustomData) at System.Runtime.Hosting.ApplicationActivator.CreateInstance(ActivationContext activationContext) at System.Activator.CreateInstance(ActivationContext activationContext) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssemblyDebugInZone() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException: System.IO.IOException Message="Unable to read data from the transport connection: net_io_connectionclosed." 
Source="System" StackTrace: at System.Net.Mail.SmtpReplyReaderFactory.ProcessRead(Byte[] buffer, Int32 offset, Int32 read, Boolean readLine) at System.Net.Mail.SmtpReplyReaderFactory.ReadLines(SmtpReplyReader caller, Boolean oneLine) at System.Net.Mail.SmtpReplyReaderFactory.ReadLine(SmtpReplyReader caller) at System.Net.Mail.SmtpConnection.GetConnection(String host, Int32 port) at System.Net.Mail.SmtpTransport.GetConnection(String host, Int32 port) at System.Net.Mail.SmtpClient.GetConnection() at System.Net.Mail.SmtpClient.Send(MailMessage message) InnerException:
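
    That IOException during SmtpConnection.GetConnection usually means the SMTP server (or something in between, such as a firewall or antivirus product proxying port 25) closed the socket during the initial handshake, often because the relay expects a different port, STARTTLS, or authentication. A small console sketch for isolating the mail settings outside of the future SSIS package; the host, port and credentials are placeholders:

    using System;
    using System.Net;
    using System.Net.Mail;

    class MailSmokeTest
    {
        static void Main()
        {
            // Placeholder host/port/credentials: substitute the real relay settings.
            var smtp = new SmtpClient("emailservername", 25)
            {
                EnableSsl = false,                      // flip to true if the relay requires STARTTLS
                UseDefaultCredentials = false,
                Credentials = new NetworkCredential("user", "password"),
                Timeout = 15000                         // milliseconds
            };

            var mail = new MailMessage("from@example.com", "to@example.com",
                                       "SMTP smoke test", "plain text body");
            try
            {
                smtp.Send(mail);
                Console.WriteLine("Sent OK");
            }
            catch (SmtpException ex)
            {
                // StatusCode and any inner IOException usually say whether the
                // server rejected the session or simply closed the socket.
                Console.WriteLine("SMTP failure: {0} ({1})", ex.StatusCode, ex.Message);
                if (ex.InnerException != null)
                    Console.WriteLine("Inner: " + ex.InnerException.Message);
            }
        }
    }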

    Read the article

  • Uninitialized constant Encoding with sqlite3-ruby on windows

    - by Ben Scheirman
    On a new machine, installed ruby with the 1-click installer for windows. Installed rails 2.3.2 and all associated gems, then I installed the sqlite3 binaries (into the c:\ruby\bin folder). Lastly I did gem install sqlite3-ruby -v=1.2.3 (which is apparently the latest version that works with windows) This error happens when I run rake db:migrate or when any ActiveRecord object is touched at runtime. The error looks like this: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment ** Execute db:migrate rake aborted! **uninitialized constant Encoding** <---- Any help resolving this error would be greatly appreciated! Trace: C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:443:in `load_missing_constant' C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:80:in `const_missing' C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:92:in `const_missing' C:/Ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.3/lib/sqlite3/encoding.rb:9:in `find' C:/Ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.3/lib/sqlite3/database.rb:69:in `initialize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout' C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:435:in `initialize' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:400:in `new' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:400:in `up' C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/migration.rb:383:in `migrate' C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/databases.rake:116 C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call' 
C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain' C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run' C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31 C:/Ruby/bin/rake:19:in `load' C:/Ruby/bin/rake:19

    Read the article

  • LinkedIn API returns 'Unauthorized' response (PHP OAuth)

    - by Jim Greenleaf
    I've been struggling with this one for a few days now. I've got a test app set up to connect to LinkedIn via OAuth. I want to be able to update a user's status, but at the moment I'm unable to interact with LinkedIn's API at all. I am able to successfully get a requestToken, then an accessToken, but when I issue a request to the API, I see an 'unauthorized' error that looks something like this: object(OAuthException)#2 (8) { ["message:protected"]=> string(73) "Invalid auth/bad request (got a 401, expected HTTP/1.1 20X or a redirect)" ["string:private"]=> string(0) "" ["code:protected"]=> int(401) ["file:protected"]=> string(47) "/home/pmfeorg/public_html/dev/test/linkedin.php" ["line:protected"]=> int(48) ["trace:private"]=> array(1) { [0]=> array(6) { ["file"]=> string(47) "/home/pmfeorg/public_html/dev/test/linkedin.php" ["line"]=> int(48) ["function"]=> string(5) "fetch" ["class"]=> string(5) "OAuth" ["type"]=> string(2) "->" ["args"]=> array(2) { [0]=> string(35) "http://api.linkedin.com/v1/people/~" [1]=> string(3) "GET" } } } ["lastResponse"]=> string(358) " 401 1276375790558 0000 [unauthorized]. OAU:Bhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy|48032b2d-bc8c-4744-bb84-4eab53578c11|*01|*01:1276375790:xmc3lWhXJvLSUZh4dxMtrf55VVQ= " ["debugInfo"]=> array(5) { ["sbs"]=> string(329) "GET&http%3A%2F%2Fapi.linkedin.com%2Fv1%2Fpeople%2F~&oauth_consumer_key%3DBhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy%26oauth_nonce%3D7068001084c13f2ee6a2117.22312548%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1276375790%26oauth_token%3D48032b2d-bc8c-4744-bb84-4eab53578c11%26oauth_version%3D1.0" ["headers_sent"]=> string(401) "GET /v1/people/~?GET&oauth_consumer_key=Bhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy&oauth_signature_method=HMAC-SHA1&oauth_nonce=7068001084c13f2ee6a2117.22312548&oauth_timestamp=1276375790&oauth_version=1.0&oauth_token=48032b2d-bc8c-4744-bb84-4eab53578c11&oauth_signature=xmc3lWhXJvLSUZh4dxMtrf55VVQ%3D HTTP/1.1 User-Agent: PECL-OAuth/1.0-dev Host: api.linkedin.com Accept: */*" ["headers_recv"]=> string(148) "HTTP/1.1 401 Unauthorized Server: Apache-Coyote/1.1 Date: Sat, 12 Jun 2010 20:49:50 GMT Content-Type: text/xml;charset=UTF-8 Content-Length: 358" ["body_recv"]=> string(358) " 401 1276375790558 0000 [unauthorized]. OAU:Bhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy|48032b2d-bc8c-4744-bb84-4eab53578c11|*01|*01:1276375790:xmc3lWhXJvLSUZh4dxMtrf55VVQ= " ["info"]=> string(216) "About to connect() to api.linkedin.com port 80 (#0) Trying 64.74.98.83... 
connected Connected to api.linkedin.com (64.74.98.83) port 80 (#0) Connection #0 to host api.linkedin.com left intact Closing connection #0 " } } My code looks like this (based on the FireEagle example from php.net): $req_url = 'https://api.linkedin.com/uas/oauth/requestToken'; $authurl = 'https://www.linkedin.com/uas/oauth/authenticate'; $acc_url = 'https://api.linkedin.com/uas/oauth/accessToken'; $api_url = 'http://api.linkedin.com/v1/people/~'; $callback = 'http://www.pmfe.org/dev/test/linkedin.php'; $conskey = 'Bhgk3fB4cs9t4oatSdv538tD2X68-1OTCBg-KKL3pFBnGgOEhJZhFOf1n9KtHMMy'; $conssec = '####################SECRET KEY#####################'; session_start(); try { $oauth = new OAuth($conskey,$conssec,OAUTH_SIG_METHOD_HMACSHA1,OAUTH_AUTH_TYPE_URI); $oauth->enableDebug(); if(!isset($_GET['oauth_token'])) { $request_token_info = $oauth->getRequestToken($req_url); $_SESSION['secret'] = $request_token_info['oauth_token_secret']; header('Location: '.$authurl.'?oauth_token='.$request_token_info['oauth_token']); exit; } else { $oauth->setToken($_GET['oauth_token'],$_SESSION['secret']); $access_token_info = $oauth->getAccessToken($acc_url); $_SESSION['token'] = $access_token_info['oauth_token']; $_SESSION['secret'] = $access_token_info['oauth_token_secret']; } $oauth->setToken($_SESSION['token'],$_SESSION['secret']); $oauth->fetch($api_url, OAUTH_HTTP_METHOD_GET); $response = $oauth->getLastResponse(); } catch(OAuthException $E) { var_dump($E); } I've successfully set up a connection to Twitter and one to Facebook using OAuth, but LinkedIn keeps eluding me. If anyone could offer some advice or point me in the right direction, I will be extremely appreciative!

    Read the article

  • C# WCF - Failed to invoke the service.

    - by Keith Barrows
    I am getting the following error when trying to use the WCF Test Client to hit my new web service. What is weird is every once in awhile it will execute once then start popping this error. Failed to invoke the service. Possible causes: The service is offline or inaccessible; the client-side configuration does not match the proxy; the existing proxy is invalid. Refer to the stack trace for more detail. You can try to recover by starting a new proxy, restoring to default configuration, or refreshing the service. My code (interface): [ServiceContract(Namespace = "http://rivworks.com/Services/2010/04/19")] public interface ISync { [OperationContract] bool Execute(long ClientID); } My code (class): public class Sync : ISync { #region ISync Members bool ISync.Execute(long ClientID) { return model.Product(ClientID); } #endregion } My config (EDIT - posted entire serviceModel section): <system.serviceModel> <diagnostics performanceCounters="Default"> <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" /> </diagnostics> <serviceHostingEnvironment aspNetCompatibilityEnabled="false" /> <behaviors> <endpointBehaviors> <behavior name="JsonpServiceBehavior"> <webHttp /> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="SimpleServiceBehavior"> <serviceMetadata httpGetEnabled="True" policyVersion="Policy15"/> </behavior> <behavior name="RivWorks.Web.Service.ServiceBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true"/> </behavior> </serviceBehaviors> </behaviors> <services> <service name="RivWorks.Web.Service.NegotiateService" behaviorConfiguration="SimpleServiceBehavior"> <endpoint address="" binding="customBinding" bindingConfiguration="jsonpBinding" behaviorConfiguration="JsonpServiceBehavior" contract="RivWorks.Web.Service.NegotiateService" /> <!--<host> <baseAddresses> <add baseAddress="http://kab.rivworks.com/services"/> </baseAddresses> </host> <endpoint address="" binding="wsHttpBinding" contract="RivWorks.Web.Service.NegotiateService" />--> <endpoint address="mex" binding="mexHttpBinding" contract="RivWorks.Web.Service.NegotiateService" /> </service> <service name="RivWorks.Web.Service.Sync" behaviorConfiguration="RivWorks.Web.Service.ServiceBehavior"> <endpoint address="" binding="wsHttpBinding" contract="RivWorks.Web.Service.ISync" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <extensions> <bindingElementExtensions> <add name="jsonpMessageEncoding" type="RivWorks.Web.Service.JSONPBindingExtension, RivWorks.Web.Service, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> </bindingElementExtensions> </extensions> <bindings> <customBinding> <binding name="jsonpBinding" > <jsonpMessageEncoding /> <httpTransport manualAddressing="true"/> </binding> </customBinding> </bindings> </system.serviceModel> 2 questions: What am I missing that causes this error? How can I increase the time out for the service? TIA!
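
    On the second question, the four timeouts live on the binding and can be raised either in config (closeTimeout, openTimeout, receiveTimeout and sendTimeout attributes on the <binding> element) or in code. A sketch of the code route with placeholder values and a placeholder endpoint address:

    using System;
    using System.ServiceModel;

    // Mirror of the ISync contract from the question, so the sketch stands alone.
    [ServiceContract(Namespace = "http://rivworks.com/Services/2010/04/19")]
    public interface ISync
    {
        [OperationContract]
        bool Execute(long ClientID);
    }

    class TimeoutProbe
    {
        static void Main()
        {
            var binding = new WSHttpBinding();

            // Defaults are 1 minute for open/close/send and 10 minutes for receive.
            binding.OpenTimeout = TimeSpan.FromMinutes(2);
            binding.CloseTimeout = TimeSpan.FromMinutes(2);
            binding.SendTimeout = TimeSpan.FromMinutes(5);     // covers the whole request/reply exchange
            binding.ReceiveTimeout = TimeSpan.FromMinutes(20); // idle time before the channel is recycled

            // Placeholder address; substitute the real Sync.svc endpoint.
            var factory = new ChannelFactory<ISync>(binding,
                new EndpointAddress("http://localhost/Service/Sync.svc"));

            ISync proxy = factory.CreateChannel();
            Console.WriteLine(proxy.Execute(42));
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }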

    Read the article

  • protobuf-net: incorrect wire-type exception deserializing Guid properties

    - by Paul Smith
    I'm having issues deserializing certain Guid properties of ORM-generated entities using protobuf-net. Here's a simplified example of the code (reproduces most elements of the scenario, but doesn't reproduce the behavior; I can't expose our internal entities, so I'm looking for clues to account for the exception). Say I have a class, Account with an AccountID read-only guid, and an AccountName read-write string. I serialize & immediately deserialize a clone. Deserializing throws an Incorrect wire-type deserializing Guid exception while deserializing. Here's example usage... Account acct = new Account() { AccountName = "Bob's Checking" }; Debug.WriteLine(acct.AccountID.ToString()); using (MemoryStream ms = new MemoryStream()) { ProtoBuf.Serializer.Serialize<Account>(ms, acct); Debug.WriteLine(Encoding.UTF8.GetString(ms.GetBuffer())); ms.Position = 0; Account clone = ProtoBuf.Serializer.Deserialize<Account>(ms); Debug.WriteLine(clone.AccountID.ToString()); } And here's an example ORM'd class (simplified, but demonstrates the relevant semantics I can think of). Uses a shell game to deserialize read-only properties by exposing the backing field ("can't write" essentially becomes "shouldn't write," but we can scan code for instances of assigning to these fields, so the hack works for our purposes). Again, this does not reproduce the exception behavior; I'm looking for clues as to what could: [DataContract()] [Serializable()] public partial class Account { public Account() { _accountID = Guid.NewGuid(); } [XmlAttribute("AccountID")] [DataMember(Name = "AccountID", Order = 1)] public Guid _accountID; /// <summary> /// A read-only property; XML, JSON and DataContract serializers all seem /// to correctly recognize the public backing field when deserializing: /// </summary> [IgnoreDataMember] [XmlIgnore] public Guid AccountID { get { return this._accountID; } } [IgnoreDataMember] protected string _accountName; [DataMember(Name = "AccountName", Order = 2)] [XmlAttribute] public string AccountName { get { return this._accountName; } set { this._accountName = value; } } } XML, JSON and DataContract serializers all seem to serialize / deserialize these object graphs just fine, so the attribute arrangement basically works. I've tried protobuf-net with lists vs. single instances, different prefix styles, etc., but still always get the 'incorrect wire-type ... Guid' exception when deserializing. So the specific questions is, is there any known explanation / workaround for this? I'm at a loss trying to trace what circumstances (in the real code but not the example) could be causing it. We hope not to have to create a protobuf dependency directly in the entity layer; if that's the case, we'll probably create proxy DTO entities with all public properties having protobuf attributes. (This is a subjective issue I have with all declarative serialization models; it's a ubiquitous pattern & I understand why it arose, but IMO, if we can put a man on the moon, then "normal" should be to have objects and serialization contracts decoupled. ;-) ) Thanks!
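
    Not an explanation of the wire-type mismatch itself, but a workaround that sidesteps it is to keep the Guid out of the protobuf contract and serialize it through a string proxy property, so protobuf-net only ever sees a string. A sketch against the simplified Account class above; it trades the read-only/backing-field shell game for a settable proxy property, and the property name is illustrative:

    using System;
    using System.Runtime.Serialization;
    using System.Xml.Serialization;

    [DataContract]
    [Serializable]
    public partial class Account
    {
        public Account()
        {
            _accountID = Guid.NewGuid();
        }

        protected Guid _accountID;

        // Keep the Guid itself out of the serialization contract...
        [IgnoreDataMember]
        [XmlIgnore]
        public Guid AccountID
        {
            get { return _accountID; }
        }

        // ...and serialize it as a plain string instead, which avoids the
        // Guid wire-format handling altogether. Order = 1 keeps the field number.
        [DataMember(Name = "AccountID", Order = 1)]
        [XmlAttribute("AccountID")]
        public string AccountIDText
        {
            get { return _accountID.ToString("N"); }
            set { _accountID = string.IsNullOrEmpty(value) ? Guid.Empty : new Guid(value); }
        }

        protected string _accountName;

        [DataMember(Name = "AccountName", Order = 2)]
        [XmlAttribute]
        public string AccountName
        {
            get { return _accountName; }
            set { _accountName = value; }
        }
    }

    The round-trip test code from the question stays the same, since AccountID still exposes the Guid to calling code.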

    Read the article

  • Configuring multiple WCF binding configurations for the same scheme doesn't work

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet. WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser The services are username authenticated, the account that a client (of my web application) uses to logon ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best or perhaps just certificate based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration: <bindings> <netTcpBinding> <binding> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint contract="Server.IService1" binding="netTcpBinding" address="net.tcp://localhost:8081/Service1.svc"/> <endpoint contract="Server.IService2" binding="netTcpBinding" bindingConfiguration="public" address="net.tcp://localhost:8081/Service2.svc"/> </client> The server configuration is this: <bindings> <netTcpBinding> <binding portSharingEnabled="true"> <security mode="TransportWithMessageCredential"> <message clientCredentialType="UserName"/> </security> </binding> <binding name="public"> <security mode="Transport"> <message clientCredentialType="Windows"/> </security> </binding> </netTcpBinding> </bindings> <services> <service name="Service1"> <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/> </service> <service name="Service2"> <endpoint contract="Server.IService2, Library" binding="netTcpBinding" bindingConfiguration="public" address=""/> </service> </services> <serviceHostingEnvironment> <serviceActivations> <add relativeAddress="Service1.svc" service="Server.Service1"/> <add relativeAddress="Service2.svc" service="Server.Service2"/> </serviceActivations> </serviceHostingEnvironment> The thing is that both bindings don't seem to want live together in my host. When I remove either of them, all's fine but together they produce the following exception on the client: The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server). In the server trace log, I find the following exception: Protocol Type application/negotiate was sent to a service that does not support that type of upgrade. Am I looking into the right direction or is there a better way to solve this?
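
    If it helps to isolate which side is misconfigured, a throwaway client that builds the Transport/Windows binding in code (no configuration files involved) can be pointed straight at the public endpoint: failing with the same upgrade error would mean the host never applied the named "public" configuration to Service2's endpoint, while getting past the handshake (even if it then ends in a contract fault, since the stub below is not the real contract) would point back at the client configuration. A sketch, with the contract stub as a placeholder:

    using System;
    using System.ServiceModel;

    // Stand-in for Server.IService2 so the probe compiles on its own; the real
    // contract lives in the Library assembly referenced by the host.
    [ServiceContract]
    public interface IService2Probe
    {
        [OperationContract]
        string Ping();
    }

    class PublicEndpointProbe
    {
        static void Main()
        {
            // The same security settings the "public" binding configuration declares.
            var binding = new NetTcpBinding(SecurityMode.Transport);
            binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;

            var factory = new ChannelFactory<IService2Probe>(binding,
                new EndpointAddress("net.tcp://localhost:8081/Service2.svc"));
            IService2Probe proxy = factory.CreateChannel();

            try
            {
                Console.WriteLine(proxy.Ping());
            }
            catch (CommunicationException ex)
            {
                // "The requested upgrade is not supported" here means the transport
                // negotiation itself failed; an action/contract fault means it succeeded.
                Console.WriteLine(ex.GetType().Name + ": " + ex.Message);
            }
            finally
            {
                ((IClientChannel)proxy).Abort();
                factory.Abort();
            }
        }
    }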

    Read the article

  • Android: ActivityThread.performLaunchActivity error

    - by fordays
    Hi, I'm getting an ActivityThread.performLaunchActivity(ActivityThread$ActivityRecord,Intent) error each time I boot up my program in the debugger. The program won't even start up! Any help would be greatly appreciated! I'm very new to this environment. Let me know if you need any more information or code to help me out. Here is my logcat:

    06-09 11:16:26.848: ERROR/vold(27): Error opening switch name path '/sys/class/switch/test2' (No such file or directory)
    06-09 11:16:26.848: ERROR/vold(27): Error bootstrapping switch '/sys/class/switch/test2' (No such file or directory)
    06-09 11:16:26.848: ERROR/vold(27): Error opening switch name path '/sys/class/switch/test' (No such file or directory)
    06-09 11:16:26.848: ERROR/vold(27): Error bootstrapping switch '/sys/class/switch/test' (No such file or directory)
    06-09 11:16:37.887: ERROR/MemoryHeapBase(53): error opening /dev/pmem: No such file or directory
    06-09 11:16:37.887: ERROR/SurfaceFlinger(53): Couldn't open /sys/power/wait_for_fb_sleep or /sys/power/wait_for_fb_wake
    06-09 11:16:37.927: ERROR/libEGL(53): couldn't load <libhgl.so> library (Cannot load library: load_library[984]: Library 'libhgl.so' not found)
    06-09 11:16:38.407: ERROR/libEGL(64): couldn't load <libhgl.so> library (Cannot load library: load_library[984]: Library 'libhgl.so' not found)
    06-09 11:16:41.358: ERROR/BatteryService(53): Could not open '/sys/class/power_supply/usb/online'
    06-09 11:16:41.367: ERROR/BatteryService(53): Could not open '/sys/class/power_supply/battery/batt_vol'
    06-09 11:16:41.367: ERROR/BatteryService(53): Could not open '/sys/class/power_supply/battery/batt_temp'
    06-09 11:16:41.667: ERROR/EventHub(53): could not get driver version for /dev/input/mouse0, Not a typewriter
    06-09 11:16:41.667: ERROR/EventHub(53): could not get driver version for /dev/input/mice, Not a typewriter
    06-09 11:16:41.797: ERROR/System(53): Failure starting core service
    06-09 11:16:41.797: ERROR/System(53): java.lang.SecurityException
    06-09 11:16:41.797: ERROR/System(53): at android.os.BinderProxy.transact(Native Method)
    06-09 11:16:41.797: ERROR/System(53): at android.os.ServiceManagerProxy.addService(ServiceManagerNative.java:146)
    06-09 11:16:41.797: ERROR/System(53): at android.os.ServiceManager.addService(ServiceManager.java:72)
    06-09 11:16:41.797: ERROR/System(53): at com.android.server.ServerThread.run(SystemServer.java:162)
    06-09 11:16:41.797: ERROR/AndroidRuntime(53): Crash logging skipped, no checkin service
    06-09 11:16:42.777: ERROR/LockPatternKeyguardView(53): Failed to bind to GLS while checking for account
    06-09 11:16:46.557: ERROR/ActivityThread(111): Failed to find provider info for com.google.settings
    06-09 11:16:46.577: ERROR/ActivityThread(111): Failed to find provider info for com.google.settings
    06-09 11:16:49.087: ERROR/ApplicationContext(53): Couldn't create directory for SharedPreferences file shared_prefs/wallpaper-hints.xml
    06-09 11:16:51.146: ERROR/ActivityThread(108): Failed to find provider info for android.server.checkin
    06-09 11:16:54.266: ERROR/ActivityThread(108): Failed to find provider info for android.server.checkin
    06-09 11:16:54.416: ERROR/ActivityThread(108): Failed to find provider info for android.server.checkin
    06-09 11:16:56.336: ERROR/MediaPlayerService(31): Couldn't open fd for content://settings/system/notification_sound
    06-09 11:16:56.356: ERROR/MediaPlayer(53): Unable to to create media player
    06-09 11:16:56.637: ERROR/AndroidRuntime(201): Uncaught handler: thread main exiting due to uncaught exception
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.svgeeks.kidneytest/com.svgeeks.kidneytest.KidneyTest}: java.lang.ClassCastException: android.widget.EditText
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2401)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2417)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.access$2100(ActivityThread.java:116)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1794)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.os.Handler.dispatchMessage(Handler.java:99)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.os.Looper.loop(Looper.java:123)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.main(ActivityThread.java:4203)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at java.lang.reflect.Method.invokeNative(Native Method)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at java.lang.reflect.Method.invoke(Method.java:521)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at dalvik.system.NativeStart.main(Native Method)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): Caused by: java.lang.ClassCastException: android.widget.EditText
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at com.svgeeks.kidneytest.KidneyTest.onCreate(KidneyTest.java:57)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1123)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2364)
    06-09 11:16:56.757: ERROR/AndroidRuntime(201): ... 11 more
    06-09 11:16:56.876: ERROR/dalvikvm(201): Unable to open stack trace file '/data/anr/traces.txt': Permission denied
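    The telling part of the trace is the "Caused by" section: a ClassCastException naming android.widget.EditText, thrown from KidneyTest.onCreate() at line 57. That combination almost always means a findViewById() result is being cast to a widget type other than the one declared in the layout XML (or the R ids have gone stale after a layout edit, which a Project > Clean usually cures). A sketch of what line 57 probably looks like; the layout and id names here are made up, not taken from the project:

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.EditText;

    public class KidneyTest extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);

            // If res/layout/main.xml declares this view as an <EditText>, casting it
            // to a different widget type throws exactly the reported exception:
            // Button broken = (Button) findViewById(R.id.kidney_value);  // ClassCastException: android.widget.EditText

            // The cast has to match the type declared in the layout:
            EditText kidneyValue = (EditText) findViewById(R.id.kidney_value);
        }
    }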

    Read the article

  • Rails Rake Error with XAMPP mysql database

    - by edu222
    I have installed XAMPP on my Win7 machine and I have the Apache server and MySQL running on there. I set up Rails to work with XAMPP as described here: XAMPP and RAILS. That tutorial advises you to add this code to the XAMPP httpd.conf:

    Listen 3000
    LoadModule rewrite_module modules/mod_rewrite.so
    #################################
    # RUBY SETUP
    #################################
    <VirtualHost *:3000>
      ServerName rails
      DocumentRoot "c:/xampp/htdocs/FirstProject/public"
      <Directory "c:/xampp/htdocs/FirstProject/public/">
        Options ExecCGI FollowSymLinks
        AllowOverride all
        Allow from all
        Order allow,deny
        AddHandler cgi-script .cgi
        AddHandler fastcgi-script .fcgi
      </Directory>
    </VirtualHost>
    #################################
    # RUBY SETUP
    #################################

    XAMPP runs on the default localhost and MySQL remains unchanged, without a password. I created a Rails app with a MySQL database like this: rails -d mysql C:/xampp/htdocs/FirstProject. Then I started ruby script/server from within the FirstProject location, and localhost:3000/ shows the classic Rails welcome page. I then ran a basic scaffold command: ruby script/generate scaffold FirstProject name:string email:string. When I run the rake db:migrate command I get the following error:

    C:\xampp\htdocs\FirstProject>rake db:migrate --trace
    (in C:/xampp/htdocs/FirstProject)
    ** Invoke db:migrate (first_time)
    ** Invoke environment (first_time)
    ** Execute environment
    ** Execute db:migrate
    rake aborted!
    undefined method `init' for Mysql:Class
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:70:in `mysql_connection'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
    C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up'
    C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate'
    C:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
    C:/Ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
    C:/Ruby/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
    C:/Ruby/bin/rake:19:in `load'
    C:/Ruby/bin/rake:19

    Any idea how to fix this? Thanks in advance!
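    The "undefined method `init' for Mysql:Class" error on Windows with Rails 2.3 usually means ActiveRecord fell back to its bundled pure-Ruby MySQL shim because the native mysql gem (and the libmysql.dll it needs) isn't available to it. A sketch of the commonly suggested fix; the paths are assumptions based on the XAMPP and C:\Ruby locations in the question, and the folder holding libmysql.dll inside XAMPP varies between mysql\lib and mysql\bin:

    gem install mysql
    rem Put the MySQL client DLL next to ruby.exe so the gem can load it:
    copy C:\xampp\mysql\lib\libmysql.dll C:\Ruby\bin\
    rake db:migrate --trace

    If the DLL shipped with XAMPP still doesn't work, the workaround people usually report is dropping in a libmysql.dll built against MySQL 5.0, since the mysql gem of that era was compiled against the 5.0 client library.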

    Read the article

  • How to configure multiple distinct WCF binding configurations for the same scheme

    - by Sandor Drieënhuizen
    I have a set of IIS7-hosted net.tcp WCF services that serve my ASP.NET MVC web application. The web application is accessed over the internet.

    WCF Services (IIS7) <--> ASP.NET MVC Application <--> Client Browser

    The services are username-authenticated; the account that a client (of my web application) uses to log on ends up as the current principal on the host. I want one of the services to be authenticated differently, because it serves the view model for my logon view. When it's called, the client is obviously not logged on yet. I figure Windows authentication serves best, or perhaps just certificate-based security (which in fact I should use for the authenticated services as well) if the services are hosted on a machine that is not in the same domain as the web application. That's not the point here, though. Using multiple TCP bindings is what's giving me trouble. I tried setting it up like this in my client configuration:

    <bindings>
      <netTcpBinding>
        <binding>
          <security mode="TransportWithMessageCredential">
            <message clientCredentialType="UserName"/>
          </security>
        </binding>
        <binding name="public">
          <security mode="Transport">
            <message clientCredentialType="Windows"/>
          </security>
        </binding>
      </netTcpBinding>
    </bindings>
    <client>
      <endpoint contract="Server.IService1" binding="netTcpBinding"
                address="net.tcp://localhost:8081/Service1.svc"/>
      <endpoint contract="Server.IService2" binding="netTcpBinding"
                address="net.tcp://localhost:8081/Service2.svc"/>
    </client>

    The server configuration is this:

    <bindings>
      <netTcpBinding>
        <binding portSharingEnabled="true">
          <security mode="TransportWithMessageCredential">
            <message clientCredentialType="UserName"/>
          </security>
        </binding>
        <binding name="public">
          <security mode="Transport">
            <message clientCredentialType="Windows"/>
          </security>
        </binding>
      </netTcpBinding>
    </bindings>
    <services>
      <service name="Service1">
        <endpoint contract="Server.IService1, Library" binding="netTcpBinding" address=""/>
      </service>
      <service name="Service2">
        <endpoint contract="Server.IService2, Library" binding="netTcpBinding" address=""/>
      </service>
    </services>
    <serviceHostingEnvironment>
      <serviceActivations>
        <add relativeAddress="Service1.svc" service="Server.Service1"/>
        <add relativeAddress="Service2.svc" service="Server.Service2"/>
      </serviceActivations>
    </serviceHostingEnvironment>

    The thing is that both bindings don't seem to want to live together in my host. When I remove either of them, all's fine, but together they produce the following exception on the client: "The requested upgrade is not supported by 'net.tcp://localhost:8081/Service2.svc'. This could be due to mismatched bindings (for example security enabled on the client and not on the server)." In the server trace log, I find the following exception: "Protocol Type application/negotiate was sent to a service that does not support that type of upgrade." Am I looking in the right direction, or is there a better way to solve this?
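    One detail that stands out in this version of the configuration (hedged, since the attribute may simply have been lost when the question was reposted): neither the Service2 endpoint on the server nor its client counterpart references the named "public" binding, so both endpoints quietly run on the default, unnamed netTcpBinding configuration and the Transport-mode binding is never applied. A sketch of the missing attribute on both sides:

    <!-- Server side -->
    <service name="Service2">
      <endpoint contract="Server.IService2, Library" binding="netTcpBinding"
                bindingConfiguration="public" address=""/>
    </service>

    <!-- Client side -->
    <endpoint contract="Server.IService2" binding="netTcpBinding"
              bindingConfiguration="public" address="net.tcp://localhost:8081/Service2.svc"/>

    With that in place, this becomes the same situation as the earlier entry: two transport-security modes behind the single 8081 listener, which is where the "requested upgrade is not supported" error tends to come from.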

    Read the article
