Search Results

Search found 16331 results on 654 pages for 'option'.

  • Testing iPhone code on Windows

    - by steve
    I'm picking up a new Dell laptop. My primary machine is an iMac. I will most likely have to write some iPhone projects for someone in the future. While I do most of my work on the iMac, maybe 25% of the time I work from my laptop. Can anyone tell me whether, if I use Objective-C and the iPhone SDKs, there is a generic Objective-C compiler I can use to see if my code would in theory work? I'm not looking to do a Hackintosh or anything like that. My other option is to just get a discounted Mac mini (I think this is most likely) as well as the Dell. Thanks for any advice.
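    A hedged aside, not from the original thread: there is no iPhone SDK for Windows, but plain (non-UIKit) Objective-C can be compiled there with the GNUstep toolchain, which is sometimes enough to syntax-check model-layer code. A minimal sketch, assuming a GNUstep environment is installed on the Windows machine (the compile command is GNUstep's convention; verify against your install):

      // hello.m - Foundation only; UIKit does not exist outside the iPhone SDK.
      // Compile from a GNUstep shell (assumed invocation):
      //   gcc `gnustep-config --objc-flags` hello.m -o hello `gnustep-config --base-libs`
      #import <Foundation/Foundation.h>

      int main(void)
      {
          NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
          // exercises NSString/NSLog to verify Foundation compiles and links
          NSLog(@"%@", [NSString stringWithFormat:@"Hello from %@", @"GNUstep"]);
          [pool drain];
          return 0;
      }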

  • How can I "undelete" a set of documents in CouchDB?

    - by radicand
    I have a large set of documents in a CouchDB database that were just accidentally bulk-deleted using _deleted:true. I also have a backup for this set of data that includes their last known good revision and metadata. I need to maintain the same _id, so a simple restore under a new _id is not an option. Compaction has not been run, and I can still access any of these documents via the &rev= URL parameter, as well as their attachments (which are needed). What I need to do is "restore" these documents to the revision I have on file. Surprisingly, I have come up empty with any queries on how to achieve this. Tips or hacks appreciated.
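    One hedged approach, not from the thread: as long as compaction hasn't run, a deleted document can be "undeleted" by fetching its body at the last good revision and writing it back as a new revision on top of the deletion tombstone, keeping the same _id. A sketch using curl, where the host, database, document and revision values are hypothetical placeholders:

      # 1. Grab the body at the last known good revision (readable until compaction runs).
      curl -s 'http://localhost:5984/mydb/doc42?rev=2-abc123' > doc42.json

      # 2. Find the current (deleted) leaf revision, e.g. via open_revs.
      curl -s -H 'Accept: application/json' \
           'http://localhost:5984/mydb/doc42?open_revs=all'

      # 3. Edit doc42.json so its "_rev" is that tombstone revision (say 3-deadbeef),
      #    then write it back: CouchDB records a new live revision with the same _id.
      curl -s -X PUT 'http://localhost:5984/mydb/doc42' \
           -H 'Content-Type: application/json' -d @doc42.json

    Attachments fetched at the good revision can be re-attached the same way afterwards.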

  • JSTL request attribute in c:if

    - by JNPW
    I set a request attribute in my action class as follows:

      request.setAttribute("xFg", Boolean.TRUE);

    I want to retrieve this in my JSP using JSTL tags. I tried this:

      <c:if test="${requestScope.xFg}">
          <c:set var="showlist" value="true" />
      </c:if>

    But c:if didn't work; I mean it didn't get into c:set. I tried to print the same value using c:out, but nothing got displayed. What is wrong, and how should I test a request attribute value? I haven't used requestScope so far. Is requestScope the way to get the request value? Please help. Thanks in advance.
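    A hedged guess at the usual culprit (not confirmed by the thread): when c:out prints nothing at all, the JSTL core taglib is often missing or declared with the wrong URI, so the tags are never evaluated and the browser silently drops the unknown markup. A minimal JSP sketch with the declaration in place:

      <%-- JSTL 1.1+ core URI; the older http://java.sun.com/jstl/core URI
           can leave ${} expressions unevaluated on newer containers. --%>
      <%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>

      <c:out value="${requestScope.xFg}" />   <%-- should print "true" --%>

      <c:if test="${requestScope.xFg}">       <%-- a Boolean works directly in test --%>
          <c:set var="showlist" value="true" />
      </c:if>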

  • Removing round corners from a button in jQuery Mobile

    - by user1435731
    I have the following button markup in a single/multi-page jQuery Mobile page template:

      <a href="#" data-role="button" data-icon="arrow-r" data-iconpos="right">About Us</a>

    I need to disable the round corners of this button using the button option as given in the jQuery Mobile docs. I have tried

      $('a').buttonMarkup({ corners: "false" })

    in every event, such as pagebeforecreate, pageinit, pagecreate and mobileinit. I never got it working and have been struggling with it for quite a long time. I don't want to use the data attribute data-corners="false" for now. Please suggest any ideas.
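    One detail worth flagging (an observation on the snippet above, not from the thread): corners: "false" passes the string "false", which is truthy in JavaScript, so the option never actually turns corners off; buttonMarkup expects a boolean. A sketch, where the page id #home is a hypothetical placeholder:

      // run after jQuery Mobile has enhanced the page's widgets
      $(document).on('pageinit', '#home', function () {
          // boolean false, not the string "false" (which is truthy)
          $(this).find('a[data-role="button"]').buttonMarkup({ corners: false });
      });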

  • In Yii, how to access another table's fields

    - by user1636115
    I am creating a project in the Yii framework. I have two tables:

      Qbquestion          QbquestionOption
      - questionId        - optionId
      - question          - questionId
      - userId            - option
      - isPublished       - isAnswer

    In the QbquestionOption controller I want to access the Qbquestion table's fields. I had written the query as

      $Question = Qbquestion::model()->findAllByAttributes(array("questionId" => $number));

    where $number is some random number. When I use its fields, as in $Question->isPublished, it gives the error "trying to get access to property of non-object". The statement var_dump($Question) shows all records and all values of the Qbquestion table. So how can I access the records? Please help me.
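    A hedged explanation of the error (consistent with the var_dump output, though not stated in the thread): in Yii 1.x, findAllByAttributes() returns an array of models, so property access on the array itself fails; either iterate it or fetch a single model. A sketch:

      // findAll* returns an array of CActiveRecord models
      $questions = Qbquestion::model()->findAllByAttributes(array('questionId' => $number));
      foreach ($questions as $question) {
          echo $question->isPublished;   // property access works on each model
      }

      // or, when at most one row is expected, fetch a single model (or null)
      $question = Qbquestion::model()->findByAttributes(array('questionId' => $number));
      if ($question !== null) {
          echo $question->isPublished;
      }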

  • OpenVPN plugin openvpn-auth-ldap does not bind to Active Directory

    - by Selivanov Pavel
    I'm trying to configure OpenVPN with the openvpn-auth-ldap plugin to authorize users via Active Directory LDAP. When I use the same server config without the plugin option and add a client config with a generated client key and cert, the connection is successful, so the problem is in the plugin.

    server.conf:

      plugin /usr/lib/openvpn/openvpn-auth-ldap.so "/etc/openvpn-test/openvpn-auth-ldap.conf"
      port 1194
      proto tcp
      dev tun
      keepalive 10 60
      topology subnet
      server 10.0.2.0 255.255.255.0
      tls-server
      ca ca.crt
      dh dh1024.pem
      cert server.crt
      key server.key
      #crl-verify crl.pem
      persist-key
      persist-tun
      user nobody
      group nogroup
      verb 3
      mute 20

    openvpn-auth-ldap.conf:

      <LDAP>
          URL             ldap://dc1.domain:389
          TLSEnable       no
          BindDN          cn=bot_auth,cn=Users,dc=domain
          Password        bot_auth
          Timeout         15
          FollowReferrals yes
      </LDAP>
      <Authorization>
          BaseDN          "cn=Users,dc=domain"
          SearchFilter    "(sAMAccountName=%u)"
          RequireGroup    false
          # <Group>
          #     BaseDN          "ou=groups,dc=mycompany,dc=local"
          #     SearchFilter    "(|(cn=developers)(cn=artists))"
          #     MemberAttribute uniqueMember
          # </Group>
      </Authorization>

    The top-level domain in AD is used for historical reasons. An analogous configuration works for Apache 2.2 with mod_authnz_ldap. The user and password are correct.

    client.conf:

      remote server_name
      port 1194
      proto tcp
      client
      pull
      remote-cert-tls server
      dev tun
      resolv-retry infinite
      nobind
      ca ca.crt
      ; with keys - works fine
      #cert test.crt
      #key test.key
      ; without keys - by password
      auth-user-pass
      persist-tun
      verb 3
      mute 20

    In the server log there is the line

      PLUGIN_INIT: POST /usr/lib/openvpn/openvpn-auth-ldap.so '[/usr/lib/openvpn/openvpn-auth-ldap.so] [/etc/openvpn-test/openvpn-auth-ldap.conf]'

    which indicates that the plugin failed. I can telnet to dc1.domain:389, so this is not a network/firewall problem. Later the server says "TLS Error: TLS object -> incoming plaintext read error" and "TLS handshake failed" - without the plugin it tries to do the usual key authentication.

    Server log:

      Tue Nov 22 03:06:20 2011 OpenVPN 2.1.3 i486-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Oct 21 2010
      Tue Nov 22 03:06:20 2011 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
      Tue Nov 22 03:06:20 2011 PLUGIN_INIT: POST /usr/lib/openvpn/openvpn-auth-ldap.so '[/usr/lib/openvpn/openvpn-auth-ldap.so] [/etc/openvpn-test/openvpn-auth-ldap.conf]' intercepted=PLUGIN_AUTH_USER_PASS_VERIFY|PLUGIN_CLIENT_CONNECT|PLUGIN_CLIENT_DISCONNECT
      Tue Nov 22 03:06:20 2011 Diffie-Hellman initialized with 1024 bit key
      Tue Nov 22 03:06:20 2011 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted>
      Tue Nov 22 03:06:20 2011 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
      Tue Nov 22 03:06:20 2011 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
      Tue Nov 22 03:06:20 2011 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
      Tue Nov 22 03:06:20 2011 TLS-Auth MTU parms [ L:1543 D:168 EF:68 EB:0 ET:0 EL:0 ]
      Tue Nov 22 03:06:20 2011 Socket Buffers: R=[87380->131072] S=[16384->131072]
      Tue Nov 22 03:06:20 2011 TUN/TAP device tun1 opened
      Tue Nov 22 03:06:20 2011 TUN/TAP TX queue length set to 100
      Tue Nov 22 03:06:20 2011 /sbin/ifconfig tun1 10.0.2.1 netmask 255.255.255.0 mtu 1500 broadcast 10.0.2.255
      Tue Nov 22 03:06:20 2011 Data Channel MTU parms [ L:1543 D:1450 EF:43 EB:4 ET:0 EL:0 ]
      Tue Nov 22 03:06:20 2011 GID set to nogroup
      Tue Nov 22 03:06:20 2011 UID set to nobody
      Tue Nov 22 03:06:20 2011 Listening for incoming TCP connection on [undef]
      Tue Nov 22 03:06:20 2011 TCPv4_SERVER link local (bound): [undef]
      Tue Nov 22 03:06:20 2011 TCPv4_SERVER link remote: [undef]
      Tue Nov 22 03:06:20 2011 MULTI: multi_init called, r=256 v=256
      Tue Nov 22 03:06:20 2011 IFCONFIG POOL: base=10.0.2.2 size=252
      Tue Nov 22 03:06:20 2011 MULTI: TCP INIT maxclients=1024 maxevents=1028
      Tue Nov 22 03:06:20 2011 Initialization Sequence Completed
      Tue Nov 22 03:07:10 2011 MULTI: multi_create_instance called
      Tue Nov 22 03:07:10 2011 Re-using SSL/TLS context
      Tue Nov 22 03:07:10 2011 Control Channel MTU parms [ L:1543 D:168 EF:68 EB:0 ET:0 EL:0 ]
      Tue Nov 22 03:07:10 2011 Data Channel MTU parms [ L:1543 D:1450 EF:43 EB:4 ET:0 EL:0 ]
      Tue Nov 22 03:07:10 2011 Local Options hash (VER=V4): 'c413e92e'
      Tue Nov 22 03:07:10 2011 Expected Remote Options hash (VER=V4): 'd8421bb0'
      Tue Nov 22 03:07:10 2011 TCP connection established with [AF_INET]10.0.0.9:47808
      Tue Nov 22 03:07:10 2011 TCPv4_SERVER link local: [undef]
      Tue Nov 22 03:07:10 2011 TCPv4_SERVER link remote: [AF_INET]10.0.0.9:47808
      Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS: Initial packet from [AF_INET]10.0.0.9:47808, sid=a2cd4052 84b47108
      Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS_ERROR: BIO read tls_read_plaintext error: error:140890C7:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:peer did not return a certificate
      Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS Error: TLS object -> incoming plaintext read error
      Tue Nov 22 03:07:11 2011 10.0.0.9:47808 TLS Error: TLS handshake failed
      Tue Nov 22 03:07:11 2011 10.0.0.9:47808 Fatal TLS error (check_tls_errors_co), restarting
      Tue Nov 22 03:07:11 2011 10.0.0.9:47808 SIGUSR1[soft,tls-error] received, client-instance restarting
      Tue Nov 22 03:07:11 2011 TCP/UDP: Closing socket

    Client log:

      Tue Nov 22 03:06:18 2011 OpenVPN 2.1.3 x86_64-pc-linux-gnu [SSL] [LZO2] [EPOLL] [PKCS11] [MH] [PF_INET6] [eurephia] built on Oct 22 2010
      Enter Auth Username:user
      Enter Auth Password:
      Tue Nov 22 03:06:25 2011 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables
      Tue Nov 22 03:06:25 2011 Control Channel Authentication: using 'ta.key' as a OpenVPN static key file
      Tue Nov 22 03:06:25 2011 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
      Tue Nov 22 03:06:25 2011 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
      Tue Nov 22 03:06:25 2011 Control Channel MTU parms [ L:1543 D:168 EF:68 EB:0 ET:0 EL:0 ]
      Tue Nov 22 03:06:25 2011 Socket Buffers: R=[87380->131072] S=[16384->131072]
      Tue Nov 22 03:06:25 2011 Data Channel MTU parms [ L:1543 D:1450 EF:43 EB:4 ET:0 EL:0 ]
      Tue Nov 22 03:06:25 2011 Local Options hash (VER=V4): 'd8421bb0'
      Tue Nov 22 03:06:25 2011 Expected Remote Options hash (VER=V4): 'c413e92e'
      Tue Nov 22 03:06:25 2011 Attempting to establish TCP connection with [AF_INET]10.0.0.2:1194 [nonblock]
      Tue Nov 22 03:06:26 2011 TCP connection established with [AF_INET]10.0.0.2:1194
      Tue Nov 22 03:06:26 2011 TCPv4_CLIENT link local: [undef]
      Tue Nov 22 03:06:26 2011 TCPv4_CLIENT link remote: [AF_INET]10.0.0.2:1194
      Tue Nov 22 03:06:26 2011 TLS: Initial packet from [AF_INET]10.0.0.2:1194, sid=7a3c2a0f bd35bca7
      Tue Nov 22 03:06:26 2011 WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
      Tue Nov 22 03:06:26 2011 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funston/CN=Fort-Funston_CA/[email protected]
      Tue Nov 22 03:06:26 2011 Validating certificate key usage
      Tue Nov 22 03:06:26 2011 ++ Certificate has key usage 00a0, expects 00a0
      Tue Nov 22 03:06:26 2011 VERIFY KU OK
      Tue Nov 22 03:06:26 2011 Validating certificate extended key usage
      Tue Nov 22 03:06:26 2011 ++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
      Tue Nov 22 03:06:26 2011 VERIFY EKU OK
      Tue Nov 22 03:06:26 2011 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=Fort-Funston/CN=server/[email protected]
      Tue Nov 22 03:06:26 2011 Connection reset, restarting [0]
      Tue Nov 22 03:06:26 2011 TCP/UDP: Closing socket
      Tue Nov 22 03:06:26 2011 SIGUSR1[soft,connection-reset] received, process restarting
      Tue Nov 22 03:06:26 2011 Restart pause, 5 second(s)
      ^CTue Nov 22 03:06:27 2011 SIGINT[hard,init_instance] received, process exiting

    Does anybody know how to get openvpn-auth-ldap working?
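    A hedged note, not from the original thread: the server log fails with "SSL3_GET_CLIENT_CERTIFICATE:peer did not return a certificate", i.e. the TLS layer still demands a client certificate before the LDAP plugin is ever consulted, and this password-only client cannot supply one. On OpenVPN 2.1, password-only clients typically also need these server-side options (names per the 2.1 man page; later versions replace them with "verify-client-cert none"):

      # server.conf - accept clients that authenticate with username/password only
      client-cert-not-required
      # use the authenticated username as the client's common name
      username-as-common-name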

  • MySQL died during the night on an Ubuntu 12.04.1 server

    - by Olivier
    I can't explain why, but somehow during the night one of my MySQL servers, running on an Ubuntu 12.04.1 box, broke. The service is running, but I can't log in anymore (to SQL); the previous password is not working anymore. It does not look like the server has been compromised (nothing in /var/auth.log). It looks like some automatic security upgrade (the server is configured to perform those) occurred and broke something. The MySQL server restarted a couple of times in the logs around the time the errors started to happen (I get email when a CRON task fails). In the logs it complains about an unset root password (I have cron jobs running all day using SQL, so the password was set and working for months). Anyway, I can't log in without a password either! Do you have any idea of what could have happened? How do I get my databases back? This line looks strange:

      Nov 6 06:36:12 ns398758 mysqld_safe[6676]: ERROR: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ALTER TABLE user ADD column Show_view_priv enum('N','Y') CHARACTER SET utf8 NOT ' at line 1

    Here is the full log below:

      Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: To do so, start the server, then issue the following commands:
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: /usr/bin/mysqladmin -u root password 'new-password'
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: /usr/bin/mysqladmin -u root -h ns398758.ovh.net password 'new-password'
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Alternatively you can run:
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: /usr/bin/mysql_secure_installation
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: which will also give you the option of removing the test
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: databases and anonymous user created by default. This is
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: strongly recommended for production servers.
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: See the manual for more instructions.
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]: Please report any problems with the /usr/scripts/mysqlbug script!
      Nov 6 06:36:06 ns398758 mysqld_safe[6586]:
      Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 [Note] Plugin 'FEDERATED' is disabled.
      Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: The InnoDB memory heap is disabled
      Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Compressed tables use zlib 1.2.3.4
      Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Initializing buffer pool, size = 128.0M
      Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: Completed initialization of buffer pool
      Nov 6 06:36:06 ns398758 mysqld_safe[6632]: 121106 6:36:06 InnoDB: highest supported file format is Barracuda.
      Nov 6 06:36:07 ns398758 mysqld_safe[6632]: 121106 6:36:07 InnoDB: Waiting for the background threads to start
      Nov 6 06:36:08 ns398758 mysqld_safe[6632]: 121106 6:36:08 InnoDB: 1.1.8 started; log sequence number 29276459701
      Nov 6 06:36:08 ns398758 mysqld_safe[6632]: 121106 6:36:08 InnoDB: Starting shutdown...
      Nov 6 06:36:09 ns398758 mysqld_safe[6632]: 121106 6:36:09 InnoDB: Shutdown completed; log sequence number 29276459701
      Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 [Note] Plugin 'FEDERATED' is disabled.
      Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: The InnoDB memory heap is disabled
      Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Compressed tables use zlib 1.2.3.4
      Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Initializing buffer pool, size = 128.0M
      Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Completed initialization of buffer pool
      Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: highest supported file format is Barracuda.
      Nov 6 06:36:11 ns398758 mysqld_safe[6676]: 121106 6:36:11 InnoDB: Waiting for the background threads to start
      Nov 6 06:36:12 ns398758 mysqld_safe[6676]: 121106 6:36:12 InnoDB: 1.1.8 started; log sequence number 29276459701
      Nov 6 06:36:12 ns398758 mysqld_safe[6676]: ERROR: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'ALTER TABLE user ADD column Show_view_priv enum('N','Y') CHARACTER SET utf8 NOT ' at line 1
      Nov 6 06:36:12 ns398758 mysqld_safe[6676]: 121106 6:36:12 [ERROR] Aborting
      Nov 6 06:36:12 ns398758 mysqld_safe[6676]:
      Nov 6 06:36:12 ns398758 mysqld_safe[6676]: 121106 6:36:12 InnoDB: Starting shutdown...
      Nov 6 06:36:13 ns398758 mysqld_safe[6676]: 121106 6:36:13 InnoDB: Shutdown completed; log sequence number 29276459701
      Nov 6 06:36:13 ns398758 mysqld_safe[6676]: 121106 6:36:13 [Note] /usr/sbin/mysqld: Shutdown complete
      Nov 6 06:36:13 ns398758 mysqld_safe[6676]:
      Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 [Note] Plugin 'FEDERATED' is disabled.
      Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: The InnoDB memory heap is disabled
      Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Compressed tables use zlib 1.2.3.4
      Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Initializing buffer pool, size = 128.0M
      Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Completed initialization of buffer pool
      Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: highest supported file format is Barracuda.
      Nov 6 06:36:13 ns398758 mysqld_safe[6697]: 121106 6:36:13 InnoDB: Waiting for the background threads to start
      Nov 6 06:36:14 ns398758 mysqld_safe[6697]: 121106 6:36:14 InnoDB: 1.1.8 started; log sequence number 29276459701
      Nov 6 06:36:14 ns398758 mysqld_safe[6697]: 121106 6:36:14 InnoDB: Starting shutdown...
      Nov 6 06:36:15 ns398758 mysqld_safe[6697]: 121106 6:36:15 InnoDB: Shutdown completed; log sequence number 29276459701
      Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 [Note] Plugin 'FEDERATED' is disabled.
      Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: The InnoDB memory heap is disabled
      Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Compressed tables use zlib 1.2.3.4
      Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Initializing buffer pool, size = 128.0M
      Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Completed initialization of buffer pool
      Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: highest supported file format is Barracuda.
      Nov 6 06:36:15 ns398758 mysqld_safe[6718]: 121106 6:36:15 InnoDB: Waiting for the background threads to start
      Nov 6 06:36:16 ns398758 mysqld_safe[6718]: 121106 6:36:16 InnoDB: 1.1.8 started; log sequence number 29276459701
      Nov 6 06:36:16 ns398758 mysqld_safe[6718]: ERROR: 1050 Table 'plugin' already exists
      Nov 6 06:36:16 ns398758 mysqld_safe[6718]: 121106 6:36:16 [ERROR] Aborting
      Nov 6 06:36:16 ns398758 mysqld_safe[6718]:
      Nov 6 06:36:16 ns398758 mysqld_safe[6718]: 121106 6:36:16 InnoDB: Starting shutdown...
      Nov 6 06:36:17 ns398758 mysqld_safe[6718]: 121106 6:36:17 InnoDB: Shutdown completed; log sequence number 29276459701
      Nov 6 06:36:17 ns398758 mysqld_safe[6718]: 121106 6:36:17 [Note] /usr/sbin/mysqld: Shutdown complete
      Nov 6 06:36:17 ns398758 mysqld_safe[6718]:
      Nov 6 06:36:19 ns398758 /etc/mysql/debian-start[6816]: Upgrading MySQL tables if necessary.
      Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
      Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Looking for 'mysql' as: /usr/bin/mysql
      Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
      Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
      Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock'
      Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: col_digitas.acos OK
      Nov 6 06:36:20 ns398758 /etc/mysql/debian-start[6819]: col_digitas.aros OK
      ...
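    Two hedged observations, not from the thread: the failing "ALTER TABLE user ADD column Show_view_priv ..." statement and "Table 'plugin' already exists" look like mysql_upgrade choking on system tables left by an older version after the unattended upgrade, and root access can usually be regained by starting mysqld with grant checking disabled. A sketch of the standard recovery steps on stock Ubuntu (adapt paths and the placeholder password as needed):

      # stop the service, then start mysqld without privilege checks
      sudo service mysql stop
      sudo mysqld_safe --skip-grant-tables --skip-networking &

      # reset the root password directly in the grant tables (MySQL 5.5 syntax)
      mysql -u root <<'SQL'
      UPDATE mysql.user SET Password = PASSWORD('new-password') WHERE User = 'root';
      FLUSH PRIVILEGES;
      SQL

      # restart normally, then bring the system tables up to date
      sudo service mysql restart
      mysql_upgrade -u root -p --force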

  • XenServer 5.5 U2 a bit unstable with an unstable W2003 VM

    - by twistedbrain
    In the last week I had to reboot the host system twice, the second time by means of the power button. The system is a Dell PE 6950 (4 dual-core Opterons, 2.8 GHz, 16 GB RAM, 900 GB of disk in a RAID 10 array composed of four 450 GB 15000 rpm disks) with XenServer 5.5 U2. We're still setting it up, and right now a 32-bit Windows Server 2003 VM is working in production with 2 GB RAM, 3 vCPUs and about 200 GB of disk in 4 partitions (12 GB boot, 20 GB programs, 80 GB user data, 80 GB other data).

    The first time, I was compressing many Windows 2003 folders (some tens of GB, using the W2003 compressed-folder option) from a remote Windows console, and some hours earlier my colleague had installed the Backup Exec agent, which was already installed and required a reboot (which was still pending). The console stopped responding; it was no longer possible to connect via remote console or from the console in XenCenter. It was still possible to reach the VM's shared folders and the programs on it (2 DBs and a GIS program) over the network, but the print server no longer worked and I couldn't trigger a remote reboot from the other domain-controller hosts. I couldn't stop the virtual machine either from XenCenter or from the host's command line, even when forcing the reboot. I had to reboot the host server.

    Yesterday it was worse. I installed a template of another VM, CentOS 5.3, and put the DVD in the host's drive. After the boot, before the install, I ran the media check on the DVD, and the W2003 VM began to respond slowly (I was connected through an administrative remote console). The task manager showed only mid to low load; that is, only the first of the 3 vCPUs was loaded (about 70%) while the other two were around 20%, and the disk I/O was not that heavy either. The users were not happy because they could no longer use the MS Word docs on the server. I immediately stopped the DVD check (to do that I had to force the CentOS 5.3 VM to stop); some users could then use their docs again, but others still had problems, so I decided to reboot the VM. It wouldn't stop, either from XenCenter or from the command line, even when forcing the reboot. Then I tried to reboot the host, but that didn't work either, neither from XenCenter nor from the host prompt (shutdown -r now as root: it said it was shutting down, but then didn't). So I had to power the server off with the power button (first I tried some magic SysRq requests, but I saw that this Xen kernel isn't compiled with that option enabled).

    What could you suggest about my problem? What can I look at, search for, and check? In the W2003 VM logs there are no errors or warnings that explain what happened.

    Some more exciting, amusing and inspiring words of poetry (the facts happened around 11 am):

      # egrep -i 'err|warning' xensource.log
      [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: watching xenstore paths: [ /xapi/0/frontend/vbd/51712/hotplug; /local/domain/0/backend/vbd/0/51712/tapdisk-error ] with timeout 1200.000000 seconds
      [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: fired on /local/domain/0/backend/vbd/0/51712/tapdisk-error
      [20100219 10:32:14.314|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: watching xenstore paths: [ /local/domain/0/backend/vbd/0/51712/shutdown-done; /local/domain/0/error/device/vbd/51712/error ] with timeout 1200.000000 seconds
      [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/0
      [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/device/vbd/51712
      [20100219 10:32:14.338|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: fired on /local/domain/0/error/device/vbd/51712/error
      [20100219 10:53:48.903|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 14 guest session
      [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14
      [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51744
      [20100219 10:53:52.085|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14
      [20100219 10:53:52.086|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51728
      [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14
      [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51712
      [20100219 10:53:52.127|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vbd/14
      [20100219 10:53:52.128|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51760
      [20100219 10:53:52.496|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths
      [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14
      [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/0
      [20100219 10:53:53.385|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths
      [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14
      [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/1
      [20100219 10:53:53.389| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from /xapi/14/hotplug/vif/0
      [20100219 10:53:53.391| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from /xapi/14/hotplug/vif/1
      [20100219 11:20:49.766|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 11 guest session
      [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11
      [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/832
      [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11
      [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/768
      [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/11
      [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/5696
      [20100219 11:20:50.753|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths
      [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vif/11
      [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vif/1
      [20100219 11:20:50.757| warn|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|hotplug] Warning, deleting 'vif' entry from /xapi/11/hotplug/vif/1
      [20100219 11:28:13.803|debug|culo|6610 inet-RPC|Connection to VM console R:e9f8b76e8975|console] error: INTERNAL_ERROR: [ Unix.Unix_error(63, "connect", "") ]
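    For the narrower problem of a VM that refuses to stop, a hedged sketch of the usual xe CLI escalation on XenServer (the debug-script path follows Citrix support articles for 5.x; verify before use, and <uuid>/<domid> are placeholders):

      # find the VM and its power state
      xe vm-list name-label="W2003" params=uuid,power-state

      # clean shutdown first, then forced
      xe vm-shutdown uuid=<uuid>
      xe vm-shutdown uuid=<uuid> force=true

      # last resort from dom0: mark the VM halted, then destroy the stuck domain
      xe vm-reset-powerstate uuid=<uuid> force=true
      list_domains                                  # note the stuck domain id
      /opt/xensource/debug/destroy_domain -domid <domid>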

  • Sendmail to local domain ignoring MX records (part 2)

    - by FractalizeR
    Hello. I have exactly the problem described in this post: http://serverfault.com/questions/25068/sendmail-to-local-domain-ignoring-mx-records. I am also using an email provider similar to "GMail For Your Domain" (which stores and manages your mail). I send mail from my server directly, but receiving mail is handled by Yandex (the email provider). Since the server hosts a forum, I prefer to send mail directly from it, because going through the mail provider can slow things down. Also, when I send 300,000 emails to my subscribers, the email provider would surely block me, thinking I send spam. My DNS zone now is:

      ;
      ; GSMFORUM.RU
      ;
      $TTL 1H
      gsmforum.ru.      SOA     ns1.hc.ru. support.hc.ru. (
                                2009122268 ; Serial
                                1H         ; Refresh
                                30M        ; Retry
                                1W         ; Expire
                                1H )       ; Minimum
      gsmforum.ru.      NS      ns1.hc.ru.
      gsmforum.ru.      NS      ns2.hc.ru.
      @                 A       79.174.68.223
      *.gsmforum.ru.    CNAME   @
      ns1               A       79.174.68.223
      ns2               A       79.174.68.224
      @                 MX 10   mx.yandex.ru.
      mail              CNAME   domain.mail.yandex.net.
      yamail-xxxxxxxxx  CNAME   mail.yandex.ru.

    The server hostname is server.gsmforum.ru. Maybe this is the cause? Can someone explain the reason for this behavior (the rules that make sendmail consider a domain to be local)? Can I simply change

      *.gsmforum.ru.    CNAME   @

    into

      *.gsmforum.ru.    A       79.174.68.224

    to solve this problem?

      [root@server ~]# cat /etc/mail/local-host-names
      localhost
      localhost.localdomain

    This server hosts gsmforum.ru, so I cannot put it into another domain as David Mackintosh suggests. Putting the domain in mailertable doesn't solve the problem either; sendmail -bt still shows that the address is local. DontProbeInterfaces is also set to true in the sendmail config. The M4 file follows:

      divert(-1)dnl
      dnl #
      dnl # This is the sendmail macro config file for m4. If you make changes to
      dnl # /etc/mail/sendmail.mc, you will need to regenerate the
      dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is
      dnl # installed and then performing a
      dnl #
      dnl #     make -C /etc/mail
      dnl #
      include(`/usr/share/sendmail-cf/m4/cf.m4')dnl
      VERSIONID(`setup for linux')dnl
      OSTYPE(`linux')dnl
      dnl #
      dnl # Do not advertize sendmail version.
      dnl #
      dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl
      dnl #
      dnl # default logging level is 9, you might want to set it higher to
      dnl # debug the configuration
      dnl #
      dnl define(`confLOG_LEVEL', `9')dnl
      dnl #
      dnl # Uncomment and edit the following line if your outgoing mail needs to
      dnl # be sent out through an external mail server:
      dnl #
      dnl define(`SMART_HOST', `smtp.your.provider')dnl
      dnl #
      define(`confDEF_USER_ID', ``8:12'')dnl
      dnl define(`confAUTO_REBUILD')dnl
      define(`confTO_CONNECT', `1m')dnl
      define(`confTRY_NULL_MX_LIST', `True')dnl
      define(`confDONT_PROBE_INTERFACES',`True')
      define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl
      define(`ALIAS_FILE', `/etc/aliases')dnl
      define(`STATUS_FILE', `/var/log/mail/statistics')dnl
      define(`UUCP_MAILER_MAX', `2000000')dnl
      define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl
      define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
      define(`confAUTH_OPTIONS', `A')dnl
      dnl #
      dnl # The following allows relaying if the user authenticates, and disallows
      dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links
      dnl #
      dnl define(`confAUTH_OPTIONS', `A p')dnl
      dnl #
      dnl # PLAIN is the preferred plaintext authentication method and used by
      dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do
      dnl # use LOGIN. Other mechanisms should be used if the connection is not
      dnl # guaranteed secure.
      dnl # Please remember that saslauthd needs to be running for AUTH.
      dnl #
      dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
      dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
      dnl #
      dnl # Rudimentary information on creating certificates for sendmail TLS:
      dnl #     cd /usr/share/ssl/certs; make sendmail.pem
      dnl # Complete usage:
      dnl #     make -C /usr/share/ssl/certs usage
      dnl #
      dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl
      dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl
      dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl
      dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl
      dnl #
      dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's
      dnl # slapd, which requires the file to be readble by group ldap
      dnl #
      dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl
      dnl #
      dnl define(`confTO_QUEUEWARN', `4h')dnl
      dnl define(`confTO_QUEUERETURN', `5d')dnl
      dnl define(`confQUEUE_LA', `12')dnl
      dnl define(`confREFUSE_LA', `18')dnl
      define(`confTO_IDENT', `0')dnl
      dnl FEATURE(delay_checks)dnl
      FEATURE(`no_default_msa', `dnl')dnl
      FEATURE(`smrsh', `/usr/sbin/smrsh')dnl
      FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl
      FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
      FEATURE(redirect)dnl
      FEATURE(always_add_domain)dnl
      FEATURE(use_cw_file)dnl
      FEATURE(use_ct_file)dnl
      dnl #
      dnl # The following limits the number of processes sendmail can fork to accept
      dnl # incoming messages or process its message queues to 20.) sendmail refuses
      dnl # to accept connections once it has reached its quota of child processes.
      dnl #
      dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl
      dnl #
      dnl # Limits the number of new connections per second. This caps the overhead
      dnl # incurred due to forking new sendmail processes. May be useful against
      dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address
      dnl # limit would be useful but is not available as an option at this writing.)
      dnl #
      dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl
      dnl #
      dnl # The -t option will retry delivery if e.g. the user runs over his quota.
      dnl #
      FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl
      FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl
      FEATURE(`blacklist_recipients')dnl
      EXPOSED_USER(`root')dnl
      dnl #
      dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment
      dnl # the following 2 definitions and activate below in the MAILER section the
      dnl # cyrusv2 mailer.
      dnl #
      dnl define(`confLOCAL_MAILER', `cyrusv2')dnl
      dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl
      dnl #
      dnl # The following causes sendmail to only listen on the IPv4 loopback address
      dnl # 127.0.0.1 and not on any other network devices. Remove the loopback
      dnl # address restriction to accept email from the internet or intranet.
      dnl #
      DAEMON_OPTIONS(`Name=MTA,Port=smtp')
      dnl #
      dnl # The following causes sendmail to additionally listen to port 587 for
      dnl # mail from MUAs that authenticate. Roaming users who can't reach their
      dnl # preferred sendmail daemon due to port 25 being blocked or redirected find
      dnl # this useful.
      dnl #
      dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl
      dnl #
      dnl # The following causes sendmail to additionally listen to port 465, but
      dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed
      dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't
      dnl # do STARTTLS on ports other than 25. Mozilla Mail can ONLY use STARTTLS
      dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps
      dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1.
      dnl #
      dnl # For this to work your OpenSSL certificates must be configured.
      dnl #
      dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl
      dnl #
      dnl # The following causes sendmail to additionally listen on the IPv6 loopback
      dnl # device. Remove the loopback address restriction listen to the network.
      dnl #
      dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl
      dnl #
      dnl # enable both ipv6 and ipv4 in sendmail:
      dnl #
      dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6')
      dnl #
      dnl # We strongly recommend not accepting unresolvable domains if you want to
      dnl # protect yourself from spam. However, the laptop and users on computers
      dnl # that do not have 24x7 DNS do need this.
      dnl #
      FEATURE(`accept_unresolvable_domains')dnl
      dnl #
      dnl FEATURE(`relay_based_on_MX')dnl
      dnl #
      dnl # Also accept email sent to "localhost.localdomain" as local email.
      dnl #
      LOCAL_DOMAIN(`localhost.localdomain')dnl
      dnl #
      dnl # The following example makes mail from this host and any additional
      dnl # specified domains appear to be sent from mydomain.com
      dnl #
      dnl MASQUERADE_AS(`mydomain.com')dnl
      dnl #
      dnl # masquerade not just the headers, but the envelope as well
      dnl #
      dnl FEATURE(masquerade_envelope)dnl
      dnl #
      dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well
      dnl #
      dnl FEATURE(masquerade_entire_domain)dnl
      dnl #
      dnl MASQUERADE_DOMAIN(localhost)dnl
      dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl
      dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl
      dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
      MAILER(smtp)dnl
      MAILER(procmail)dnl
      dnl MAILER(cyrusv2)dnl
      FEATURE(`dnsbl',`zen.spamhaus.org',`Rejected - your IP is blacklisted by http://www.spamhaus.org')
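    A hedged note on the mechanism being asked about (generic sendmail behavior, not specific to this zone): sendmail delivers a domain locally when it is in class w, which is built from the host's canonical name ($j), the entries loaded by FEATURE(use_cw_file) from /etc/mail/local-host-names, and, unless DontProbeInterfaces takes effect, names resolved from the interface addresses. Address-test mode can show both:

      # list everything sendmail currently treats as a local host name
      echo '$=w' | sendmail -bt

      # trace which mailer an address resolves to; "mailer local" means local delivery
      echo '/parse someuser@gsmforum.ru' | sendmail -bt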

  • IRQ problem with 2.6.32/2.6.39 kernel on Debian Squeeze x86_64

    - by MasterM
    I recently assembled a new computer, so all the hardware is pretty new. Since then I've been experiencing a problem with IRQs when running Debian 6.0. On random occasions, usually after an hour or so of running, I hear a beep and this shows up in dmesg:

      [ 3537.762795] irq 16: nobody cared (try booting with the "irqpoll" option)
      [ 3537.762797] Pid: 0, comm: swapper Tainted: P W O 2.6.39-2-amd64 #1
      [ 3537.762798] Call Trace:
      [ 3537.762799] <IRQ> [<ffffffff810924d4>] ? __report_bad_irq+0x3a/0xa2
      [ 3537.762803] [<ffffffff810926a4>] ? note_interrupt+0x168/0x1da
      [ 3537.762805] [<ffffffff81090dd4>] ? handle_irq_event_percpu+0x171/0x18f
      [ 3537.762807] [<ffffffff8100e0e2>] ? read_tsc+0x5/0x16
      [ 3537.762809] [<ffffffff8106b8a2>] ? update_ts_time_stats+0x32/0x6b
      [ 3537.762810] [<ffffffff81090e26>] ? handle_irq_event+0x34/0x52
      [ 3537.762812] [<ffffffff81063fb7>] ? sched_clock_idle_wakeup_event+0x12/0x1c
      [ 3537.762813] [<ffffffff81092df2>] ? handle_fasteoi_irq+0x82/0xa4
      [ 3537.762815] [<ffffffff8100aadb>] ? handle_irq+0x1a/0x23
      [ 3537.762816] [<ffffffff8100a384>] ? do_IRQ+0x45/0xaa
      [ 3537.762818] [<ffffffff81332c93>] ? common_interrupt+0x13/0x13
      [ 3537.762818] <EOI> [<ffffffff81332c8e>] ? common_interrupt+0xe/0x13
      [ 3537.762821] [<ffffffff81026800>] ? native_safe_halt+0x2/0x3
      [ 3537.762829] [<ffffffffa016ed58>] ? acpi_idle_do_entry+0x39/0x62 [processor]
      [ 3537.762831] [<ffffffffa016edde>] ? acpi_idle_enter_c1+0x5d/0xad [processor]
      [ 3537.762834] [<ffffffff81261033>] ? cpuidle_idle_call+0x11f/0x1cc
      [ 3537.762835] [<ffffffff81008dd2>] ? cpu_idle+0xab/0xe1
      [ 3537.762837] [<ffffffff8169fc60>] ? start_kernel+0x3e0/0x3eb
      [ 3537.762838] [<ffffffff8169f3c8>] ? x86_64_start_kernel+0x102/0x10f
      [ 3537.762839] handlers:
      [ 3537.762840] [<ffffffffa0358d5a>] (rtl8169_interrupt+0x0/0x2d7 [r8169])
      [ 3537.762842] [<ffffffffa08ff2ca>] (nv_kern_isr+0x0/0x54 [nvidia])
      [ 3537.762902] Disabling IRQ #16

    After that, Xorg either hogs the CPU or is unstable (up to hanging the system completely). When I restart Xorg, everything is fine again and the problem doesn't occur until the next reboot. I tried upgrading the kernel from the stock 2.6.32 to 2.6.39 from the unstable repository, but that didn't help. Booting with the irqpoll option only seems to prolong the initial period before the problem occurs. I'm using the latest NVIDIA drivers and the Realtek firmware from the firmware-realtek package. I have two GTX 560 Ti cards that run in SLI. Disabling SLI or taking out one card completely doesn't solve the problem either.

    Output of uname -a:

      Linux whitestar 2.6.39-2-amd64 #1 SMP Wed Jun 8 11:01:04 UTC 2011 x86_64 GNU/Linux

    Output of lspci:

      00:00.0 Host bridge: Intel Corporation Sandy Bridge DRAM Controller (rev 09)
      00:01.0 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09)
      00:01.1 PCI bridge: Intel Corporation Sandy Bridge PCI Express Root Port (rev 09)
      00:16.0 Communication controller: Intel Corporation Cougar Point HECI Controller #1 (rev 04)
      00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 05)
      00:1a.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #2 (rev 05)
      00:1b.0 Audio device: Intel Corporation Cougar Point High Definition Audio Controller (rev 05)
      00:1c.0 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 1 (rev b5)
      00:1c.1 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 2 (rev b5)
      00:1c.2 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 3 (rev b5)
      00:1c.4 PCI bridge: Intel Corporation Cougar Point PCI Express Root Port 5 (rev b5)
      00:1c.6 PCI bridge: Intel Corporation 82801 PCI Bridge (rev b5)
      00:1d.0 USB Controller: Intel Corporation Cougar Point USB Enhanced Host Controller #1 (rev 05)
      00:1f.0 ISA bridge: Intel Corporation Cougar Point LPC Controller (rev 05)
      00:1f.2 SATA controller: Intel Corporation Cougar Point 6 port SATA AHCI Controller (rev 05)
      00:1f.3 SMBus: Intel Corporation Cougar Point SMBus Controller (rev 05)
      01:00.0 VGA compatible controller: nVidia Corporation Device 1200 (rev a1)
      01:00.1 Audio device: nVidia Corporation Device 0e0c (rev a1)
      02:00.0 VGA compatible controller: nVidia Corporation Device 1200 (rev a1)
      02:00.1 Audio device: nVidia Corporation Device 0e0c (rev a1)
      04:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)
      06:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)
      07:00.0 PCI bridge: Device 1b21:1080 (rev 01)
      08:02.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8110SC/8169SC Gigabit Ethernet (rev 10)
      08:03.0 FireWire (IEEE 1394): VIA Technologies, Inc. VT6306/7/8 [Fire II(M)] IEEE 1394 OHCI Controller (rev c0)

    Contents of /proc/interrupts:

                 CPU0   CPU1   CPU2   CPU3   CPU4   CPU5   CPU6   CPU7
        0:         77      0      0      0      0      0      0      0  IO-APIC-edge     timer
        1:          2      0      0      0      0      0      0      0  IO-APIC-edge     i8042
        8:          1      0      0      0      0      0      0      0  IO-APIC-edge     rtc0
        9:          0      0      0      0      0      0      0      0  IO-APIC-fasteoi  acpi
       12:          4      0      0      0      0      0      0      0  IO-APIC-edge     i8042
       16:     699083      0      0      0      0      0      0      0  IO-APIC-fasteoi  nvidia, eth0
       17:      87810      0      0      0      0      0      0      0  IO-APIC-fasteoi  firewire_ohci, hda_intel, nvidia
       18:        242      0      0      0      0      0      0      0  IO-APIC-fasteoi  hda_intel
       23:      85925      0      0      0      0      0      0      0  IO-APIC-fasteoi  ehci_hcd:usb5, ehci_hcd:usb6
       40:          0      0      0      0      0      0      0      0  PCI-MSI-edge     PCIe PME
       41:          0      0      0      0      0      0      0      0  PCI-MSI-edge     PCIe PME
       42:          0      0      0      0      0      0      0      0  PCI-MSI-edge     PCIe PME
       43:          0      0      0      0      0      0      0      0  PCI-MSI-edge     PCIe PME
       44:          0      0      0      0      0      0      0      0  PCI-MSI-edge     PCIe PME
       45:          0      0      0      0      0      0      0      0  PCI-MSI-edge     PCIe PME
       46:      79853      0      0      0      0      0      0      0  PCI-MSI-edge     ahci
       48:          1      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       49:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       50:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       51:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       52:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       53:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       54:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       55:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       56:          1      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       57:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       58:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       59:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       60:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       61:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       62:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       63:          0      0      0      0      0      0      0      0  PCI-MSI-edge     xhci_hcd
       64:     173506      0      0      0      0      0      0      0  PCI-MSI-edge     hda_intel
      NMI:        482     89     25     13    277     24     11     10  Non-maskable interrupts
      LOC:     783857 194752 114133  70577 372438 179065 117179 162016  Local timer interrupts
      SPU:          0      0      0      0      0      0      0      0  Spurious interrupts
      PMI:        482     89     25     13    277     24     11     10  Performance monitoring interrupts
      IWI:          0      0      0      0      0      0      0      0  IRQ work interrupts
      RES:     131917  46750   7432   3291 150003   9576   3435   3067  Rescheduling interrupts
      CAL:       2759   6563   7150   6997   5387   7140   7269   6678  Function call interrupts
      TLB:       4396   2038   1336    492   5434   1896   1121    606  TLB shootdowns
      TRM:          0      0      0      0      0      0      0      0  Thermal event interrupts
      THR:          0      0      0      0      0      0      0      0  Threshold APIC interrupts
      MCE:          0      0      0      0      0      0      0      0  Machine check exceptions
      MCP:         37     37     37     37     37     37     37     37  Machine check polls
      ERR:          0
      MIS:          0

    Last but not least, right after boot-up these lines are usually present in dmesg:

      [ 18.367094] hda-intel: IRQ timing workaround is activated for card #1. Suggest a bigger bdl_pos_adj.
      [ 18.458859] hda-intel: IRQ timing workaround is activated for card #2. Suggest a bigger bdl_pos_adj.

    I'm not sure if it's related or a symptom of a bigger problem, so I'm posting it just in case. I don't really know what other information might be of relevance here. Don't hesitate to ask for more in the comments.

  • Make errors when compiling HPL-2.1 on MOSIX-clustered Debian server

    - by tlake
    I'm trying to compile HPL 2.1 on a MOSIX-clustered Debian server, but the make process terminates with errors as seen below. Included are my makefile and two versions of output: one from a standard execution, and one from an execution run with the debug flag. Any help and guidance would be very much appreciated! The makefile:

      # ----------------------------------------------------------------------
      # - shell --------------------------------------------------------------
      # ----------------------------------------------------------------------
      #
      SHELL        = /bin/bash
      #
      CD           = cd
      CP           = cp
      LN_S         = ln -s
      MKDIR        = mkdir
      RM           = /bin/rm -f
      TOUCH        = touch
      #
      # ----------------------------------------------------------------------
      # - Platform identifier ------------------------------------------------
      # ----------------------------------------------------------------------
      #
      ARCH         = Linux_PII_CBLAS
      #
      # ----------------------------------------------------------------------
      # - HPL Directory Structure / HPL library ------------------------------
      # ----------------------------------------------------------------------
      #
      TOPdir       = $(HOME)/hpl-2.1
      INCdir       = $(TOPdir)/include
      BINdir       = $(TOPdir)/bin/$(ARCH)
      LIBdir       = $(TOPdir)/lib/$(ARCH)
      #
      HPLlib       = $(LIBdir)/libhpl.a
      #
      # ----------------------------------------------------------------------
      # - Message Passing library (MPI) --------------------------------------
      # ----------------------------------------------------------------------
      # MPinc tells the C compiler where to find the Message Passing library
      # header files, MPlib is defined to be the name of the library to be
      # used. The variable MPdir is only used for defining MPinc and MPlib.
      #
      MPdir        = /usr/local
      MPinc        = -I$(MPdir)/include
      MPlib        = $(MPdir)/lib/libmpi.so
      #
      # ----------------------------------------------------------------------
      # - Linear Algebra library (BLAS or VSIPL) -----------------------------
      # ----------------------------------------------------------------------
      # LAinc tells the C compiler where to find the Linear Algebra library
      # header files, LAlib is defined to be the name of the library to be
      # used. The variable LAdir is only used for defining LAinc and LAlib.
      #
      LAdir        = $(HOME)/CBLAS/lib
      LAinc        =
      LAlib        = $(LAdir)/cblas_LINUX.a
      #
      # ----------------------------------------------------------------------
      # - F77 / C interface ----------------------------------------------------
      # ----------------------------------------------------------------------
      # You can skip this section if and only if you are not planning to use
      # a BLAS library featuring a Fortran 77 interface. Otherwise, it is
      # necessary to fill out the F2CDEFS variable with the appropriate
      # options. **One and only one** option should be chosen in **each** of
      # the 3 following categories:
      #
      # 1) name space (How C calls a Fortran 77 routine)
      #
      # -DAdd_       : all lower case and a suffixed underscore (Suns,
      #                Intel, ...), [default]
      # -DNoChange   : all lower case (IBM RS6000),
      # -DUpCase     : all upper case (Cray),
      # -DAdd__      : the FORTRAN compiler in use is f2c.
      #
      # 2) C and Fortran 77 integer mapping
      #
      # -DF77_INTEGER=int   : Fortran 77 INTEGER is a C int, [default]
      # -DF77_INTEGER=long  : Fortran 77 INTEGER is a C long,
      # -DF77_INTEGER=short : Fortran 77 INTEGER is a C short.
      #
      # 3) Fortran 77 string handling
      #
      # -DStringSunStyle  : The string address is passed at the string loca-
      #                     tion on the stack, and the string length is then
      #                     passed as an F77_INTEGER after all explicit
      #                     stack arguments, [default]
      # -DStringStructPtr : The address of a structure is passed by a
      #                     Fortran 77 string, and the structure is of the
      #                     form: struct {char *cp; F77_INTEGER len;},
      # -DStringStructVal : A structure is passed by value for each Fortran
      #                     77 string, and the structure is of the form:
      #                     struct {char *cp; F77_INTEGER len;},
      # -DStringCrayStyle : Special option for Cray machines, which uses
      #                     Cray fcd (fortran character descriptor) for
      #                     interoperation.
      #
      F2CDEFS      =
      #
      # ----------------------------------------------------------------------
      # - HPL includes / libraries / specifics --------------------------------
      # ----------------------------------------------------------------------
      #
      HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) $(LAinc) $(MPinc)
      HPL_LIBS     = $(HPLlib) $(LAlib) $(MPlib)
      #
      # - Compile time options ------------------------------------------------
      #
      # -DHPL_COPY_L           force the copy of the panel L before bcast;
      # -DHPL_CALL_CBLAS       call the cblas interface;
      # -DHPL_CALL_VSIPL       call the vsip library;
      # -DHPL_DETAILED_TIMING  enable detailed timers;
      #
      # By default HPL will:
      #    *) not copy L before broadcast,
      #    *) call the BLAS Fortran 77 interface,
      #    *) not display detailed timing information.
      #
      HPL_OPTS     = -DHPL_CALL_CBLAS
      #
      # ----------------------------------------------------------------------
      #
      HPL_DEFS     = $(F2CDEFS) $(HPL_OPTS) $(HPL_INCLUDES)
      #
      # ----------------------------------------------------------------------
      # - Compilers / linkers - Optimization flags ----------------------------
      # ----------------------------------------------------------------------
      #
      CC           = /usr/bin/gcc
      CCNOOPT      = $(HPL_DEFS)
      CCFLAGS      = $(HPL_DEFS) -fomit-frame-pointer -O3 -funroll-loops
      #
      # On some platforms, it is necessary to use the Fortran linker to find
      # the Fortran internals used in the BLAS library.
      #
      LINKER       = ~/BLAS
      LINKFLAGS    = $(CCFLAGS)
      #
      ARCHIVER     = ar
      ARFLAGS      = r
      RANLIB       = echo
      #
      # ----------------------------------------------------------------------

    Make output:

      ~/BLAS -DHPL_CALL_CBLAS -I/homes/laket/hpl-2.1/include -I/homes/laket/hpl-2.1/include/Linux_PII_CBLAS -I/usr/local/include -fomit-frame-pointer -O3 -funroll-loops -o /homes/laket/hpl-2.1/bin/Linux_PII_CBLAS/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a /homes/laket/CBLAS/lib/cblas_LINUX.a /usr/local/lib/libmpi.so
      /bin/bash: /homes/laket/BLAS: Is a directory
      make[2]: *** [dexe.grd] Error 126
      make[2]: Target `all' not remade because of errors.
      make[2]: Leaving directory `/homes/laket/hpl-2.1/testing/ptest/Linux_PII_CBLAS'
      make[1]: *** [build_tst] Error 2
      make[1]: Leaving directory `/homes/laket/hpl-2.1'
      make: *** [build] Error 2
      make: Target `all' not remade because of errors.

    Make -d output:

      Considering target file `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'.
      Looking for an implicit rule for `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'.
      Trying pattern rule with stem `libhpl.a'.
      Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a,v'.
      Trying pattern rule with stem `libhpl.a'.
      Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/RCS/libhpl.a,v'.
      Trying pattern rule with stem `libhpl.a'.
      Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/RCS/libhpl.a'.
      Trying pattern rule with stem `libhpl.a'.
      Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/s.libhpl.a'.
      Trying pattern rule with stem `libhpl.a'.
      Trying implicit prerequisite `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/SCCS/s.libhpl.a'.
      No implicit rule found for `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'.
      Finished prerequisites of target file `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'.
      No need to remake target `/homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a'.
      Finished prerequisites of target file `dexe.grd'.
      Must remake target `dexe.grd'.
      ~/BLAS -DHPL_CALL_CBLAS -I/homes/laket/hpl-2.1/include -I/homes/laket/hpl-2.1/include/Linux_PII_CBLAS -I/usr/local/include -fomit-frame-pointer -O3 -funroll-loops -o /homes/laket/hpl-2.1/bin/Linux_PII_CBLAS/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /homes/laket/hpl-2.1/lib/Linux_PII_CBLAS/libhpl.a /homes/laket/CBLAS/lib/cblas_LINUX.a /usr/local/lib/libmpi.so
      Putting child 0x0129a2c0 (dexe.grd) PID 24853 on the chain.
      Live child 0x0129a2c0 (dexe.grd) PID 24853
      /bin/bash: /homes/laket/BLAS: Is a directory
      make[2]: Reaping losing child 0x0129a2c0 PID 24853
      *** [dexe.grd] Error 126
      Removing child 0x0129a2c0 PID 24853 from chain.
      Failed to remake target file `dexe.grd'.
      Finished prerequisites of target file `dexe'.
      Giving up on target file `dexe'.
      Finished prerequisites of target file `all'.
      Giving up on target file `all'.
      make[2]: Target `all' not remade because of errors.
      make[2]: Leaving directory `/homes/laket/hpl-2.1/testing/ptest/Linux_PII_CBLAS'
      Reaping losing child 0x010ce900 PID 24841
      make[1]: *** [build_tst] Error 2
      Removing child 0x010ce900 PID 24841 from chain.
      Failed to remake target file `build_tst'.
      make[1]: Leaving directory `/homes/laket/hpl-2.1'
      Reaping losing child 0x00d91ae0 PID 24774
      make: *** [build] Error 2
      Removing child 0x00d91ae0 PID 24774 from chain.
      Failed to remake target file `build'.
      Finished prerequisites of target file `install'.
      make: Target `all' not remade because of errors.
      Giving up on target file `install'.
      Finished prerequisites of target file `all'.
      Giving up on target file `all'.

    Thanks!
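    A hedged reading of the failure (the symptom is in the output above; the fix is my suggestion, not from the thread): the link line begins with ~/BLAS because the makefile sets LINKER = ~/BLAS, which is a directory, so bash reports "Is a directory" and exits with status 126 when make tries to run it as the link command. LINKER must name a compiler driver instead, for example:

      # LINKER must be an executable compiler/linker, not the BLAS directory.
      # mpicc would be typical for an MPI build; plain gcc also works here
      # since MPlib ($(MPdir)/lib/libmpi.so) is passed to the link explicitly.
      LINKER       = /usr/bin/gcc
      LINKFLAGS    = $(CCFLAGS)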

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I've been working on my Web load testing utility, West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool, which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence's awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request - URL, headers and any content posted by the client. The result of what I ended up creating is a semi-generic capture form. In this post I'm going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik's FiddlerCore and the code for the demo provided here:

      FiddlerCore Download
      FiddlerCore on NuGet
      Show me the Code (WebSurge integration code from GitHub)
      Download the WinForms Sample Form
      West Wind WebSurge (example implementation in live app)

    Note that FiddlerCore is bound by a license for commercial usage - see license.txt in the FiddlerCore distribution for details.

    Integrating FiddlerCore

    FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

      PM> Install-Package FiddlerCore

    The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I'll have more on SSL captures and certificate installation later in this post. But first let's see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form.

    Capturing HTTP Content

    Once the library is installed it's super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as to actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on a form-level object from the user inputs shown on the form, by assigning them to a capture-options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

void Start()
{
    if (tbIgnoreResources.Checked)
        CaptureConfiguration.IgnoreResources = true;
    else
        CaptureConfiguration.IgnoreResources = false;

    string strProcId = txtProcessId.Text;
    if (strProcId.Contains('-'))
        strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
    strProcId = strProcId.Trim();

    int procId = 0;
    if (!string.IsNullOrEmpty(strProcId))
    {
        if (!int.TryParse(strProcId, out procId))
            procId = 0;
    }
    CaptureConfiguration.ProcessId = procId;
    CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

    FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
    FiddlerApplication.Startup(8888, true, true, true);
}

The key lines for FiddlerCore are just the last two lines of code: the event hookup and the Startup() method call. Here I only hook up to the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data, and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look at both the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:

private void FiddlerApplication_AfterSessionComplete(Session sess)
{
    // Ignore HTTPS connect requests
    if (sess.RequestMethod == "CONNECT")
        return;

    if (CaptureConfiguration.ProcessId > 0)
    {
        if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
            return;
    }

    if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
    {
        if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
            return;
    }

    if (CaptureConfiguration.IgnoreResources)
    {
        string url = sess.fullUrl.ToLower();

        var extensions = CaptureConfiguration.ExtensionFilterExclusions;
        foreach (var ext in extensions)
        {
            if (url.Contains(ext))
                return;
        }

        var filters = CaptureConfiguration.UrlFilterExclusions;
        foreach (var urlFilter in filters)
        {
            if (url.Contains(urlFilter))
                return;
        }
    }

    if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
        return;

    string headers = sess.oRequest.headers.ToString();
    var reqBody = sess.GetRequestBodyAsString();

    // if you wanted to capture the response
    //string respHeaders = sess.oResponse.headers.ToString();
    //var respBody = sess.GetResponseBodyAsString();

    // replace the HTTP line to inject full URL
    string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
    int at = headers.IndexOf("\r\n");
    if (at < 0)
        return;
    headers = firstLine + "\r\n" + headers.Substring(at + 1);

    string output = headers + "\r\n" +
                    (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                    Separator + "\r\n\r\n";

    BeginInvoke(new Action<string>((text) =>
    {
        txtCapture.AppendText(text);
        UpdateButtonStatus();
    }), output);
}

The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback, based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it’s a common scenario and WebSurge makes good use of this feature. AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I’m interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a Session HTTP capture in the format that WebSurge internally supports, which is basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application-specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test. Stopping the FiddlerCore Proxy Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

void Stop()
{
    FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

    if (FiddlerApplication.IsStarted())
        FiddlerApplication.Shutdown();
}

As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I’m not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore’s simple API interface. Sky’s the limit! The source code for this sample capture form (WinForms) is provided as part of this article. 
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately, FiddlerCore includes the tools to register the Fiddler Certificate directly. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge – is running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically, but it could be any port). For SSL, however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: it installs the Fiddler Root Certificate, it sets trust to this Root Certificate, and a new client certificate is generated for each HTTPS site monitored. Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore, which includes a CertMaker class. 
CertMaker is straightforward to use and provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:

public static bool InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            return false;

        if (!CertMaker.trustRootCert())
            return false;
    }
    return true;
}

public static bool UninstallCertificate()
{
    if (CertMaker.rootCertExists())
    {
        if (!CertMaker.removeFiddlerGeneratedCerts(true))
            return false;
    }
    return true;
}

InstallCertificate() works by first checking whether the root certificate is already installed and, if it isn’t, goes ahead and creates a new one. Creating the certificate is a two-step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up, since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further, the trust operation pops up a message box, so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does – so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UninstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
Once the cert is installed you can then capture SSL requests. There’s a gotcha, however… Gotcha: FiddlerCore Certificates don’t stick by Default When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate, and immediately after installation was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck? CertMaker and BcMakeCert create non-sticky Certificates It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore, however, installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to ‘stick’ you have to explicitly cache the certificates in Fiddler’s internal preferences. A cache-aware version of InstallCertificate looks something like this:

public static bool InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            return false;

        if (!CertMaker.trustRootCert())
            return false;

        App.Configuration.UrlCapture.Cert =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
        App.Configuration.UrlCapture.Key =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
    }
    return true;
}

public static bool UninstallCertificate()
{
    if (CertMaker.rootCertExists())
    {
        if (!CertMaker.removeFiddlerGeneratedCerts(true))
            return false;
    }
    App.Configuration.UrlCapture.Cert = null;
    App.Configuration.UrlCapture.Key = null;
    return true;
}

In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler’s internal preferences store, which is set after a new certificate has been created. Likewise, I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used, you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made. 
I do this in the capture form’s constructor:

public FiddlerCapture(StressTestForm form)
{
    InitializeComponent();

    CaptureConfiguration = App.Configuration.UrlCapture;
    MainForm = form;

    if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
    {
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key);
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert);
    }
}

This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore. MakeCert provides sticky Certificates and the same functionality as Fiddler But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It’s easier to integrate, and as long as you run on Windows and don’t need to support iOS or Android devices, it’s simply easier to deal with. To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project. Instead, copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference has been removed from the project, and the CertMaker.dll and BCMakeCert.dll files have been removed from disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows). Summary FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using Intellisense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look, and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol. 
The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it’s well worth the read. While the book has tons of information in a very readable format, it’s unfortunately not a great reference, as it’s hard to find things in the book, and because it’s not available online you can’t electronically search for the great content in it. But it’s hard to complain about any of this given the obvious effort and love that’s gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time and keep it free for all. Kudos! Resources FiddlerCore Download FiddlerCore NuGet Fiddler Capture Sample Form Fiddler Capture Form in West Wind WebSurge (GitHub) Eric Lawrence’s Fiddler Book © Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET, HTTP

    Read the article

  • Xubuntu 13.10 64bit - Slow and buggy "log out" process?

    - by MrKatSwordfish
I'm a Windows convert who has done only a little bit of dabbling in Ubuntu in the past (back in the Dapper Drake days). A lot has changed since then, and I've been yearning to jump back into Linux again! So, having just bought a new SSD, I felt that this would be as good a time as any to set up a dual-boot system again. I've messed around with Ubuntu 13.10 a bit, and while Unity has its issues, I think that it still needs some time to develop. I looked into XFCE and liked it a lot, so I went with Xubuntu. I've installed Xubuntu, and for the most part it's running smoothly and is a pleasure to work with. The customization is great and the minimalistic look and feel is really nice! But here's my problem: whenever I select the "Log Out" option from either the application menu or the user profiles menu, my PC comes to a crawl, and the dialog box with all the options (shut down, restart, log out, etc.) takes maybe a minute or more to appear. I click the log out button, my PC is brought to a snail's pace, and I have to wait for what seems like an eternity for the logout options to appear! If I try to open something else (even a terminal window) while it's loading the logout options, that other program won't finish loading until the logout screen finally appears. Keep in mind, this is a pretty much vanilla install of Xubuntu 13.10 64bit, on a PC with an Intel i7, an SSD, 6gb DDR3 RAM, and a new AMD 7770 GPU (drivers haven't been installed yet, though). Everything else runs fast; most applications open near-instantly! It must be an issue with how the logout options screen initializes or something, but I'm not sure exactly how I can fix it. Edit - Extra Info: This problem is very consistent when using the "Log Out" buttons in Xubuntu. However, I've found that I'm able to reboot and shut down much more quickly by going through the "Switch User" screen and using the reboot or shutdown buttons on that screen. I'm nearly certain that it has something to do with the little Log Out options screen that appears when I select Log Out from the menu, and not the actual process of shutting down. So what should I do? I really like XFCE so far, and I've never tried a non-Ubuntu-based distro before, but should I just switch to something else? Is there any known fix for this issue? Are there any workarounds for logging out/shutting down/rebooting via the terminal so that I can avoid this irritating bug? Is there any way that I can monitor the progress of the log out via the terminal, allowing me to see which parts are causing the slow-down? What is the best way to report this bug to someone?
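For the terminal workarounds the question asks about, a sketch of the usual commands — assuming XFCE's xfce4-session-logout helper is present (it ships with Xubuntu); the action flags bypass the logout dialog entirely:

xfce4-session-logout --logout    # log out without showing the logout dialog
xfce4-session-logout --reboot    # reboot via the session manager
xfce4-session-logout --halt      # shut down via the session manager

# Bypassing the session manager entirely (standard Ubuntu commands):
sudo shutdown -r now             # reboot immediately
sudo shutdown -h now             # power off immediately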

    Read the article

  • Sending mail with Gmail Account using System.Net.Mail in ASP.NET

    - by Jalpesh P. Vadgama
No web application is complete without mail functionality, so you will have to write send-mail functionality at some point. For example, in a shopping cart application, when an order is created you need to send an email to the administrator of the website as an order notification, and an order receipt to the customer. This post is all about sending email. In this post I will explain how we can send emails from a Gmail account without purchasing any SMTP server. There are some limitations when sending email from a Gmail account. Please note the following: Gmail has a fixed quota for sending emails per day, so you cannot send more than that number of emails in a day. Your From address will always be the account email address you are using to send the email. You cannot send email to an unlimited number of people; Gmail's anti-spamming policy restricts this. Gmail provides both POP and SMTP settings; both should be active in the account you are testing with. You can enable them by clicking the Settings link in your Gmail account and going to Forwarding and POP/IMAP. So if you are sending a limited number of emails, Gmail is the best option; but if you are sending thousands of emails daily, it is not a good idea. Here is the code for sending mail from a Gmail account:

using System.Net.Mail;

namespace Experiement
{
    public partial class WebForm1 : System.Web.UI.Page
    {
        protected void Page_Load(object sender, System.EventArgs e)
        {
            MailMessage mailMessage = new MailMessage(new MailAddress("[email protected]"),
                                                      new MailAddress("[email protected]"));
            mailMessage.Subject = "Sending mail through gmail account";
            mailMessage.IsBodyHtml = true;
            mailMessage.Body = "<B>Sending mail through gmail from asp.net</B>";

            System.Net.NetworkCredential networkCredentials =
                new System.Net.NetworkCredential("[email protected]", "yourpassword");

            SmtpClient smtpClient = new SmtpClient();
            smtpClient.EnableSsl = true;
            smtpClient.UseDefaultCredentials = false;
            smtpClient.Credentials = networkCredentials;
            smtpClient.Host = "smtp.gmail.com";
            smtpClient.Port = 587;
            smtpClient.Send(mailMessage);

            Response.Write("Mail Successfully sent");
        }
    }
}

Now run this application and you will receive an email like the one below in your account. Technorati Tags: Gmail,System.NET.Mail,ASP.NET
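Since Gmail's daily quota and anti-spam checks can make Send() fail at runtime, it is worth wrapping the call in error handling. A minimal sketch — the try/catch is my own addition, not part of the original sample:

try
{
    smtpClient.Send(mailMessage);
    Response.Write("Mail successfully sent");
}
catch (SmtpException ex)
{
    // Raised when Gmail rejects the message, e.g. quota exceeded or bad credentials
    Response.Write("Mail could not be sent: " + ex.StatusCode);
}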

    Read the article

  • SQL Developer Database Diff – Compare Objects From Multiple Schemas

    - by thatjeffsmith
Ever wonder why Database Diff isn’t called Schema Diff? One reason is because SQL Developer allows you to select objects from more than one schema in the ‘Source’ connection for the compare. Simply use the ‘More’ dialog view and select as many tables from as many different schemas as you require. Now, before you get around to testing this – as you should never believe what I say, trust but verify – two things you need to know: I’m using SQL Developer version 3.2, and on the initial screen you need to use the ‘Maintain’ option. Maintain tells SQL Developer to use the schema designation in the source connection to find the same corresponding object in the destination schema. Choose ‘maintain’ if you want to compare objects in the same schema in the destination but don’t have the user login for that schema. So after you’ve selected your databases, your diff preferences, and your objects – you’re ready to perform the compare and review your results. The DIFF Report Notice the highlighted text: SQL Developer is ‘maintaining’ the schema context from the two databases. Short and sweet. That’s pretty much all there is to doing a compare with SQL Developer with multiple schemas involved. You may have noticed in some posts lately that my editor screenshots had a ‘green screen’ look and feel to them. What’s with the black background in your editors? In the SQL Developer preferences, you can set your editor color schemes. I started with the ‘Twilight’ scheme (team Jacob in case you’re wondering) and then customized it further by going with a default green font color. You could go pretty crazy in here, and I’m assuming 90% of you couldn’t care less and will just stick with the original. But for those of you who are particular about your IDE styling – go crazy! SQL Developer Editor Display Preferences

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth on Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations. Acknowledgements Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010 Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance including the Branching Guidance for TFS 2010 Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance. Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team. Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will be double the price to recover costs. The work is approved by the guys with the budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches. As I keep getting Reverse Integration and Forward Integration mixed up, and Bill keeps slapping my wrists, I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.   As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don’t you? Sadly, this is a very common scenario, and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost-benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc. 
- Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others, it gets out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South-Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs: You know you are on the wrong track if you experience one or more of the following symptoms in your development environment: Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences. Merge Mania—spending too much time merging software assets instead of developing them. Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously. Never-Ending Merge—continuous merging activity because there is always more to merge. Wrong-Way Merge—merging a software asset version with an earlier version. Branch Mania—creating many branches for no apparent reason. Cascading Branches—branching but never merging back to the main line. Mysterious Branches—branching for no apparent reason. Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace. Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state. Development Freeze—stopping all development activities while branching, merging, and building new base lines. Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing. - Branching and Merging Primer by Chris Birmele - Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line – this is where your stable code lives, where any build has known entities, always passes, and has a happy test that passes as well. Many development projects consist of a single “Main” line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if: you and your team are working on a new set of features and the customer wants a change to his current version? you are working on two features and the customer decides to abandon one of them? you have two teams working on different feature sets and their changes start interfering with each other? you just use labels instead of branches? That’s a lot of “what ifs”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch.   It’s standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The Main branch has its permissions changed so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.   Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers’ work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development: The Team completes their work according to their definition of done. Merge from “Main” into “Sprint1” (Forward Integration). Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches. (We will talk about release branches later.) Merge from “Sprint1” into “Main” to commit your changes. 
(Reverse Integration) Check-in Delete the Sprint1 branch Note: The Sprint 1 branch is no longer required as its useful life has been concluded. Check-in Done But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet; we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing. In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brainwashing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result you fix it on the Release branch. If during your final testing or your “Test Please” you find there are issues or bugs, then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features into a Release branch; instead, that indicates the unspoken requirement that they are really asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product. And you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product you can create an RTM branch. Once you have fixed all the bugs you can, and added any you can’t to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an audit-trail branch that is optional, but it is good practice. You could use a Label, but Labels are not auditable; with a branch, if a dispute is raised by the customer, you can produce a verifiable version of the source code for an independent party to check. 
Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.   Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main, and you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.     Figure: After the Sprint Planning meeting the process begins again. Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new Branch to develop in. How do we handle a bug(s) in production that can’t wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging Sprint 2 back into Main.” covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resources need to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost): Backlog: Add the bug to the backlog and fix it in the next Sprint. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly. Make time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost. The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as the bugs are uncomplicated but severe. Figure: Real world issue where a bug needs to be fixed in the current release. 
If the bug(s) is urgent enough, then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug you need to ship again. You then need to again create an RTM branch to hold the version of the code you released in escrow.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch – are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue to get further out of sync with Main. Note: Avoid the “Big bang merge” at all costs.   Figure: Merging Sprint 2 back into Main, the Forward Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture, and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real-world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don’t overuse it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010
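For teams driving TFS from the command line, the Forward/Reverse Integration steps described above map roughly onto tf.exe as follows. This is a sketch run from a mapped workspace; the $/Product server paths are made up for illustration, so substitute your own:

REM Forward Integration: pull Main into the sprint branch first
tf merge $/Product/Main $/Product/Sprint1 /recursive
REM ... resolve any conflicts, build and test, then check in ...
tf checkin /comment:"FI: Main -> Sprint1"

REM Reverse Integration: push the stabilized sprint work back to Main
tf merge $/Product/Sprint1 $/Product/Main /recursive
tf checkin /comment:"RI: Sprint1 -> Main"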

    Read the article

  • Slides of my HOL on MySQL Cluster

    - by user13819847
Hi! Thanks to everyone who attended my hands-on lab on MySQL Cluster at MySQL Connect last Saturday. The following are the links for the slides, the HOL instructions, and the code examples. I'll try to summarize my HOL below. The aim of the HOL was to help attendees familiarize themselves with MySQL Cluster. In particular, by learning: the basics of MySQL Cluster Architecture the basics of MySQL Cluster Configuration and Administration how to start a new Cluster for evaluation purposes and how to connect to it We started by introducing MySQL Cluster. MySQL Cluster is a proven technology that today is successfully servicing the most performance-intensive workloads. MySQL Cluster is deployed across telecom networks and is powering mission-critical web applications. Without trading off use of commodity hardware, transactional consistency and use of complex queries, MySQL Cluster provides: Web Scalability (web-scale performance on both reads and writes) Carrier Grade Availability (99.999%) Developer Agility (freedom to use SQL or NoSQL access methods) MySQL Cluster implements an auto-sharding, multi-master, shared-nothing architecture, where independent nodes can scale horizontally on commodity hardware with no shared disks, no shared memory, and no single point of failure. In the architecture of MySQL Cluster it is possible to find three types of nodes: management nodes: responsible for reading the configuration files, maintaining logs, and providing an interface to the administration of the entire cluster data nodes: where data and indexes are stored api nodes: provide the external connectivity (e.g. the NDB engine of the MySQL Server, APIs, Connectors) MySQL Cluster is recommended in situations where: it is crucial to reduce service downtime, because this produces a heavy impact on business sharding the database to scale write performance highly impacts development of the application (in MySQL Cluster the sharding is automatic and transparent to the application) there are real-time needs there are unpredictable scalability demands it is important to have data-access flexibility (SQL & NoSQL) MySQL Cluster is available in two Editions: Community Edition (Open Source, freely downloadable from mysql.com) Carrier Grade Edition (Commercial Edition, can be downloaded from eDelivery for evaluation purposes) MySQL Carrier Grade Edition adds, on top of the Community Edition: Commercial Extensions (MySQL Cluster Manager, MySQL Enterprise Monitor, MySQL Cluster Installer) Oracle's Premium Support Services (largest team of MySQL experts backed by MySQL developers, forward-compatible hot fixes, multi-language support, and more) We concluded by talking about the MySQL Cluster vision: MySQL Cluster is the default database for anyone deploying rapidly evolving, realtime transactional services at web-scale, where downtime is simply not an option. From a practical point of view the HOL's steps were: MySQL Cluster installation start & monitoring of the MySQL Cluster processes client connection to the Management Server and to an SQL Node connection using the NoSQL NDB API and Connector/J In the hope that this blog post can help you get started with MySQL Cluster, I take the opportunity to thank you for the questions you asked both during the HOL and at the MySQL Cluster booth. Slides are also on SlideShare: Santo Leto - MySQL Connect 2012 - Getting Started with Mysql Cluster Happy Clustering!
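As a rough illustration of the "start & monitoring" step, a minimal evaluation cluster on a single host can be brought up along these lines. The config file location and the two-data-node layout are assumptions for this sketch, not the exact HOL instructions:

# A config.ini defining one management node, two data nodes and API slots is assumed to exist
ndb_mgmd -f /var/lib/mysql-cluster/config.ini            # start the management node
ndbd --ndb-connectstring=localhost:1186                  # start the first data node
ndbd --ndb-connectstring=localhost:1186                  # start the second data node
mysqld --ndbcluster --ndb-connectstring=localhost:1186 & # start an SQL (API) node

# Check that all nodes are connected
ndb_mgm -e show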

    Read the article

  • How to Specify AssemblyKeyFile Attribute in .NET Assembly and Issues

How to specify a strong key file in an assembly? Answer: You can specify the .snk file information using the following line: [assembly: AssemblyKeyFile(@"c:\Key2.snk")] Where to specify a strong key file (.snk file)? Answer: You have two options for specifying the AssemblyKeyFile information: 1. In a class 2. In AssemblyInfo.cs 1. In a class, you must specify the above line before defining the namespace of the class and after all the imports or usings. Example: see line 7 in the sample class below.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Reflection;

[assembly: AssemblyKeyFile(@"c:\Key1.snk")]

namespace Csharp3Part1
{
    class Person
    {
        public string GetName()
        {
            return "Smith";
        }
    }
}

2. In AssemblyInfo.cs: you can also specify the assembly information in AssemblyInfo.cs. Example: see line 16 in the sample AssemblyInfo.cs below.

using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
[assembly: AssemblyTitle("Csharp3Part1")]
[assembly: AssemblyDescription("")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("Deloitte")]
[assembly: AssemblyProduct("Csharp3Part1")]
[assembly: AssemblyCopyright("Copyright © Deloitte 2009")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]
[assembly: AssemblyKeyFile(@"c:\Key1.snk")]

// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]

// The following GUID is for the ID of the typelib if this project is exposed to COM
[assembly: Guid("4350396f-1a5c-4598-a79f-2e1f219654f3")]

// Version information for an assembly consists of the following four values:
//
// Major Version
// Minor Version
// Build Number
// Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

Issues: You should not specify this in the following ways: 1. In multiple classes. 2. In both a class and AssemblyInfo.cs. If you do either of the above, Visual Studio or the C#/VB.NET compiler shows the following error: Duplicate 'AssemblyKeyFile' attribute and this warning: Use command line option '/keyfile' or appropriate project settings instead of 'AssemblyKeyFile' To avoid this, please specify your key file information only once, either in one class or in AssemblyInfo.cs. It is suggested to specify it in the AssemblyInfo.cs file. You might also encounter errors like: Error: type or namespace name 'AssemblyKeyFileAttribute' and 'AssemblyKeyFile' could not be found. Solution: please find it here.
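As the compiler warning above suggests, the attribute-based approach can be replaced by the /keyfile compiler option (or the project's Signing settings). A command-line sketch — the file and class names follow the example above:

REM Generate a new strong-name key pair (one time)
sn -k c:\Key1.snk

REM Sign the assembly at compile time instead of using AssemblyKeyFile
csc /target:library /keyfile:c:\Key1.snk Person.cs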

    Read the article

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
Many things are new in Solaris 11; Autoinstall is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new. I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning. First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.   What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple:

# zfs create -o mountpoint=/export/repo rpool/ai/repo
# zfs create rpool/ai/repo/sol11
# mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt
# rsync -aP /mnt/repo /export/repo/sol11
# umount /mnt
# pkgrepo rebuild -s /export/repo/sol11/repo
# zfs snapshot rpool/ai/repo/sol11@fcs
# pkgrepo info -s /export/repo/sol11/repo
PUBLISHER PACKAGES STATUS UPDATED
solaris 4292 online 2012-03-12T20:47:15.378639Z

That's all there is to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher:

# pkg set-publisher -g file:///export/repo/sol11/repo solaris

In case I'd want to access this repository through a (virtual) network, I'll now quickly activate the repository service:

# svccfg -s application/pkg/server \
   setprop pkg/inst_root=/export/repo/sol11/repo
# svccfg -s application/pkg/server setprop pkg/readonly=true
# svcadm refresh application/pkg/server
# svcadm enable application/pkg/server

That's all you need - now point your browser to http://localhost/ to view your beautiful repository server. Step 1 is done.  All of this, by the way, is nicely documented in the README file that's contained in the repository image. Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also get both: a full repository with all updates and at the same time incremental ones up to each of the updates:

# mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt
# pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*'
# umount /mnt
# pkgrepo rebuild -s /export/repo/sol11/repo
# zfs snapshot rpool/ai/repo/sol11@sru4
# zfs set snapdir=visible rpool/ai/repo/sol11
# svcadm restart svc:/application/pkg/server:default

The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs . If you like, you can also create another repository service for each update, running on a separate port. But now let's continue with the AI server.  Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this.  
Since I already have one active (for my SunRay installation) and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first:

# zfs create -o mountpoint=/export/install rpool/ai/install
# zfs create -o mountpoint=/zones rpool/zones
# zonecfg -z ai-server
zonecfg:ai-server> create
create: Using system default template 'SYSdefault'
zonecfg:ai-server> set zonepath=/zones/ai-server
zonecfg:ai-server> add dataset
zonecfg:ai-server:dataset> set name=rpool/ai/install
zonecfg:ai-server:dataset> set alias=install
zonecfg:ai-server:dataset> end
zonecfg:ai-server> commit
zonecfg:ai-server> exit
# zoneadm -z ai-server install
# zoneadm -z ai-server boot ; zlogin -C ai-server

Give it a hostname and IP address at first boot, and there's the Zone.  As a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now:

# zlogin ai-server
root@ai-server:~# pkg install install/installadm
root@ai-server:~# installadm create-service -n x86-fcs -a i386 \
  -s pkg://solaris/install-image/solaris-auto-install@5.11,5.11-0.175.0.0.0.2.1482 \
  -d /export/install/fcs -i 192.168.2.20 -c 3

With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default".  The ISC server comes with several SMF services.  We at least need "svc:/network/dhcp/server:ipv4".  The command "installadm create-service" creates the installation service.  It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the install server itself runs on an x86 machine.)  The boot environment for clients is created in /export/install/fcs and the DHCP server is configured for 3 IP addresses starting at 192.168.2.20.  This configuration is stored in a very human-readable form in /etc/inet/dhcpd4.conf.  An AI service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option.

Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server.  The true source of packages to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients.

The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.  
Hence, profiles are very similar to the old sysid.cfg file. The easiest way to get your hands on a manifest is to ask the AI server we just created to give us its default one.  Then modify that to our liking and give it back to the install server to use:

root@ai-server:~# mkdir -p /export/install/configs/manifests
root@ai-server:~# cd /export/install/configs/manifests
root@ai-server:~# installadm export -n x86-fcs -m orig_default \
  -o orig_default.xml
root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml
root@ai-server:~# vi s11-fcs.small.local.xml
root@ai-server:~# more s11-fcs.small.local.xml
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="S11 Small fcs local">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <destination>
        <image>
          <!-- Specify locales to install -->
          <facet set="false">facet.locale.*</facet>
          <facet set="true">facet.locale.de</facet>
          <facet set="true">facet.locale.de_DE</facet>
          <facet set="true">facet.locale.en</facet>
          <facet set="true">facet.locale.en_US</facet>
        </image>
      </destination>
      <source>
        <publisher name="solaris">
          <origin name="http://192.168.2.12/"/>
        </publisher>
      </source>
      <!-- By default the latest build available, in the specified IPS
           repository, is installed. If another build is required, the
           build number has to be appended to the 'entire' package in
           the following form: <name>pkg:/entire@0.5.11-0.build#</name> -->
      <software_data action="install">
        <name>pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.0</name>
        <name>pkg:/group/system/solaris-small-server</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>
root@ai-server:~# installadm create-manifest -n x86-fcs -d \
  -f ./s11-fcs.small.local.xml
root@ai-server:~# installadm list -m -n x86-fcs
Manifest            Status   Criteria
--------            ------   --------
S11 Small fcs local Default  None
orig_default        Inactive None

The major points in this new manifest are:
- Install "solaris-small-server"
- Install a few locales less than the default.  I'm not that fluent in French or Japanese...
- Use my own package service as publisher, running on IP address 192.168.2.12
- Install the initial release of Solaris 11: pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.0

Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  
The modular approach makes it easy to configure numerous clients later on:

root@ai-server:~# mkdir -p /export/install/configs/profiles
root@ai-server:~# cd /export/install/configs/profiles
root@ai-server:~# sysconfig create-profile -o default.xml
root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml
root@ai-server:~# cp default.xml user.xml
root@ai-server:~# vi general.xml mars.xml user.xml
root@ai-server:~# more general.xml mars.xml user.xml
::::::::::::::
general.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/timezone">
    <instance enabled="true" name="default">
      <property_group type="application" name="timezone">
        <propval type="astring" name="localtime" value="Europe/Berlin"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/environment">
    <instance enabled="true" name="init">
      <property_group type="application" name="environment">
        <propval type="astring" name="LANG" value="C"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/keymap">
    <instance enabled="true" name="default">
      <property_group type="system" name="keymap">
        <propval type="astring" name="layout" value="US-English"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/console-login">
    <instance enabled="true" name="default">
      <property_group type="application" name="ttymon">
        <propval type="astring" name="terminal_type" value="vt100"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="network/physical">
    <instance enabled="true" name="default">
      <property_group type="application" name="netcfg">
        <propval type="astring" name="active_ncp" value="DefaultFixed"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/name-service/switch">
    <property_group type="application" name="config">
      <propval type="astring" name="default" value="files"/>
      <propval type="astring" name="host" value="files dns"/>
      <propval type="astring" name="printer" value="user files"/>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="system/name-service/cache">
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="network/dns/client">
    <property_group type="application" name="config">
      <property type="net_address" name="nameserver">
        <net_address_list>
          <value_node value="192.168.2.1"/>
        </net_address_list>
      </property>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
</service_bundle>
::::::::::::::
mars.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="application" name="install_ipv4_interface">
        <propval type="astring" name="address_type" value="static"/>
        <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/>
        <propval type="astring" name="name" value="net0/v4"/>
        <propval type="net_address_v4" name="default_route" value="192.168.2.1"/>
      </property_group>
      <property_group type="application" name="install_ipv6_interface">
        <propval type="astring" name="stateful" value="yes"/>
        <propval type="astring" name="stateless" value="yes"/>
        <propval type="astring" name="address_type" value="addrconf"/>
        <propval type="astring" name="name" value="net0/v6"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/identity">
    <instance enabled="true" name="node">
      <property_group type="application" name="config">
        <propval type="astring" name="nodename" value="mars"/>
      </property_group>
    </instance>
  </service>
</service_bundle>
::::::::::::::
user.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/config-user">
    <instance enabled="true" name="default">
      <property_group type="application" name="root_account">
        <propval type="astring" name="login" value="root"/>
        <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
        <propval type="astring" name="type" value="role"/>
      </property_group>
      <property_group type="application" name="user_account">
        <propval type="astring" name="login" value="stefan"/>
        <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
        <propval type="astring" name="type" value="normal"/>
        <propval type="astring" name="description" value="Stefan Hinker"/>
        <propval type="count" name="uid" value="12345"/>
        <propval type="count" name="gid" value="10"/>
        <propval type="astring" name="shell" value="/usr/bin/bash"/>
        <propval type="astring" name="roles" value="root"/>
        <propval type="astring" name="profiles" value="System Administrator"/>
        <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/>
      </property_group>
    </instance>
  </service>
</service_bundle>
root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \
  -c ipv4=192.168.2.100
root@ai-server:~# installadm list -p
Service Name Profile
------------ -------
x86-fcs      general.xml
             mars.xml
             user.xml
root@ai-server:~# installadm list -n x86-fcs -p
Profile     Criteria
-------     --------
general.xml None
mars.xml    ipv4 = 192.168.2.100
user.xml    None

Here's the idea behind these files:
- "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same.
- "user.xml" only contains user definitions.  That is, a root password and a primary user.  Both of these profiles will be valid for all clients (for now).
- "mars.xml" defines network settings for an individual client.  This profile is associated with an IP address.  For this to work, I'll have to tweak the DHCP settings in the next step:

root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs
root@ai-server:~# vi /etc/inet/dhcpd4.conf
root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf
host 080027AA3DB1 {
  hardware ethernet 08:00:27:AA:3D:B1;
  fixed-address 192.168.2.100;
  filename "01080027AA3DB1";
}

This completes the client preparations.  I manually added the IP address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)

Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the grub boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  
The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead.

root@ai-server:~# cd /etc/netboot
root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
root@ai-server:~# vi menu.lst.01080027AA3DB1
root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
1,2c1,2
< default=1
< timeout=10
---
> default=0
> timeout=30
root@ai-server:~# more menu.lst.01080027AA3DB1
default=1
timeout=10
min_mem64=0

title Oracle Solaris 11 11/11 Text Installer and command line
  kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555
  module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

title Oracle Solaris 11 11/11 Automated Install
  kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text
  module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

Now just boot the client off the network using PXE boot.  For my demo purposes, that's a client from VirtualBox, of course.  That's all there is to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?

    Read the article

  • Print SSRS Report / PDF automatically from SQL Server agent or Windows Service

    - by Jeremy Ramos
    Originally posted on: http://geekswithblogs.net/JeremyRamos/archive/2013/10/22/print-ssrs-report--pdf-from-sql-server-agent-or.aspx

I have turned the Web upside-down to find a solution to this, considering the fewest components and the least maintenance possible to achieve automated printing of an SSRS report. The reason is that we do not have a full software development team to maintain an app, and we have to minimize the support overhead for the support team.

Here is my setup:
- SQL Server 2008 R2 on Windows Server 2008 R2
- PDF-format reports generated by SSRS report subscriptions to a Windows file share
- Network printer
- Coloured reports with logo and branding

I found and tested the following solutions, to no avail:

1. Calling the Adobe Acrobat Reader exe: "C:\Program Files (x86)\Adobe\Reader 11.0\Reader\acroRd32.exe" /n /s /o /h /t "C:\temp\print.pdf" "\\printserver\printername". Pros: a very simple option. Cons: Adobe Acrobat Reader requires launching its GUI to send a job to a printer, so this option cannot be used when printing from a service.

2. Calling the Adobe Acrobat Reader exe as a process from a .NET console app. Pros: a bit harder than the above, but still a simple solution. Cons: same as above.

3. PowerShell script (Start-Process -FilePath "C:\temp\print.pdf" -Verb Print). Pros: a very simple option. Cons: uses the default PDF client in quiet mode to print, but also requires an active session.

4. Foxit Reader. Pros: a very simple option. Cons: requires a GUI, same as Adobe Acrobat Reader.

5. Using the Reporting Services web service to run and stream the report to an image object which is then passed to the printer. Cons: quite complex - this is what we're trying to avoid.

After pulling my hair out for two days, testing and evaluating the above solutions, I ended up learning more about printers (more than ever in my entire life) and how printer drivers work with PostScript. I then bumped into a PostScript interpreter called Ghostscript (http://www.ghostscript.com/), and the solution started to get clearer and clearer.

I managed to achieve a solution (maybe not the simplest, but efficient enough to achieve the least-maintenance-least-components goal) in 3 simple steps:

1. Install Ghostscript (http://www.ghostscript.com/download/) - this is an open-source PostScript and PDF interpreter. Printing directly using Ghostscript only produces grayscale prints with the generic laserjet driver, unless you save as a BMP image and then interpret the colours from the image.

2. Install GSview (http://pages.cs.wisc.edu/~ghost/gsview/) - this is a Ghostscript add-on that makes it easier to print directly to a Windows printer. GSPrint automates the PDF -> BMP -> printer driver chain described above.

3. Run the GSPrint command from SQL Server Agent or a Windows service: "C:\Program Files\Ghostgum\gsview\gsprint.exe" -color -landscape -all -printer "printername" "C:\temp\print.pdf". Command-line options are documented here: http://pages.cs.wisc.edu/~ghost/gsview/gsprint.htm

Another lesson learned: since you are calling the script from the service account, it will not necessarily have the printer mapped in its Windows profile (if it even has one). The workaround is to add a local printer as you normally would and then map this printer to the network printer. Note that you may need to install the printer driver locally on the server.

So, that's it! There are many ways to achieve a solution. The key thing is how you provide the smartest solution!
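For those who do want a thin .NET wrapper around step 3 (for example, to host it in a Windows service), here is a hedged sketch; the gsprint.exe path and printer name mirror the example above and are assumptions you would substitute for your own:

using System;
using System.Diagnostics;

class PdfPrintJob
{
    static void Main()
    {
        // Build the same gsprint command line used in step 3 above.
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\Program Files\Ghostgum\gsview\gsprint.exe",
            Arguments = "-color -landscape -all -printer \"printername\" \"C:\\temp\\print.pdf\"",
            UseShellExecute = false,   // no GUI or interactive session required
            CreateNoWindow = true,
            RedirectStandardError = true
        };

        using (Process p = Process.Start(psi))
        {
            string errors = p.StandardError.ReadToEnd();
            p.WaitForExit();
            if (p.ExitCode != 0)
                Console.WriteLine("gsprint failed: " + errors);
        }
    }
}

Because GSPrint talks to the printer driver itself, this can run without a logged-in desktop - exactly what the Adobe Reader options above could not do.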

    Read the article

  • A couple of nice features when using OracleTextSearch

    - by kyle.hatlestad
    If you have your UCM/URM instance configured to use the Oracle 11g database as the search engine, you can be using OracleTextSearch as the search definition. OracleTextSearch uses the advanced features of Oracle Text for indexing and searching. This includes the ability to specify metadata fields to be optimized for the search index, fast rebuilding, and index optimization. If you are on 10g of UCM, then you'll need to load the OracleTextSearch component that is available in the CS10gR35UpdateBundle component on the support site (patch #6907073). If you are on 11g, no component is needed. Then you specify the search indexer name with the configuration flag SearchIndexerEngineName=OracleTextSearch. Please see the docs for other configuration settings and setup instructions.

So I thought I would highlight a couple of other unique features available with OracleTextSearch.

The first is the Drill Down feature. This feature allows you to specify metadata fields whose results will be broken down based on the total results. So in the above graphic, you can see how it broke down the extensions and gives a count for each. Then you just need to click on that link to drill into that result. This setting is perfect for option list fields and ones with a distinct set of possible values. By default, it will use the fields Type, Security Group, and Account (if enabled). But you can also specify your own fields. In 10g, you can use the following configuration entry:

DrillDownFields=xWebsiteObjectType,dExtension,dSecurityGroup,dDocType

And in 11g, you can specify it through the Configuration Manager applet. Simply click on Advanced Search Design, highlight the field to filter, click Edit, and check 'Is a filter category'.

The other feature you get with OracleTextSearch is search snippets. These snippets show the occurrence of the search term in the context of its usage. This is very similar to how Google displays its results. If you are on 10g, this is enabled by default. If you are on 11g, you need to turn the feature on. The following configuration entry will enable it:

OracleTextDisableSearchSnippet=false

Once enabled, you can add the snippets to your search results. Go to Change View -> Customize and add a new search result view. In the Available Fields list, in the Special section, select Snippet and move it to Main or Additional Information. If you want to include the snippets with the Classic results, you can add the idoc variable <$srfDocSnippet$> to display them. One caveat is that this can affect search performance on large collections, so plan the infrastructure accordingly.

    Read the article

  • SQLAuthority News – Virtual Launch Event for Office 2010 – Contest – Win MS Office License

    - by pinaldave
    Office products are integral to any PC. I admit that without the Office suite I could not survive or make enough of a living. I am a blogger and use Word to create my blogs. I am a SQL Server trainer and I use PowerPoint as my presentation tool. I am a SQL Server consultant and I use Excel to keep my work log. I cannot imagine my life without the Office tools.

Just like any other Microsoft product, there is a strong community following the Office tools. Please count me in. This community is hosting a Virtual Launch Event for Office 2010 on May 25 and 26. The webcasts are FREE to attend, and people can take part either online or by going to the nearest available center. The sessions will be delivered by MVPs. To register, please visit: http://www.meraoffice.com. In June, a limited number of cities will be hosting Community Launch Events for Office 2010. At the launch events, attendees will get to see Office 2010 in action and learn how to do their work better with Office 2010. The details are available on http://office.merawindows.com.

To support one of the largest communities, I am announcing one contest. It is very easy to take part: you just have to answer one very simple question.

Contest: Choose the best option: With which Microsoft Office product is PowerPivot associated?

Options: 1) PowerPoint 2) Excel 3) Word

Hint: http://search.sqlauthority.com

Rules: The winner will be awarded 1 Office 2007 Home and Student license. This will be freely upgradeable to Office 2010 once it releases in June. The winners will be sent emails and they will redeem their awards via microsoftstore.co.in. The prizes can only be shipped to India, so only Indian residents are eligible. The winner will be selected by selected community leaders and MVPs at their sole discretion and will be informed by email about the award. The most creative and informative comment will win the contest.

Please spread the word about this contest. SQLAuthority.com will also send a SQL Server book to the person who generates the most traffic to this blog post using Twitter, Facebook and other social media. This competition is also open to Indian residents only. I will measure the traffic using my wordpress.com stats plugin.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Office

    Read the article

  • Easy Listening = CRM On Demand Podcasts

    - by Anne
    OK, here's my NEW favorite resource for CRM On Demand info -- podcasts! Specifically, the CRM On Demand Podcast site -- signed, sealed, and delivered with humor and know-how. Yes, I admit, I know the cast of characters. But let's face it, sometimes dealing with software is just soooo dry! Not so when discussed by the two main commentators, Louis Peters and Robert Davidson, whom someone once referred to as CRM On Demand's "Click and Clack." (Thought that was too good not to pass along!) Anyhow, another huge plus about the site is the option to listen OR to read. Out walking my dog or doing the dishes? Just turn up the podcast. Listening to music or watching TV? I'll read Louis's entertaining write-ups to glean great info about CRM On Demand in a very short period of time. So that you get a better understanding of why I like this site so much, here's a sampling of what's discussed:

Five Things about Books of Business - As Louis Peters put it in his entry, when you see "Five Things" in the title, "you'll know you're going to get some concrete advice that you can put to work right away." Well, Louis and Robert do just that, pointing you in the right direction when using Books of Business to segment data.

Moving to Indexed Fields - A Rough Guide (only an article, not a podcast) - I've read all about performance and even helped develop material around it. But nowhere have I heard indexed custom fields referred to as "super heroes." Louis and Robert use imaginative language to describe the process for moving your data to indexed fields for optimal performance.

Data Access QA from the Forums - I think that everyone would admit that data access and visibility is the most difficult topic to understand in CRM On Demand. Following up on their previous podcast on the same topic, Louis and Robert answer a few key questions from the many postings on the Oracle CRM On Demand forums. And I bet that the scenarios match many companies' business requirements...maybe even yours!

We Need to Talk About Adoption - Another expert, Tim Koehler, joins Louis to talk about how to drive user adoption: aligning product usage with business results, communicating why and how to use the product, getting feedback on usability, and so on.

Hope I've made my point -- turn to these podcasts to hear knowledgeable folks discuss CRM On Demand tips and tricks in entertaining ways. One podcast is even called "SaaS Talk"!

    Read the article

  • How To Personalize the Windows Command Prompt

    - by Matthew Guay
    Command line interfaces can be downright boring, and always seem to miss out on the fresh coats of paint liberally applied to the rest of Windows.  Here's how to add a splash of color to Command Prompt and make it unique.

By default, Windows Command Prompt is white text on a black background. It gets the job done, but maybe you want to add some color to it. To get an overview of what we can do with the color command, let's enter:

color /?

To get the color you want, enter color followed by the option for the background color and then the font color.  For example, let's make an old-fashioned green-on-black look by entering:

color 02

There are a bunch of different combinations you can do, like a black background with red text:

color 04

You can't mess it up too much.  The color command won't let you set both the font and the background to the same color, which would make it unreadable.  Also, if you want to get back to the default settings, just enter:

color

Now we're back to plain old black and white.

Personalize Command Prompt Without Commands

If you'd prefer to change the color without entering commands, just click on the Command Prompt icon in the top left corner of the window and select Properties. Select the Colors tab, and then choose the color you want for the screen text and background.  You can also enter your own RGB color combination if you want.  Here we entered the RGB values to get a purple background color like Ubuntu 10.04. Back in the Properties dialog, you can also change your Command Prompt font from the Font tab.  Choose any font you want, as long as the one you want is one of the three listed here.

Customizations you make via the Properties dialog are saved and will be used any time you open Command Prompt, but any customizations you make with the color command are only for that session.

Conclusion

Whether you want to make your command prompt bright enough to cause a sunburn or old-style enough to scare a mainframe operator, with these settings you can make Command Prompt a bit more unique.
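On a related note, if you are writing your own console application rather than theming cmd.exe itself, the .NET Console class exposes the same palette programmatically. A small sketch (my addition, not from the article) that reproduces the green-on-black look of color 02:

using System;

class RetroConsole
{
    static void Main()
    {
        Console.BackgroundColor = ConsoleColor.Black;
        Console.ForegroundColor = ConsoleColor.Green;
        Console.Clear();   // repaint the whole buffer with the new colors
        Console.WriteLine("Old-fashioned green on black, just like color 02.");
        Console.ResetColor();   // hand the user's original colors back on exit
    }
}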

    Read the article

  • Learn about MySQL with the Authentic MySQL for Beginners course

    - by Antoinette O'Sullivan
    Learn about the MySQL Server and other MySQL products by taking the authentic MySQL for Beginners course. This course covers all the basics, from MySQL download and installation to relational database concepts and database design. This course is your first step to becoming a MySQL administrator. You can take this course through one of the following delivery types:

Training-on-Demand: Start the class from your desk, at your base, and within 24 hrs of registering. Read Ben Krug on day 3 of his experience taking the MySQL for Beginners course through the Training-on-Demand option.

Live-Virtual Class: Attend this live class from your own office - no travel required. Choose from a selection of events on the schedule to suit different timezones. Delivery languages include English and German.

In-Class Event: Attend this class in an education center. Events already on the schedule include:

Location - Date - Delivery Language
Mechelen, Belgium - 14 January 2013 - English
London, England - 5 March 2013 - English
Hamburg, Germany - 25 March 2013 - German
Munich, Germany - 3 June 2013 - German
Budapest, Hungary - 5 February 2013 - Hungarian
Milan, Italy - 11 February 2013 - Italian
Rome, Italy - 4 March 2013 - Italian
Riga, Latvia - 18 February 2013 - Latvian
Amsterdam, Netherlands - 21 May 2013 - Dutch
Nieuwegein, Netherlands - 18 February 2013 - Dutch
Warsaw, Poland - 18 February 2013 - Polish
Lisbon, Portugal - 25 March 2013 - European Portuguese
Porto, Portugal - 25 March 2013 - European Portuguese
Barcelona, Spain - 11 February 2013 - Spanish
Madrid, Spain - 22 April 2013 - Spanish
Nairobi, Kenya - 14 January 2013 - English
Capetown, South Africa - 22 July 2013 - English
Pretoria, South Africa - 22 April 2013 - English
Petaling Jaya, Malaysia - 28 January 2013 - English
Ottawa, Canada - 25 March 2013 - English
Toronto, Canada - 25 March 2013 - English
Montreal, Canada - 25 March 2013 - English
Mexico City, Mexico - 14 January 2013 - Spanish
San Pedro Garza Garcia, Mexico - 5 February 2013 - Spanish
Sao Paolo, Brazil - 29 January 2013 - Brazilian Portuguese

For more information on this or other courses on the authentic MySQL curriculum, go to http://oracle.com/education/mysql. Note, many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL will broaden your career path with growing job demand.

    Read the article

< Previous Page | 486 487 488 489 490 491 492 493 494 495 496 497  | Next Page >