Search Results

Search found 1129 results on 46 pages for 'led'.


  • Alternate way to create a clone of a UNIX System

    - by Spirit
    THE STORY: (If you don't like to read much, the question is down below :) )

    Where I work we have two HP RP2470 servers: same hardware, same number of hard drives, same everything :). One of them is a production server and runs HP-UX 11.00. The poor ba***rd hasn't been turned off for years, and now I have to make a clone of it on the other server - just in case, for redundancy. The problem is simple (or not so simple): I have to make the other server exactly the same. However, the old OS version (UX 11.00 is history now) and the old software running on it have made my task almost impossible. On the production server there is also a cloning/recovery utility, Ignite-UX. I tried many times to create a recovery tape with it. When I load the tape on the backup server, the tape load succeeds (no errors, no warnings), but on the next restart it fails to load the OS :S and drops into HP's ISL prompt.

    THE QUESTION: Is there an alternate way to create a clone of the Unix system? The environment is:
    1. 2x HP RP2470 servers (non-Intel), same hardware, same number of HDDs (two each), same everything.
    2. OS running: HP-UX 11.00
    The production server has to be cloned without downtime - sadly :( as I hope they will reconsider on this one.
    For example (like on Windows platforms), if you copy an entire HDD with Windows on it to another HDD and then put that HDD in another PC, it will still work, as long as the hardware is the same. Can I do something like that with a Unix system? Can I somehow COPY the contents of the entire HDD, put those on another HDD, and then just load that HDD into the other server? (If you haven't read the story: the servers are exactly the same.) Will it work? Can it be done with ordinary commands like cp or dump or something like that? Does anyone have a similar experience?

    UPDATE: 26.01.2012
    NOTE: The update is related to "The Story". If you haven't read that part then you can skip this update. This is just a short update on the recovery logs from the Ignite tape.. someone with more experience might notice something..
    ...
    --- READING CONTENTS OF THE IGNITE TAPE ---
    --- OUTPUT OMITTED ---
    ...
    ...
    x ./configure3, 413696 bytes, 808 tape blocks
    x ./monitor_bpr, 20480 bytes, 40 tape blocks
    * Download_mini-system: Complete
    * Loading_software: Begin
    * Installing boot area on disk.
    * Enabling swap areas.
    * Backing up LVM configuration for "vg00".
    * Processing the archive source (Recovery Archive).
    * Wed Jan 25 15:27:32 EST 2012: Starting archive load of the source (Recovery Archive).
    * Positioning the tape (/dev/rmt/0mn).
    * Archive extraction from tape is beginning. Please wait.
    * Wed Jan 25 15:39:52 EST 2012: Completed archive load of the source (Recovery Archive).
    * Executing user specified script: "/opt/ignite/data/scripts/os_arch_post_l".
    * Running in recovery mode (os_arch_post_l).
    * Running the ioinit command ("/sbin/ioinit -c")
    * Creating device files via the insf command.
insf: Installing special files for sdisk instance 0 address 0/0/1/1.15.0 insf: Installing special files for sdisk instance 1 address 0/0/2/0.1.0 insf: Installing special files for sdisk instance 2 address 0/0/2/1.15.0 insf: Installing special files for stape instance 0 address 0/0/1/0.3.0 insf: Installing special files for btlan instance 0 address 0/0/0/0 insf: Installing special files for btlan instance 1 address 0/2/0/0 insf: Installing special files for pseudo driver dlpi insf: Installing special files for pseudo driver kepd insf: Installing special files for pseudo driver framebuf insf: Installing special files for pseudo driver sad * Running "/opt/upgrade/bin/tlinstall -v" and correcting transition link permissions. * Constructing the bootconf file. * Setting primary boot path to "0/0/1/1.15.0". * Executing: "/var/adm/sw/products/PHSS_20146/pfiles/iux_postload". * Executing: "/var/adm/sw/products/PHSS_25982/pfiles/iux_postload". NOTE: tlinstall is searching filesystem - please be patient NOTE: Successfully completed * Loading_software: Complete * Build_Kernel: Begin NOTE: Since the /stand/vmunix kernel is already in place, the kernel will not be re-built. Note that no mod_kernel directives will be processed. * Build_Kernel: Complete * Boot_From_Client_Disk: Begin * Rebooting machine as expected. NOTE: Rebooting system. sync'ing disks (0 buffers to flush): 0 buffers not flushed 0 buffers still dirty Closing open logical volumes... Done Console reset done. Boot device reset done. ********** VIRTUAL FRONT PANEL ********** System Boot detected ***************************************** LEDs: RUN ATTENTION FAULT REMOTE POWER FLASH OFF OFF ON ON LED State: Running non-OS code. (i.e. Boot or Diagnostics) ... ... ... --- SERVER IS PERFORMING POST SEQUENCE HERE --- --- OUTPUT OMITED --- ... ... ... ***************************************** ************ EARLY BOOT VFP ************* End of early boot detected ***************************************** Firmware Version 43.50 Duplex Console IO Dependent Code (IODC) revision 1 ------------------------------------------------------------------------------ (c) Copyright 1995-2002, Hewlett-Packard Company, All rights reserved ------------------------------------------------------------------------------ Processor Speed State CoProcessor State Cache Size Number State Inst Data --------- -------- --------------------- ----------------- ------------ 0 650 MHz Active Functional 750 KB 1.5 MB 1 650 MHz Idle Functional 750 KB 1.5 MB Central Bus Speed (in MHz) : 120 Available Memory : 2097152 KB Good Memory Required : 16140 KB Primary boot path: 0/0/1/1.15 Alternate boot path: 0/0/2/1.15 Console path: 0/0/4/1.643 Keyboard path: 0/0/4/0.0 Processor is starting autoboot process. To discontinue, press any key within 10 seconds. 10 seconds expired. Proceeding... Trying Primary Boot Path ------------------------ Booting... Boot IO Dependent Code (IODC) revision 1 HARD Booted. ISL Revision A.00.38 OCT 26, 1994 ISL booting hpux ISL>

    Read the article

  • How to prevent Android bluetooth RFCOMM connection from dying immediately after .connect()?

    - by Gilead
    I'm trying to connect to a Zeemote (http://zeemote.com/) gaming controller from Moto Droid running 2.0.1 firmware. The test application below does connect to the device (LED flashes) but connection is dropped immediately after that. I can connect to the device perfectly fine using bluez tools (log attached as well). I'm quite at a loss here, I work on it for so long that I ran out of ideas so any help would be very much appreciated. Thanks, Max =========================================== Code: public class ZeeTest extends Activity { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); try { test(); } catch (IOException e) { e.printStackTrace(); } } public void test() throws IOException { BluetoothDevice zee = BluetoothAdapter.getDefaultAdapter(). getRemoteDevice("00:1C:4D:02:A6:55"); Log.d("ZeeTest", "++++ Creating socket"); BluetoothSocket sock = zee.createRfcommSocketToServiceRecord( UUID.fromString("8e1f0cf7-508f-4875-b62c-fbb67fd34812")); Log.d("ZeeTest", "++++ Connecting"); sock.connect(); Log.d("ZeeTest", "++++ Connected"); final InputStream in = sock.getInputStream(); new Thread() { @Override public void run() { byte[] buffer = new byte[32]; int bytes = 0; int x = 0; Log.d("ZeeTest", "++++ Listening..."); while (x < 200) { x++; try { bytes = in.read(buffer); Log.d("ZeeTest", "++++ Read "+ bytes +" bytes"); } catch (IOException e) { // java.io.IOException: Software caused connection abort if (x % 50 == 0) { Log.d("ZeeTest", "Tried "+ x +" times ("+ bytes +")"); } try { Thread.sleep(100); } catch (InterruptedException ie) {} } } Log.d("ZeeTest", "++++ Done: thread exit"); } }.start(); Log.d("ZeeTest", "++++ Done: test()"); } } =========================================== Log: I/ActivityManager( 1169): Start proc zee.test for activity zee.test/.ZeeTest: pid=4294 uid=10084 gids={3002, 3001, 3003} I/dalvikvm( 4294): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=38) D/dalvikvm( 4287): LinearAlloc 0x0 used 640700 of 5242880 (12%) I/dalvikvm( 4294): Debugger thread not active, ignoring DDM send (t=0x41504e4d l=20) D/ZeeTest ( 4294): ++++ Creating socket D/ZeeTest ( 4294): ++++ Connecting E/BluetoothEventLoop.cpp( 1169): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1240/hci0/dev_00_1C_4D_02_A6_55 I/usbd ( 1068): process_usb_uevent_message(): buffer = add@/devices/virtual/bluetooth/hci0/hci0:1 I/usbd ( 1068): main(): call select(...) E/BluetoothEventLoop.cpp( 1169): event_filter: Received signal org.bluez.Adapter:DeviceFound from /org/bluez/1240/hci0 V/BluetoothEventRedirector( 1242): Received android.bluetooth.device.action.FOUND V/BluetoothEventRedirector( 1242): Received android.bleutooth.device.action.UUID D/ZeeTest ( 4294): ++++ Connected D/ZeeTest ( 4294): ++++ Done: test() D/ZeeTest ( 4294): ++++ Listening... I/ActivityManager( 1169): Displayed activity zee.test/.ZeeTest: 2296 ms (total 2296 ms) E/BluetoothEventLoop.cpp( 1169): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1240/hci0/dev_00_1C_4D_02_A6_55 I/usbd ( 1068): process_usb_uevent_message(): buffer = remove@/devices/virtual/bluetooth/hci0/hci0:1 I/usbd ( 1068): main(): call select(...) 
V/BluetoothEventRedirector( 1242): Received android.bleutooth.device.action.UUID D/ZeeTest ( 4294): Tried 50 times (0) D/ZeeTest ( 4294): Tried 100 times (0) D/ZeeTest ( 4294): Tried 150 times (0) D/ZeeTest ( 4294): Tried 200 times (0) D/ZeeTest ( 4294): ++++ Done: thread exit =========================================== Terminal log: $ sdptool browse Inquiring ... Browsing 00:1C:4D:02:A6:55 ... $ sdptool records 00:1C:4D:02:A6:55 Service Name: Zeemote Service RecHandle: 0x10015 Service Class ID List: UUID 128: 8e1f0cf7-508f-4875-b62c-fbb67fd34812 Protocol Descriptor List: "L2CAP" (0x0100) "RFCOMM" (0x0003) Channel: 1 Language Base Attr List: code_ISO639: 0x656e encoding: 0x6a base_offset: 0x100 $ rfcomm connect /dev/tty10 00:1C:4D:02:A6:55 Connected /dev/rfcomm0 to 00:1C:4D:02:A6:55 on channel 1 Press CTRL-C for hangup # rfcomm show /dev/tty10 rfcomm0: 00:1F:3A:E4:C8:40 - 00:1C:4D:02:A6:55 channel 1 connected [reuse-dlc release-on-hup tty-attached] # cat /dev/tty10 (nothing here) # hcidump HCI sniffer - Bluetooth packet analyzer ver 1.42 device: hci0 snap_len: 1028 filter: 0xffffffff < HCI Command: Create Connection (0x01|0x0005) plen 13 > HCI Event: Command Status (0x0f) plen 4 > HCI Event: Connect Complete (0x03) plen 11 < HCI Command: Read Remote Supported Features (0x01|0x001b) plen 2 > HCI Event: Read Remote Supported Features (0x0b) plen 11 < ACL data: handle 11 flags 0x02 dlen 10 L2CAP(s): Info req: type 2 > HCI Event: Command Status (0x0f) plen 4 > HCI Event: Page Scan Repetition Mode Change (0x20) plen 7 > HCI Event: Max Slots Change (0x1b) plen 3 < HCI Command: Remote Name Request (0x01|0x0019) plen 10 > HCI Event: Command Status (0x0f) plen 4 > ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Info rsp: type 2 result 0 Extended feature mask 0x0000 < ACL data: handle 11 flags 0x02 dlen 12 L2CAP(s): Connect req: psm 3 scid 0x0040 > HCI Event: Number of Completed Packets (0x13) plen 5 > ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Connect rsp: dcid 0x04fb scid 0x0040 result 1 status 2 Connection pending - Authorization pending > HCI Event: Remote Name Req Complete (0x07) plen 255 > ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Connect rsp: dcid 0x04fb scid 0x0040 result 0 status 0 Connection successful < ACL data: handle 11 flags 0x02 dlen 16 L2CAP(s): Config req: dcid 0x04fb flags 0x00 clen 4 MTU 1013 (events are properly received using bluez)
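
    Not part of the original question: on 2.0/2.1-era Android firmware, a frequently suggested workaround when createRfcommSocketToServiceRecord() connects and then immediately drops is to skip the SDP lookup and open the RFCOMM channel directly (channel 1 according to the sdptool output above) through the hidden createRfcommSocket method via reflection. Whether this helps with the Zeemote specifically is an assumption; the sketch below only shows the technique.

        import java.io.IOException;
        import java.lang.reflect.Method;

        import android.bluetooth.BluetoothAdapter;
        import android.bluetooth.BluetoothDevice;
        import android.bluetooth.BluetoothSocket;

        public class ZeeTestFallback {
            // Connects to RFCOMM channel 1 directly, bypassing the SDP record lookup.
            public BluetoothSocket connectDirect(String address) throws IOException {
                BluetoothDevice zee = BluetoothAdapter.getDefaultAdapter().getRemoteDevice(address);
                Method m;
                try {
                    // createRfcommSocket(int channel) is a hidden API, hence the reflection.
                    m = zee.getClass().getMethod("createRfcommSocket", new Class[] { int.class });
                } catch (NoSuchMethodException e) {
                    throw new IOException("createRfcommSocket not available on this firmware");
                }
                BluetoothSocket sock;
                try {
                    sock = (BluetoothSocket) m.invoke(zee, 1);
                } catch (Exception e) {
                    throw new IOException("Could not create RFCOMM socket: " + e);
                }
                sock.connect(); // a plain IOException from connect() propagates unchanged
                return sock;
            }
        }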

    Read the article

  • Java Generics, JPA 2, J2EE, JSF 2, GWT, Ajax, Oracle's Java Strategies, Flex, iPhone, Agile ALM, Gra

    - by Kim Won
    Great Indian Developer Summit 2010 – India's Biggest Polyglot Conference and Workshops for IT Software Professionals Bangalore, April 9, 2010: The GIDS.Java Conference and Workshops has announced the complete program of over 50 sessions on the present and future of the Java language and VM, how they are evolving to meet the community's ever-changing needs, and some of the cutting-edge tools, technologies & techniques used for building robust enterprise Java applications today. The GIDs.Java track at Great Indian Developer Summit takes place 22 and 23 April 2010, at the Indian Institute of Science in Bangalore. As one of the longest running independent developer conferences in India, GIDS.Java at the Great Indian Developer Summit 2010 is uniquely positioned to provide a blend of practical, pragmatic and immediately applicable knowledge and a glimpse of the future of technology. During 22 and 23 April 2010, GIDS.Java offers a multi-track conference, workshops, expo show floor, and networking opportunities. The first keynote at GIDS.Java "Pointy Haired Bosses and Pragmatic Programmers" is led by Dr. Venkat Subramaniam. He speaks about how each of us has a professional responsibility to be objective and make decisions that will help us and our teams be productive and deliver results. Venkat will pick on some fallacies, lay down facts, and discuss how to stay professional and objective in our daily efforts. The second keynote of the day explains the practical features that make the Cloud so interesting, and why everyone should start using it in their everyday life. Simone Brunozzi, Amazon Web Services Technology Evangelist, will detail technical examples, business details all mixed with a lot of Italian humor to ensure audience enjoy this talk without a single line of code. The third keynote of the day gives an exciting overview of directions in the Java space for Oracle, featuring concrete signs of Oracles heavy investment, a clear concise strategy overview, and deep dives into some of the most interesting pieces of technology being developed in the Java Platform Group today; such as JavaEE, JDK7, JavaFX, and our exciting new visual tools. Featuring demos by a Java evangelism team star, Simon Ritter, this talk takes you top to bottom in Java Technology. Featured talks at GID.Web include: Good, Bad, and Ugly of Java Generics, Venkat Subramaniam Pure Java Ajax: An Overview of GWT 2.0, Marty Hall How JPA 2.0 Makes a Good Thing Even Better, Mike Keith Building Enterprise RIAs with Adobe Flex and Java, Sujit Reddy G Integrated Ajax Support in JSF 2.0, Marty Hall Design Patterns in Java and Groovy, Venkat Subramaniam A Gentle Introduction to iPhone and Obj-C for Java Developers, Matthew McCullough Cloud Computing: Azure for Java Developers, Janakiram MSV Ajax Support in the Prototype JavaScript Library, Marty Hall First steps to IT Heaven Through the Cloud. 
Part III: .Java, Simone Brunozi Building Web 2.0 User Interfaces for Web Service Models using JSF, Frank Nimphius and Jobinesh P Acceptance Test Driven Development, John Tobin and Mohammed Mohsinali Architecting Your Java Applications for the Cloud, Praveen Srivatsa Effective Java, Venkat Subramaniam The Amazing Groovy Weight-loss Plan, Scott Davis Enterprise Modeling - from Conceptual Planning to Technical Blueprints, J Sripad Java Collections Renaissance, Donald Raab and Vlad Zakharov Power 7 and IBM J9VM, Himanshu Goyal A Whistle-stop Tour of Maven 3.0, Matthew McCullough Mass Volume Opportunities for Java Developers, Jouko Nuottila Emerging Technology Complex Event Processing, Duvvuri Srinivas Agile ALM for Distributed Development, Karthi Swaminathan Dim Sum Grails - A Sampler of Practical Non Database-Driven Grails Applications, Scott Davis Diagnosing Performance Bottlenecks in J2EE, Deepak Kaul Business Driven Identity Management, Suneet Agera Combining Java EE with OSGi using Eclipse Gemini, Mike Keith Workshop: Essence of Functional Programming, Venkat Subramaniam Workshop: Agile Development, Tools, and Teams and Scrum Certification, Stephen Forte Workshop: Cloud Computing Boot Camp on the Google App Engine, Matthew McCullough Workshop: Building Your First Amazon App, Simone Brunozzi Workshop: The 180-min AJAX and JSON Spike Class, Scott Davis Workshop: PHP + Adobe Flex = Killer RIA, Shyamprasad P Workshop: User Expereince Evaluation Model Walkthrough, Sanna Häiväläinen Workshop: Building Data Centric Applications using Adobe Flex and Java, Prashant Singh Workshop: Monetizing your Apps with PayPal X Payments Platform, Khurram Khan, Praveen Alavilli Sponsors of Great Indian Developer Summit 2010 include: Platinum sponsors Microsoft, Oracle Forum Nokia and Adobe; Gold sponsors Intel and SAP; Silver sponsors Quest Software, PayPal, Telerik and AMT. About Great Indian Developer Summit Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advise from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials - GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms, practical tutorials that deep dive into technical skill and best practices, inspirational keynote presentations, an Expo Hall featuring dozens of the latest projects and products activities, engaging networking events, and the interact with the best and brightest of speakers from around the world. For further information on GIDS 2010, please visit the summit on the web http://www.developersummit.com/ A Saltmarch Media Press Release E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • Ajax, Lizard Brain Web Design, JSF, Struts, JavaScript, Mobile Web, Flash, jQuery, GWT, Harmony at I

    - by Kim Won
    Great Indian Developer Summit 2010 – India's Biggest Polyglot Conference and Workshops for IT Software Professionals Bangalore, April 9, 2010: The GIDS.Web Conference and Workshops has announced the complete program of over 30 sessions on how browser and rich web technologies such as AJAX, DHTML, Mashups, Web 2.0, Enterprise 2.0 technologies, and Rich UI technologies are making money and gaining market-share for some of the leading businesses in the world. The GIDS.Web track at Great Indian Developer Summit takes place 21 and 23 April 2010, at the Indian Institute of Science in Bangalore. As one of the longest running independent developer conferences in India, GIDS.Web at the Great Indian Developer Summit 2010 is uniquely positioned to provide a blend of practical, pragmatic and immediately applicable knowledge and a glimpse of the future of technology. During 21 and 23 April 2010, GIDS.Web offers a multi-track conference, workshops, expo show floor, and networking opportunities. The first keynote at GIDS.Web is led by the leading Java EE and Ajax developer, speaker, and author Marty Hall. The best of India's Java and RIA programmers have learnt the subject from Marty's seminal books Core Servlets and JavaServer Pages (first and second editions), More Servlets and JavaServer Pages, and Core Web Programming (first and second editions) from Prentice Hall and Sun Microsystems Press. Marty's keynote address is a comparison of approaches to building rich Internet applications with Ajax. Marty says Ajax development is difficult, and there are several fundamentally different strategies to building Ajaxified Web applications. The keynote address will survey the three most important of these approaches: using an Ajax-enabled JavaScript library such as jQuery, Prototype, Scriptaculous, Dojo, or Ext/JS; using a Web framework such as JSF 2.0 or Struts 2 that has integrated Ajax support; using the Google Web Toolkit (GWT) to build "pure Java" Ajax applications. The talk will compare and contrast these three approaches, discussing the types of applications that fit best for each option. Over the course of the summit Marty will conduct several more sessions on "Choosing an Ajax/JavaScript Toolkit: A Comparison of the Most Popular JavaScript Libraries", "Pure Java Ajax: An Overview of GWT 2.0", "Integrated Ajax Support in JSF 2.0" and "Ajax Support in the Prototype JavaScript Library". The second keynote by the head of Adobe's Flash initiative in India, Ramesh Srinivasaraghavan, explores the state of art in web application development and identify trends that could transform the way we create and use web applications. The talk explains how the Adobe Flash Platform has fuelled this revolution with an integrated set of technologies for delivering the most compelling applications, content and video to the widest possible audience. The Director of Forum Nokia will explain how cloud computing coupled with mobile applications enable consumers to have access to powerful services and improved user experiences never before thought possible. IEEE's 2010 President-Elect Sorel Reisman's afternoon address steps to improve the IT profession in India. 
Featured talks at GID.Web also include: Web 2.0 Checklist - Deconstructing Modern Websites, Scott Davis Choosing an Ajax/JavaScript Toolkit: Comparison of Popular JavaScript Libraries, Marty Hall Lizard Brain Web Design, Scott Davis Effective Design Processes and Resources for Mobile Web Development, Arabella David NoSQL: The Shift to a Non-relational World, Nosh Petigara Open Source Web Debugging Tools, Matthew McCullough Building Line of Business Applications with Silverlight 4.0, Stephen Forte Hadoop - Divide and Conquer, Matthew McCullough Adobe Flash Catalyst for Agile Interaction Design, Harish Sivaramakrishnan Using jQuery and AJAX to Build Front-ends for ASP.NET and ASP.NET MVC, Pandurang Nayak First Steps to IT Heaven Through the Cloud. Part II: .WEB, Simone Brunozzi Building Rich Internet Applications with SL RIA Web Services, Pandurang Nayak Enriching Cloud Applications with Adobe Flash Platform, Ramesh Srinivasaraghavan Payments for the Web.future, Khurram Khan and Praveen Alavilli Longevity of Scalable Systems, Nishad Kamat Transform yourself into a Mobile App Developer Using Web Run Time, Balagopal K S Developing Multi Screen Applications on Adobe Flash Platform, Hemanth Sharma Why Harmony and For Whom?, Himanshu Goyal IIS Hosting Solution for ASP.net and PHP Web Sites, Nahas Mohammed Building Pluggable Web applications using Django, Lakshman Prasad Workshop: The 180-min AJAX and JSON Spike Class, Scott Davis Workshop: Essence of Functional Programming, Venkat Subramaniam Workshop: Agile Development, Tools, and Teams and Scrum Certification, Stephen Forte Workshop: PHP + Adobe Flex = Killer RIA, Shyamprasad P Workshop: Cloud Computing Boot Camp on the Google App Engine, Matthew McCullough Workshop: Building Data Centric Applications using Adobe Flex and Java, Prashant Singh Workshop: Building Your First Amazon App, Simone Brunozzi Workshop: Windows Azure Deep Dive, Ramaprasanna Chellamuthu Workshop: Monetizing your Apps with PayPal X Payments Platform, Khurram Khan, Praveen Alavilli Workshop: User Expereince Evaluation Model Walkthrough, Sanna Häiväläinen Sponsors of Great Indian Developer Summit 2010 include: Platinum sponsors Microsoft, Oracle Forum Nokia and Adobe; Gold sponsors Intel and SAP; Silver sponsors Quest Software, PayPal, Telerik and AMT. About Great Indian Developer Summit Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advise from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials - GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms, practical tutorials that deep dive into technical skill and best practices, inspirational keynote presentations, an Expo Hall featuring dozens of the latest projects and products activities, engaging networking events, and the interact with the best and brightest of speakers from around the world. 
For further information on GIDS 2010, please visit the summit on the web http://www.developersummit.com/ A Saltmarch Media Press Release E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • migrating webclient to WCF; WCF client serializes parametername of method

    - by Wouter
    I'm struggling with migrating from webservice/webclient architecture to WCF architecture. The object are very complex, with lots of nested xsd's and different namespaces. Proxy classes are generated by adding a Web Reference to an original wsdl with 30+ webmethods and using xsd.exe for generating the missing SOAPFault objects. My pilot WCF Service consists of only 1 webmethod which matches the exact syntax of one of the original methods: 1 object as parameter, returning 1 other object as result value. I greated a WCF Interface using those proxy classes, using attributes: XMLSerializerFormat and ServiceContract on the interface, OperationContract on one method from original wsdl specifying Action, ReplyAction, all with the proper namespaces. I create incoming client messages using SoapUI; I generated a project from the original WSDL files (causing the SoapUI project to have 30+ methods) and created one new Request at the one implemented WebMethod, changed the url to my wcf webservice and send the message. Because of the specified (Reply-)Action in the OperationContractAttribute, the message is actually received and properly deserialized into an object. To get this far (40 hours of googling), a lot of frustration led me to using a custom endpoint in which the WCF 'wrapped tags' are removed, the namespaces for nested types are corrected, and the generated wsdl get's flattened (for better compatibility with other tools then MS VisualStudio). Interface code is this: [XmlSerializerFormat(Use = OperationFormatUse.Literal, Style = OperationFormatStyle.Document, SupportFaults = true)] [ServiceContract(Namespace = Constants.NamespaceStufZKN)] public interface IOntvangAsynchroon { [OperationContract(Action = Constants.NamespaceStufZKN + "/zakLk01", ReplyAction = Constants.NamespaceStufZKN + "/zakLk01", Name = "zakLk01")] [FaultContract(typeof(Fo03Bericht), Namespace = Constants.NamespaceStuf)] Bv03Bericht zakLk01([XmlElement("zakLk01", Namespace = Constants.NamespaceStufZKN)] ZAKLk01 zakLk011); When I use a Webclient in code to send a message, everything works. My problem is, when I use a WCF client. I use ChannelFactory< IOntvangAsynchroon to send a message. But the generated xml looks different: it includes the parametername of the method! It took me a lot of time to figure this one out, but here's what happens: Correct xml (stripped soap envelope): <soap:Body> <zakLk01 xmlns="http://www.egem.nl/StUF/sector/zkn/0310"> <stuurgegevens> <berichtcode xmlns="http://www.egem.nl/StUF/StUF0301">Bv01</berichtcode> <zender xmlns="http://www.egem.nl/StUF/StUF0301"> <applicatie>ONBEKEND</applicatie> </zender> </stuurgegevens> <parameters> </parameters> </zakLk01> </soap:Body> Bad xml: <soap:Body> <zakLk01 xmlns="http://www.egem.nl/StUF/sector/zkn/0310"> <zakLk011> <stuurgegevens> <berichtcode xmlns="http://www.egem.nl/StUF/StUF0301">Bv01</berichtcode> <zender xmlns="http://www.egem.nl/StUF/StUF0301"> <applicatie>ONBEKEND</applicatie> </zender> </stuurgegevens> <parameters> </parameters> </zakLk011> </zakLk01> </soap:Body> Notice the 'zakLk011' element? It is the name of the parameter of the method in my interface! So NOW it is zakLk011, but it when my parameter name was 'zakLk01', the xml seemed to contain some magical duplicate of the tag above, but without namespace. Of course, you can imagine me going crazy over what was happening before finding out it was the parametername! I know have actually created a WCF Service, at which I cannot send messages using a WCF Client anymore. 
    For clarity: the method does get invoked through the WCF client on my webservice, but the parameter object is empty. Because I'm using a custom endpoint to log the incoming xml, I can see the message is received fine, just with the wrong syntax!

    WCF client code:

        ZAKLk01 stufbericht = MessageFactory.CreateZAKLk01();
        ChannelFactory<IOntvangAsynchroon> factory = new ChannelFactory<IOntvangAsynchroon>(new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8193/Roxit/Link/zkn0310"));
        factory.Endpoint.Behaviors.Add(new LinkEndpointBehavior());
        IOntvangAsynchroon client = factory.CreateChannel();
        client.zakLk01(stufbericht);

    I am not using a generated client; I just reference the webservice like I have done lots of times. Can anyone please help me? I can't google anything on this...

    Read the article

  • Correct way to edit and update complex viewmodel objects using asp.net-mvc2 and entity framework

    - by jslatts
    I have a table in my database with a one to many relationship to another table: ParentObject ID Name Description ChildObject ID Name Description ParentObjectID AnotherObjectID The objects are mapped into Entity Framework and exposed through a data access class. It seemed like ViewModels are recommended when the data to be displayed greatly differs from the domain object, so I created a ViewModel as follows: public class ViewModel { public IList<ParentObject> ParentObjects { get; set; } public ParentObject selectedObject { get; set; } public IList<ChildObject> ChildObjects { get; set; } } I have a view that displays a list of ParentObjects and when clicked will allow a ChildObject to be modified saved. <% using (Html.BeginForm()) { %> <table> <% foreach (var parent in Model.ParentObjects) { %> <tr> <td> ObjectID [<%= Html.Encode(parent.ID)%>] </td> <td> <%= Html.Encode(parent.Name)%> </td> <td> <%= Html.Encode(parent.Description)%> </td> </tr> <% } %> </table> <% if (Model.ParentObject != null) { %> <div> Name:<br /> <%= Html.TextBoxFor(model => model.ParentObject.Name) %> <%= Html.ValidationMessageFor(model => model.ParentObject.Name, "*")%> </div> <div> Description:<br /> <%= Html.TextBoxFor(model => model.ParentObject.Description) %> <%= Html.ValidationMessageFor(model => model.ParentObject.Description, "*")%> </div> <div> Child Objects </div> <% for (int i = 0; i < Model.ParentObject.ChildObjects.Count(); i++) { %> <div> <%= Html.DisplayTextFor(sd => sd.ChildObjects[i].Name) %> </div> <div> <%= Html.HiddenFor(sd => sd.ChildObjects[i].ID )%> <%= Html.TextBoxFor( sd => sd.ChildObjects[i].Description) %> <%= Html.ValidationMessageFor(sd => sd.ChildObjects[i].Description, "*") %> </div> <% } } } %> This all works fine. My question is around the best way to update the EF objects and persist the changes back to the database. I initially tried: [HttpPost] public ActionResult Edit(ViewModel viewModel) { ParentObject parent = myRepository.GetParentObjectByID(viewModel.SelectedObject.ID); if ((!ModelState.IsValid) || !TryUpdateModel(parent, "SelectedObject", new[] { "Name", "Description" })) { || !TryUpdateModel(parent.ChildObjects, "ChildObjects", new[] { "Name", "Description" })) { //Code to handle failure and return the current model snipped return View(viewModel); } myRepository.Save(); return RedirectToAction("Edit"); } When I try to save a change to the child object, I get this exception: Entities in 'MyEntities.ChildObject' participate in the 'FK_ChildObject_AnotherObject' relationship. 0 related 'AnotherObject' were found. 1 'AnotherObject' is expected. Investigation on StackOverflow and generally googling led me to this blog post that seems to describe my problem: TryUpdateModel() does not correctly handle nested collections. Apparently, (and stepping through the debugger confirms this) it creates a new ChildObject instead of associating with the EF objects from my instantiated context. My hacky work around is this: if (viewModel.ChildObjects.Count > 0) { foreach (ChildObject modelChildObject in viewModel.ChildObjects) { ChildObject childToUpdate = ParentObject.ChildObject.Where(a => a.ID == modelChildObject.ID).First(); childToUpdate.Name = modelChildObject.Name; } } This seems to work fine. My question to you good folks: Is there correct way to do this? I tried following the suggestion for making a custom model binder per the blog link I posted above but it didn't work (there was an issue with reflection) and I needed to get something going ASAP. 
    PS - I tried to clean up the code to hide specific information, so beware that I may have hosed something up. I mainly just want to know whether other people have solved this problem. Thanks!

    Read the article

  • How to compare a memory bits in C++?

    - by Trunet
    Hi, I need help with a memory bit comparison function. I bought a LED Matrix here with 4 x HT1632C chips and I'm using it on my arduino mega2560. There're no code available for this chipset(it's not the same as HT1632) and I'm writing on my own. I have a plot function that get x,y coordinates and a color and that pixel turn on. Only this is working perfectly. But I need more performance on my display so I tried to make a shadowRam variable that is a "copy" of my device memory. Before I plot anything on display it checks on shadowRam to see if it's really necessary to change that pixel. When I enabled this(getShadowRam) on plot function my display has some, just SOME(like 3 or 4 on entire display) ghost pixels(pixels that is not supposed to be turned on). If I just comment the prev_color if's on my plot function it works perfectly. Also, I'm cleaning my shadowRam array setting all matrix to zero. variables: #define BLACK 0 #define GREEN 1 #define RED 2 #define ORANGE 3 #define CHIP_MAX 8 byte shadowRam[63][CHIP_MAX-1] = {0}; getShadowRam function: byte HT1632C::getShadowRam(byte x, byte y) { byte addr, bitval, nChip; if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } bitval = 8>>(y&3); x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); if ((shadowRam[addr][nChip-1] & bitval) && (shadowRam[addr+32][nChip-1] & bitval)) { return ORANGE; } else if (shadowRam[addr][nChip-1] & bitval) { return GREEN; } else if (shadowRam[addr+32][nChip-1] & bitval) { return RED; } else { return BLACK; } } plot function: void HT1632C::plot (int x, int y, int color) { if (x<0 || x>X_MAX || y<0 || y>Y_MAX) return; if (color != BLACK && color != GREEN && color != RED && color != ORANGE) return; char addr, bitval; byte nChip; byte prev_color = HT1632C::getShadowRam(x,y); bitval = 8>>(y&3); if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); switch(color) { case BLACK: if (prev_color != BLACK) { // compare with memory to only set if pixel is other color // clear the bit in both planes; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case GREEN: if (prev_color != GREEN) { // compare with memory to only set if pixel is other color // set the bit in the green plane and clear the bit in the red plane; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case RED: if (prev_color != RED) { // compare with memory to only set if pixel is other color // clear the bit in green plane and set the bit in the red plane; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case ORANGE: if (prev_color != ORANGE) { // compare with memory to only set if pixel is other color // set the bit in both the green and red planes; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; } } If helps: The datasheet of board I'm using. On page 7 has the memory mapping I'm using. Also, I have a video of display working.

    Read the article

  • Application crashing when talking to oracle unless executable path contains spaces

    - by Lasse V. Karlsen
    We have an x-files problem with our .NET application. Or, rather, hybrid Win32 and .NET application. When it attempts to communicate with Oracle, it just dies. Vanishes. Goes to the big black void in the sky. No event log message, no exception, no nothing. If we simply ask the application to talk to a MS SQL Server instead, which has the effect of replacing the usage of OracleConnection and related classes with SqlConnection and related classes, it works as expected. Today we had a breakthrough. For some reason, a customer had figured out that by placing all the application files in a directory on his desktop, it worked as expected with Oracle as well. Moving the directory down to the root of the drive, or in C:\Temp or, well, around a bit, made the crash reappear. Basically it was 100% reproducable that the application worked if run from directory on desktop, and failed if run from directory in root. Today we figured out that the difference that counted was wether there was a space in the directory name or not. So, these directories would work: C:\Program Files\AppDir\Executable.exe C:\Temp Lemp\AppDir\Executable.exe C:\Documents and Settings\someuser\Desktop\AppDir\Executable.exe whereas these would not: C:\CompanyName\AppDir\Executable.exe C:\Programfiler\AppDir\Executable.exe <-- Program Files in norwegian C:\Temp\AppDir\Executable.exe I'm hoping someone reading this has seen similar behavior and have a "aha, you need to twiddle the frob on the oracle glitz driver configuration" or similar. Anyone? Followup #1: Ok, I've processed the procmon output now, both files from when I hit the button that attempts to open the window that triggers the cascade failure, and I've noticed that they keep track mostly, there's some smallish differences near the top of both files, and they they keep track a long way down. However, when one run fails, the other keeps going and the next few lines of the log output are these: ReadFile C:\oracle\product\10.2.0\db_1\BIN\orageneric10.dll SUCCESS Offset: 274 432, Length: 32 768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O ReadFile C:\oracle\product\10.2.0\db_1\BIN\orageneric10.dll SUCCESS Offset: 233 472, Length: 32 768, I/O Flags: Non-cached, Paging I/O, Synchronous Paging I/O After this, the working run continues to execute, and the other touches the mscorwks.dll files a few times before threads close down and the app closes. Thus, the failed run does not touch the above files. Followup #2: Figured I'd try to upgrade the oracle client drivers, but 10.2.0.1 is apparently the highest version available for Windows 2003 server and XP clients. Followup #3: Well, we've ended up with a black-box solution. Basically we found that the problem is somewhere related to XPO and Oracle. XPO has a system-table it manages, called XPObjectType, with three columns: Oid, TypeName and AssemblyName. Due to how Oracle is configured in the databases we talk to, the column names were OID, TYPENAME and ASSEMBLYNAME. This would ordinarily not be a problem, except that XPO talks to the schema information directly and checks if the table is there with the right column names, and XPO doesn't handle case differences so it sees a XPObjectType table with three unknown columns and none of those it expects. Exactly what XPO does now I don't really know, but if I dropped this table, and recreated it with the right case, using double quotes around all the column names to get the case right, the problem doesn't crop up. 
Exactly where the space in the folder name comes into this, I still have no idea, but this problem had two tiers: Stop the application from crashing at our customers, short-term solution Fix the bug, long-term solution Right now tier 1 is solved, tier 2 will be put back into the queue for now and prioritized. We're facing some bigger changes to our data tier anyway so this might not be a problem we need to solve, at least if all our Oracle-customers verify that the table-fix actually gets rid of the problem. I'll accept the answer by Dave Markle since though Process Monitor (the big brother of File Monitor) didn't actually pinpoint the problem, I was able to use it to determine that after my breakpoint in user-code where XPO had built up the query for this table, no I/O happened until all the entries for the application closing down was logged, which led me to believe it was this table that was the culprit, or at least influenced the problem somehow. If I manage to get to the real cause of this, I'll update the post.

    Read the article

  • Objects in Java ArrayList don't get updated.

    - by Sbm007
    This is going to be a very long post, hopefully you can understand what I'm talking about and I appreciate any help. Thanks Basically, I've created a personal, non-commercial project (which I don't plan to release) that can read ZIP and RAR files. It can only read the contents in the archive, the folders inside, the files inside the folders and its properties (such as last modified date, last modified time, CRC checksum, uncompressed size, compressed size and file name). It can't extract files either, so it's really a ZIP/RAR viewer if you may. Anyway that's slightly irrelevant to my problem but I thought I'd give you some background info. Now for my problem: I can successfully list all the folders and files inside a ZIP archive, so now I want to take that raw input and link it together in some useful way. I made 2 classes: ArchiveFile (represents a file inside a ZIP) and ArchiveFolder (represents a folder inside a ZIP). They both have some useful methods such as getLastModifiedDate, getName, getPath and so on. But the difference is that ArchiveFolder can hold an ArrayList of ArchiveFile's and additional ArchiveFolder's (think of this as files and folders inside a folder). Now I want to populate my raw input into one root ArchiveFolder, which will have all the files in the root dir of the ZIP in the ArchiveFile's ArrayList and any additional folders in the root dir of the ZIP in the ArchiveFolder's ArrayList (and this process can continue on like this like a chain reaction (more files/folders in that ArchiveFolder etc etc). So I came up with the following code: while (archive.hasMore()) { String path = ""; ArchiveFolder current = root; String[] contents = archive.getName().split("/"); for (int x = 0; x < contents.length; ++x) { if (x == (contents.length - 1) && !archive.getName().endsWith("/")) { // If on last item and item is a file path += contents[x]; // Update final path ArchiveFile file = new ArchiveFile(path, contents[x], archive.getUncompressedSize(), archive.getCompressedSize(), archive.getModifiedTime(), archive.getModifiedDate(), archive.getCRC()); current.addFile(file); // Create and add the file to the current ArchiveFolder } else if (x == (contents.length - 1)) { // Else if we are on last item and it is a folder path += contents[x] + "/"; // Update final path ArchiveFolder folder = new ArchiveFolder(path, contents[x], archive.getModifiedTime(), archive.getModifiedDate()); current.addFolder(folder); // Create and add this folder to the current ArchiveFile } else { // Else if we are still traversing through the path path += contents[x] + "/"; // Update path ArchiveFolder folder = new ArchiveFolder(path, contents[x]); current.addFolder(folder); // Create and add folder (remember we do not know the modified date/time as all we know is the path, so we can deduce the name only) current = folder; // Update current ArchiveFolder to the newly created one for the next iteration of the for loop } } archive.getNext(); } Assume that root is the root ArchiveFolder (initially empty). And that archive.getName() returns the name of the current file OR folder in the following fashion: file.txt or folder1/file2.txt or folder4/folder2/ (this is a empty folder) etc. So basically the relative path from the root of the ZIP archive. Please read through the comments in the above code to familiarize yourself with it. 
    Also assume that the addFolder method in an ArchiveFolder only adds the folder if it doesn't already exist (so there are no duplicate folders), and that it also updates the time and date of an existing folder if they are blank (i.e. it was an intermediate folder we only knew the name of, but now we know its details). The code for addFolder is (pretty self-explanatory):

        public void addFolder(ArchiveFolder folder) {
            int loc = folders.indexOf(folder); // folders is the ArrayList containing ArchiveFolder's
            if (loc == -1) {
                folders.add(folder);
            } else {
                ArchiveFolder real = folders.get(loc);
                if (real.time == null) {
                    real.setTime(folder.getTime());
                    real.setDate(folder.getDate());
                }
            }
        }

    So I can't see anything wrong with the code. It works, and after finishing, the root ArchiveFolder contains all the files in the root of the ZIP as I want it to, and it contains all the directories in the root folder as I want it to. So you'd think it works as expected, but no: the ArchiveFolder's in the root folder don't contain the data inside those 'child' folders; each is just a blank folder with no additional files and folders (while it really does contain more files/folders when viewed in WinZip). After debugging using Eclipse, the for loop does iterate through all the files (even those not included above), so this led me to believe that there is a problem with this line of the code:

        current = folder;

    What it does is update the current folder (used as an intermediate by the loop) to the newly added folder. I thought Java passed by reference, and thus all new operations and new additions in future ArchiveFile's and ArchiveFolder's would automatically be reflected, and parent ArchiveFolder's would be updated accordingly. But that does not appear to be the case? I know this is a long post and I really hope someone can help me out with this. Thanks in advance.
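
    Not part of the original post: Java passes object references by value, so the assignment current = folder keeps pointing at the locally created ArchiveFolder even when addFolder finds an equal folder already in the list and silently discards the new instance - everything added to current afterwards then lands in an object that is never part of the tree. A minimal sketch of one way around this, assuming the fields and setters shown above: have addFolder return the instance that is actually stored, and let the loop use that return value.

        // Sketch only - returns the ArchiveFolder that actually lives in this
        // folder's "folders" list, so callers can hold on to the stored object.
        public ArchiveFolder addFolder(ArchiveFolder folder) {
            int loc = folders.indexOf(folder);
            if (loc == -1) {
                folders.add(folder);   // new folder becomes part of the tree
                return folder;
            }
            ArchiveFolder real = folders.get(loc);
            if (real.time == null) {   // fill in details we only learned now
                real.setTime(folder.getTime());
                real.setDate(folder.getDate());
            }
            return real;               // the instance already stored in the tree
        }

    The population loop would then keep its cursor on the stored instance with current = current.addFolder(folder); instead of current = folder;.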

    Read the article

  • Why is Java EE 6 better than Spring ?

    - by arungupta
    Java EE 6 was released over 2 years ago and now there are 14 compliant application servers. In all my talks around the world, a question that is frequently asked is Why should I use Java EE 6 instead of Spring ? There are already several blogs covering that topic: Java EE wins over Spring by Bill Burke Why will I use Java EE instead of Spring in new Enterprise Java projects in 2012 ? by Kai Waehner (more discussion on TSS) Spring to Java EE migration (Part 1 and 2, 3 and 4 coming as well) by David Heffelfinger Spring to Java EE - A Migration Experience by Lincoln Baxter Migrating Spring to Java EE 6 by Bert Ertman and Paul Bakker at NLJUG Moving from Spring to Java EE 6 - The Age of Frameworks is Over at TSS Java EE vs Spring Shootout by Rohit Kelapure and Reza Rehman at JavaOne 2011 Java EE 6 and the Ewoks by Murat Yener Definite excuse to avoid Spring forever - Bert Ertman and Arun Gupta I will try to share my perspective in this blog. First of all, I'd like to start with a note: Thank you Spring framework for filling the interim gap and providing functionality that is now included in the mainstream Java EE 6 application servers. The Java EE platform has evolved over the years learning from frameworks like Spring and provides all the functionality to build an enterprise application. Thank you very much Spring framework! While Spring was revolutionary in its time and is still very popular and quite main stream in the same way Struts was circa 2003, it really is last generation's framework - some people are even calling it legacy. However my theory is "code is king". So my approach is to build/take a simple Hello World CRUD application in Java EE 6 and Spring and compare the deployable artifacts. I started looking at the official tutorial Developing a Spring Framework MVC Application Step-by-Step but it is using the older version 2.5. I wasn't able to find any updated version in the current 3.1 release. Next, I downloaded Spring Tool Suite and thought that would provide some template samples to get started. A least a quick search did not show any handy tutorials - either video or text-based. So I searched and found a link to their SVN repository at src.springframework.org/svn/spring-samples/. I tried the "mvc-basic" sample and the generated WAR file was 4.43 MB. While it was named a "basic" sample it seemed to come with 19 different libraries bundled but it was what I could find: ./WEB-INF/lib/aopalliance-1.0.jar./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar./WEB-INF/lib/joda-time-1.6.2.jar./WEB-INF/lib/joda-time-jsptags-1.0.2.jar./WEB-INF/lib/jstl-1.2.jar./WEB-INF/lib/log4j-1.2.16.jar./WEB-INF/lib/slf4j-api-1.6.1.jar./WEB-INF/lib/slf4j-log4j12-1.6.1.jar./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar./WEB-INF/lib/validation-api-1.0.0.GA.jar And it is not even using any database! The app deployed fine on GlassFish 3.1.2 but the "@Controller Example" link did not work as it was missing the context root. With a bit of tweaking I could deploy the application and assume that the account got created because no error was displayed in the browser or server log. 
Next I generated the WAR for "mvc-ajax" and the 5.1 MB WAR had 20 JARs (1 removed, 2 added): ./WEB-INF/lib/aopalliance-1.0.jar./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar./WEB-INF/lib/jackson-core-asl-1.6.4.jar./WEB-INF/lib/jackson-mapper-asl-1.6.4.jar./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar./WEB-INF/lib/joda-time-1.6.2.jar./WEB-INF/lib/jstl-1.2.jar./WEB-INF/lib/log4j-1.2.16.jar./WEB-INF/lib/slf4j-api-1.6.1.jar./WEB-INF/lib/slf4j-log4j12-1.6.1.jar./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar./WEB-INF/lib/validation-api-1.0.0.GA.jar 2 more JARs for just doing Ajax. Anyway, deploying this application gave the following error: Caused by: java.lang.NoSuchMethodError: org.codehaus.jackson.map.SerializationConfig.<init>(Lorg/codehaus/jackson/map/ClassIntrospector;Lorg/codehaus/jackson/map/AnnotationIntrospector;Lorg/codehaus/jackson/map/introspect/VisibilityChecker;Lorg/codehaus/jackson/map/jsontype/SubtypeResolver;)V    at org.springframework.samples.mvc.ajax.json.ConversionServiceAwareObjectMapper.<init>(ConversionServiceAwareObjectMapper.java:20)    at org.springframework.samples.mvc.ajax.json.JacksonConversionServiceConfigurer.postProcessAfterInitialization(JacksonConversionServiceConfigurer.java:40)    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:407) Seems like some incorrect repos in the "pom.xml". Next one is "mvc-showcase" and the 6.49 MB WAR now has 28 JARs as shown below: ./WEB-INF/lib/aopalliance-1.0.jar./WEB-INF/lib/aspectjrt-1.6.10.jar./WEB-INF/lib/commons-fileupload-1.2.2.jar./WEB-INF/lib/commons-io-2.0.1.jar./WEB-INF/lib/el-api-2.2.jar./WEB-INF/lib/hibernate-validator-4.1.0.Final.jar./WEB-INF/lib/jackson-core-asl-1.8.1.jar./WEB-INF/lib/jackson-mapper-asl-1.8.1.jar./WEB-INF/lib/javax.inject-1.jar./WEB-INF/lib/jcl-over-slf4j-1.6.1.jar./WEB-INF/lib/jdom-1.0.jar./WEB-INF/lib/joda-time-1.6.2.jar./WEB-INF/lib/jstl-api-1.2.jar./WEB-INF/lib/jstl-impl-1.2.jar./WEB-INF/lib/log4j-1.2.16.jar./WEB-INF/lib/rome-1.0.0.jar./WEB-INF/lib/slf4j-api-1.6.1.jar./WEB-INF/lib/slf4j-log4j12-1.6.1.jar./WEB-INF/lib/spring-aop-3.1.0.RELEASE.jar./WEB-INF/lib/spring-asm-3.1.0.RELEASE.jar./WEB-INF/lib/spring-beans-3.1.0.RELEASE.jar./WEB-INF/lib/spring-context-3.1.0.RELEASE.jar./WEB-INF/lib/spring-context-support-3.1.0.RELEASE.jar./WEB-INF/lib/spring-core-3.1.0.RELEASE.jar./WEB-INF/lib/spring-expression-3.1.0.RELEASE.jar./WEB-INF/lib/spring-web-3.1.0.RELEASE.jar./WEB-INF/lib/spring-webmvc-3.1.0.RELEASE.jar./WEB-INF/lib/validation-api-1.0.0.GA.jar The app at least deployed and showed results this time. But still no database! 
Next I tried building "jpetstore" and got the error: [ERROR] Failed to execute goal on project org.springframework.samples.jpetstore:Could not resolve dependencies for project org.springframework.samples:org.springframework.samples.jpetstore:war:1.0.0-SNAPSHOT: Failed to collect dependencies for [commons-fileupload:commons-fileupload:jar:1.2.1 (compile), org.apache.struts:com.springsource.org.apache.struts:jar:1.2.9 (compile), javax.xml.rpc:com.springsource.javax.xml.rpc:jar:1.1.0 (compile), org.apache.commons:com.springsource.org.apache.commons.dbcp:jar:1.2.2.osgi (compile), commons-io:commons-io:jar:1.3.2 (compile), hsqldb:hsqldb:jar:1.8.0.7 (compile), org.apache.tiles:tiles-core:jar:2.2.0 (compile), org.apache.tiles:tiles-jsp:jar:2.2.0 (compile), org.tuckey:urlrewritefilter:jar:3.1.0 (compile), org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-orm:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework:spring-context-support:jar:3.0.0.BUILD-SNAPSHOT (compile), org.springframework.webflow:spring-js:jar:2.0.7.RELEASE (compile), org.apache.ibatis:com.springsource.com.ibatis:jar:2.3.4.726 (runtime), com.caucho:com.springsource.com.caucho:jar:3.2.1 (compile), org.apache.axis:com.springsource.org.apache.axis:jar:1.4.0 (compile), javax.wsdl:com.springsource.javax.wsdl:jar:1.6.1 (compile), javax.servlet:jstl:jar:1.2 (runtime), org.aspectj:aspectjweaver:jar:1.6.5 (compile), javax.servlet:servlet-api:jar:2.5 (provided), javax.servlet.jsp:jsp-api:jar:2.1 (provided), junit:junit:jar:4.6 (test)]: Failed to read artifact descriptor for org.springframework:spring-webmvc:jar:3.0.0.BUILD-SNAPSHOT: Could not transfer artifact org.springframework:spring-webmvc:pom:3.0.0.BUILD-SNAPSHOT from/to JBoss repository (http://repository.jboss.com/maven2): Access denied to: http://repository.jboss.com/maven2/org/springframework/spring-webmvc/3.0.0.BUILD-SNAPSHOT/spring-webmvc-3.0.0.BUILD-SNAPSHOT.pom It appears the sample is broken - maybe I was pulling from the wrong repository - would be great if someone were to point me at a good target to use here. With a 50% hit on samples in this repository, I started searching through numerous blogs, most of which have either outdated information (using XML-heavy Spring 2.5), some piece of configuration (which is a typical "feature" of Spring) is missing, or too much complexity in the sample. I finally found this blog that worked like a charm. This blog creates a trivial Spring MVC 3 application using Hibernate and MySQL. This application performs CRUD operations on a single table in a database using typical Spring technologies.  I downloaded the sample code from the blog, deployed it on GlassFish 3.1.2 and could CRUD the "person" entity. The source code for this application can be downloaded here. More details on the application statistics below. And then I built a similar CRUD application in Java EE 6 using NetBeans wizards in a couple of minutes. The source code for the application can be downloaded here and the WAR here. The Spring Source Tool Suite may also offer similar wizard-driven capabilities but this blog focus primarily on comparing the runtimes. The lack of STS tutorials was slightly disappointing as well. NetBeans however has tons of text-based and video tutorials and tons of material even by the community. One more bit on the download size of tools bundle ... 
NetBeans 7.1.1 "All" is 211 MB (which includes GlassFish and Tomcat); Spring Tool Suite 2.9.0 is 347 MB (~65% bigger). This blog is not about the tooling comparison, though, so back to the Java EE 6 version of the application ...

In order to run the Java EE version on GlassFish, copy the MySQL Connector/J to the glassfish3/glassfish/domains/domain1/lib/ext directory and create a JDBC connection pool and JDBC resource as:

./bin/asadmin create-jdbc-connection-pool --datasourceclassname \
    com.mysql.jdbc.jdbc2.optional.MysqlDataSource --restype \
    javax.sql.DataSource --property \
    portNumber=3306:user=mysql:password=mysql:databaseName=mydatabase \
    myConnectionPool

./bin/asadmin create-jdbc-resource --connectionpoolid myConnectionPool jdbc/myDataSource

(A minimal sketch of how the application can consume this resource appears after the comparison tables below.)

I generated WARs for the two projects and the table below highlights some differences between them:

                            Java EE 6                 Spring
WAR File Size               0.021030 MB               10.87 MB (~516x)
Number of files             20                        53 (> 2.5x)
Bundled libraries           0                         36
Total size of libraries     0                         12.1 MB
XML files                   3                         5
LoC in XML files            50 (11 + 15 + 24)         129 (27 + 46 + 16 + 11 + 19) (~2.5x)
Total .properties files     1 (Bundle.properties)     2 (spring.properties, log4j.properties)
Cold Deploy                 5,339 ms                  11,724 ms
Second Deploy               481 ms                    6,261 ms
Third Deploy                528 ms                    5,484 ms
Fourth Deploy               484 ms                    5,576 ms
Runtime memory              ~73 MB                    ~101 MB

Some points worth highlighting from the table ...

516x WAR file, 10x deployment time - With 12.1 MB of libraries (for a very basic application) bundled in your application, the WAR file size and the deployment time will naturally go up. The WAR file for the Spring-based application is 516x bigger, and the deployment time is roughly double for the first deployment and ~10x for subsequent deployments. The Java EE 6 application is fully portable and will run on any Java EE 6 compliant application server.

36 libraries in the WAR - There are 14 Java EE 6 compliant application servers today. Each of those servers provides all the functionality, such as transactions, dependency injection, security, and persistence, typically required of an enterprise or web application. There is no need to bundle 36 libraries worth 12.1 MB for a trivial CRUD application; the compliant application servers provide all of that functionality baked in. You could instead deploy these libraries in the container, but then you don't get the "portability" offered by Spring. Does your typical Spring deployment actually do that?

3x LoC in XML - The number of XML files is about 1.6x and the LoC is ~2.5x. So much XML seems circa 2003, when the Java language had no annotations. The XML files can be further reduced, e.g. faces-config.xml can be dropped by not providing i18n, but I just wanted to compare the stock applications.

Memory usage - Both applications were deployed on a default GlassFish 3.1.2 installation, and any additional memory consumed as part of deployment/access was attributed to the application. This is by no means scientific but at least provides an initial ballpark. This area definitely needs more investigation.

Another table compares typical Java EE 6 compliant application servers with the custom stack created for a Spring application ...

                        Java EE 6                                      Spring
Web Container           Built in                                       53 MB (tcServer 2.6.3 Developer Edition)
Security                Built in                                       12 MB (Spring Security 3.1.0)
Persistence             Built in                                       6.3 MB (Hibernate 4.1.0, required)
Dependency Injection    Built in                                       5.3 MB (Framework)
Web Services            Built in                                       796 KB (Spring WS 2.0.4)
Messaging               Built in                                       3.4 MB (RabbitMQ Server 2.7.1) + 936 KB (Java client)
OSGi                    Built in                                       1.3 MB (Spring OSGi 1.2.1)
Total                   GlassFish and WebLogic (starting at 33 MB)     83.3 MB
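To make the "functionality baked into the server" point concrete, here is a minimal, hypothetical sketch (not taken from the sample project) of how a Java EE 6 component can consume the jdbc/myDataSource resource created with asadmin above, with no driver or pooling library inside the WAR. The class name and the "person" table query are assumptions made only for illustration:

// PersonCounter.java - hypothetical illustration, not part of the sample app
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
public class PersonCounter {

    // The container injects the JDBC resource created with asadmin;
    // the MySQL driver and the connection pool live in GlassFish, not in the archive.
    @Resource(lookup = "jdbc/myDataSource")
    private DataSource dataSource;

    public int countPersons() throws SQLException {
        Connection con = dataSource.getConnection();
        try {
            // Assumes the sample's "person" table exists in "mydatabase".
            ResultSet rs = con.createStatement()
                    .executeQuery("SELECT COUNT(*) FROM person");
            rs.next();
            return rs.getInt(1);
        } finally {
            con.close();
        }
    }
}

The same lookup works from a servlet or a CDI bean; the point is simply that the data-source plumbing is provided and managed by the container rather than being bundled and patched inside the application.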
There are differentiating factors on both stacks, but most of the functionality, such as security, persistence, and dependency injection, is baked into a Java EE 6 compliant application server, whereas it needs to be individually managed and patched for a Spring application. This very quickly leads to a "stack explosion". The Java EE 6 servers are tested extensively on a variety of platforms in different combinations, whereas a Spring application developer is responsible for testing with different JDKs, operating systems, versions, patches, etc. Oracle has both the leading open source lightweight server in GlassFish and the leading enterprise Java server in WebLogic Server; both are Java EE 6 and both have lightweight deployment options. The Web Container offered as part of a Java EE 6 application server not only deploys your enterprise Java applications but also provides the operational management, diagnostics, and mission-critical capabilities required by your applications. The Java EE 6 platform also introduced the Web Profile, a subset of the specifications from the full platform. It is targeted at developers of modern web applications, offering a reasonably complete stack composed of standard APIs that is capable, out of the box, of addressing the needs of a large class of web applications. As your applications grow, the stack can grow to the full Java EE 6 platform. The GlassFish Server Web Profile, starting at 33 MB (smaller than just the non-standard tcServer), provides most of the functionality typically required by a web application. WebLogic provides battle-tested functionality for high-throughput, low-latency, enterprise-grade web applications. No individual managing or patching; it is all tested and commercially supported for you! Note that VMware does have a server, tcServer, but it is non-standard and not certified even to the level of the standard Web Profile most customers expect these days. Customers who choose it risk proprietary lock-in, since VMware does not seem to want to formally certify with either the Java EE 6 Enterprise Platform or the Java EE 6 Web Profile; of course it would be great if they were to join the community and help their customers reduce the risk of deploying on VMware software.

Some more points to help you decide between Java EE 6 and Spring ...

Freedom to choose container - There are 14 Java EE 6 compliant application servers today, with a variety of open source and commercial offerings. A Java EE 6 application can be deployed on any of those containers. So if you deployed your application on GlassFish today and would like to scale up with your demands, you can deploy the same application to WebLogic. And because of the portability of a Java EE 6 application, you can even take it to a different vendor altogether. Spring requires a runtime, which could be any of these app servers as well, but why use Spring when all the required functionality is already baked into the application server itself? Spring also has a different definition of portability: the claim is that you bundle all the libraries in the WAR file and move it to any application server. But we saw earlier how bloated that archive can be. The equivalent features in Spring runtime offerings (mainly tcServer) are not all open source, not as mature, and often require manual assembly.
Vendor choice - The Java EE 6 platform is created through the Java Community Process, where all the big players like Oracle, IBM, RedHat, and Apache are contributing to make the platform successful. Each application server provides the basic Java EE 6 platform compliance and has its own competitive offerings on top. This allows you to choose an application server for deploying your Java EE 6 applications; if you are not happy with the support or features of one vendor, you can move your application to a different vendor because of the portability promise offered by the platform. Spring is a set of products from a single company: one price book, one support organization, one sustaining organization, one sales organization, etc. If any of those cause a customer headache, where do you go? Java EE, backed by multiple vendors, is a safer bet for those who are risk averse.

Production support - With Spring, you typically need to get support from two vendors: VMware and the container provider. With Java EE 6, all of this is typically provided by one vendor. For example, Oracle offers commercial support from systems, operating systems, and JDK up through the application server and the applications on top of it. VMware certainly offers complete production support, but do you really want to put all your eggs in one basket? Do you really use tcServer? ;-)

Maintainability - With Spring, you are likely building your own distribution with multiple JAR files: integrating, patching, versioning, etc. of all those components. Spring's claim is that multiple JAR files let you go à la carte and pick the latest versions of different components. But who is responsible for testing whether all those versions work together? Yep, you got it: YOU! If something does not work, who patches and maintains the JARs? Of course, you! Commercial support for such a configuration? You're on your own! The Java EE application servers manage all of this for you and provide a well-tested and commercially supported bundle.

While it is always good to realize that there is something new and improved that updates and replaces older frameworks like Spring, the good news is that not only does a Java EE 6 container offer what is described here, most will also let you deploy and run your Spring applications on them while you go through an upgrade to a more modern architecture. End result: you get the best of both worlds - keeping your legacy investment while moving to the more agile, lightweight world of Java EE 6.

A message to the Spring lovers ... The complexity in J2EE 1.2, 1.3, and 1.4 led to the genesis of Spring, but that was in 2004. This is 2012 and the name has changed to "Java EE 6" :-) There are tons of improvements in the Java EE platform to make it easy to use and powerful. Some examples (a minimal sketch follows after this list):

- Adding @Stateless on a POJO makes it an EJB
- EJBs can be packaged in a WAR with no special packaging or deployment descriptors
- "web.xml" and "faces-config.xml" are optional in most of the common cases
- Typesafe dependency injection is now part of the Java EE platform
- Adding @Path to a POJO allows you to publish it as a RESTful resource
- EJBs can be used as backing beans for Facelets-driven JSF pages, providing full MVC
- Java EE 6 WARs are known to be kilobytes in size and deployed in milliseconds
- Tons of other simplifications in the platform and application servers

So if you moved away from J2EE to Spring many years ago and have not looked at Java EE 6 (which has been out since Dec 2009), you should definitely try it out.
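As a minimal sketch of what that list means in code (the class names and URL path here are hypothetical, and this assumes a plain Java EE 6 WAR with no web.xml), two annotated POJOs are enough to expose a pooled, transactional EJB as a RESTful resource:

// RestActivator.java - hypothetical; activates JAX-RS without any web.xml entry
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("resources")
public class RestActivator extends Application {
}

// GreetingResource.java - hypothetical; @Stateless makes the POJO an EJB,
// @Path publishes it as a RESTful resource
import javax.ejb.Stateless;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Stateless
@Path("greetings")
public class GreetingResource {

    @GET
    @Produces("text/plain")
    public String greet() {
        return "Hello from Java EE 6";
    }
}

Packaged in a WAR of a few kilobytes and deployed to any Java EE 6 compliant server, a GET request to /<context>/resources/greetings returns the string; the container supplies pooling, transactions, and injection with no bundled JARs and no deployment descriptors.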
Just be at least aware of what other alternatives are available instead of restricting yourself to one stack. Here are some workshops and screencasts worth trying:

- screencast #37 shows how to build an end-to-end application using NetBeans
- screencast #36 builds the same application using Eclipse
- javaee-lab-feb2012.pdf is a 3-4 hours self-paced hands-on workshop that guides you to build a comprehensive Java EE 6 application using NetBeans

Each city generally has a "spring cleanup" program every year. It allows you to clean up the mess from your house. For your software projects, you don't need to wait for an annual event, just get started and reduce the technical debt now! Move away from your legacy Spring-based applications to a lighter and more modern approach of building enterprise Java applications using Java EE 6. Watch this beautiful presentation that explains how to migrate from Spring -> Java EE 6:

List of files in the Java EE 6 project:

./index.xhtml
./META-INF
./person
./person/Create.xhtml
./person/Edit.xhtml
./person/List.xhtml
./person/View.xhtml
./resources
./resources/css
./resources/css/jsfcrud.css
./template.xhtml
./WEB-INF
./WEB-INF/classes
./WEB-INF/classes/Bundle.properties
./WEB-INF/classes/META-INF
./WEB-INF/classes/META-INF/persistence.xml
./WEB-INF/classes/org
./WEB-INF/classes/org/javaee
./WEB-INF/classes/org/javaee/javaeemysql
./WEB-INF/classes/org/javaee/javaeemysql/AbstractFacade.class
./WEB-INF/classes/org/javaee/javaeemysql/Person.class
./WEB-INF/classes/org/javaee/javaeemysql/Person_.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController$1.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController$PersonControllerConverter.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonController.class
./WEB-INF/classes/org/javaee/javaeemysql/PersonFacade.class
./WEB-INF/classes/org/javaee/javaeemysql/util
./WEB-INF/classes/org/javaee/javaeemysql/util/JsfUtil.class
./WEB-INF/classes/org/javaee/javaeemysql/util/PaginationHelper.class
./WEB-INF/faces-config.xml
./WEB-INF/web.xml

List of files in the Spring 3.x project:

./META-INF
./META-INF/MANIFEST.MF
./WEB-INF
./WEB-INF/applicationContext.xml
./WEB-INF/classes
./WEB-INF/classes/log4j.properties
./WEB-INF/classes/org
./WEB-INF/classes/org/krams
./WEB-INF/classes/org/krams/tutorial
./WEB-INF/classes/org/krams/tutorial/controller
./WEB-INF/classes/org/krams/tutorial/controller/MainController.class
./WEB-INF/classes/org/krams/tutorial/domain
./WEB-INF/classes/org/krams/tutorial/domain/Person.class
./WEB-INF/classes/org/krams/tutorial/service
./WEB-INF/classes/org/krams/tutorial/service/PersonService.class
./WEB-INF/hibernate-context.xml
./WEB-INF/hibernate.cfg.xml
./WEB-INF/jsp
./WEB-INF/jsp/addedpage.jsp
./WEB-INF/jsp/addpage.jsp
./WEB-INF/jsp/deletedpage.jsp
./WEB-INF/jsp/editedpage.jsp
./WEB-INF/jsp/editpage.jsp
./WEB-INF/jsp/personspage.jsp
./WEB-INF/lib
./WEB-INF/lib/antlr-2.7.6.jar
./WEB-INF/lib/aopalliance-1.0.jar
./WEB-INF/lib/c3p0-0.9.1.2.jar
./WEB-INF/lib/cglib-nodep-2.2.jar
./WEB-INF/lib/commons-beanutils-1.8.3.jar
./WEB-INF/lib/commons-collections-3.2.1.jar
./WEB-INF/lib/commons-digester-2.1.jar
./WEB-INF/lib/commons-logging-1.1.1.jar
./WEB-INF/lib/dom4j-1.6.1.jar
./WEB-INF/lib/ejb3-persistence-1.0.2.GA.jar
./WEB-INF/lib/hibernate-annotations-3.4.0.GA.jar
./WEB-INF/lib/hibernate-commons-annotations-3.1.0.GA.jar
./WEB-INF/lib/hibernate-core-3.3.2.GA.jar
./WEB-INF/lib/javassist-3.7.ga.jar
./WEB-INF/lib/jstl-1.1.2.jar
./WEB-INF/lib/jta-1.1.jar
./WEB-INF/lib/junit-4.8.1.jar
./WEB-INF/lib/log4j-1.2.14.jar
./WEB-INF/lib/mysql-connector-java-5.1.14.jar
./WEB-INF/lib/persistence-api-1.0.jar
./WEB-INF/lib/slf4j-api-1.6.1.jar
./WEB-INF/lib/slf4j-log4j12-1.6.1.jar
./WEB-INF/lib/spring-aop-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-asm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-beans-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-context-support-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-core-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-expression-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-jdbc-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-orm-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-tx-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-web-3.0.5.RELEASE.jar
./WEB-INF/lib/spring-webmvc-3.0.5.RELEASE.jar
./WEB-INF/lib/standard-1.1.2.jar
./WEB-INF/lib/xml-apis-1.0.b2.jar
./WEB-INF/spring-servlet.xml
./WEB-INF/spring.properties
./WEB-INF/web.xml

So, are you excited about Java EE 6? Want to get started now? Here are some resources:

- Java EE 6 SDK (including runtime, samples, tutorials etc)
- GlassFish Server Open Source Edition 3.1.2 (Community)
- Oracle GlassFish Server 3.1.2 (Commercial)
- Java EE 6 using WebLogic 12c and NetBeans (Video)
- Java EE 6 with NetBeans and GlassFish (Video)
- Java EE with Eclipse and GlassFish (Video)

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from it and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'.

I've been in the O/R mapper business for a long time now (almost 10 years, full time), and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's no longer a matter of 'the slowness of the application is caused by the O/R mapper'. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed by your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

Performance tuning basics and rules
Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing the analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
If I look solely at the Linq query and the code consuming the 10-row resultset, and then at the time it takes to complete the whole procedure, it will appear slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. For example, if a task takes 2,000 ms in total and one part contributes 100 ms (5%), optimizing that part away entirely still leaves a 1,900 ms task; the remaining 95% is where any real gain has to come from. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem found -> no solution.

One warning up front: hunting for performance will always involve making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get and you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor.

Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate
The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

Analyze
Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, web service, Windows, etc.), a part which controls the interface and business logic, the O/R mapper part, and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a web page with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

Start with the code you wrote which starts the task, analyze the code, and track the path it follows through your application. Check what the code does along the way and verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet; we're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

Reviewing the code you wrote is a good tool for getting a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling
.NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code; they thus reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling
.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database contribute how much to the total time taken for task T, we need different tools.

There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql / LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same way as LLBLGen Prof. For RDBMS profilers, check whether the RDBMS vendor provides one. For example, for SQL Server the profiler is shipped with SQL Server, and for Oracle it's built into the RDBMS; there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time.

Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other activity behind the scenes as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret
After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually participate, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important.
The parts which are irrelevant (i.e. those which don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are the ones to look at. We first have to look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on how long it takes to fetch the data from the database. If just one query was executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

Interpreting means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset from a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time in the .NET code: the RDBMS part contributes the most to the total time taken, and the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected that this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that when someone else looks at the application in the future and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

Fix
After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (trading memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you redo the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they merely moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, both .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is about dealing with compromises and making choices: to use one feature over another, to accept a higher memory footprint, to step away from the strict-OO path and execute queries directly against the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers
Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means that for the single list you get not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers together with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

Prefetch paths using many path nodes, sorting, or limiting: an eager-loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client into the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will in many cases join subtype and supertype tables, which can lead to a lot of performance problems if the hierarchy has many types. If you hit this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, where all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the requested page). When using paging in a web application, be sure to switch server-side paging on in the datasource control used; paging on the grid alone is not enough, as that can lead to fetching a lot of data which is then loaded into the grid and paged there.
Keep in mind that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform the paging on the data reader, so it won't fetch all rows even if the query suggests it does.

Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data is consumed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in memory to calculate a value.

Using .ToList() constructs inside Linq queries: you might use .ToList() somewhere in a Linq query, which makes the query run partially in memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in memory, as the Linq query is defined on an IEnumerable<T>, not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query will run.

Fetching all entities to delete into memory first: to delete a set of entities, it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity has actually been removed from the DB and can thus be removed from the cache.

Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might, however, be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware that there's an RDBMS somewhere.

Conclusion
Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built.
I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
This year I embarked on a journey to migrate a group of ASP.NET web applications that had been developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0, so that they instead use the Oracle WebCenter WSRP Producer for .NET and integrate with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the approaches that led to those lessons. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

For the Curious
From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

A Brief (and Edited) History of the WSRP for .NET Technologies (as Relevant to this Post)
Note: This section is for those who are curious about why the migration path is not as simple as with many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The architecture that was to be migrated and upgraded achieved its initial integration between .NET and J2EE over the WSRP protocol through the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed applications written in ASP.NET and deployed on Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities.

Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time, with features driven by users of the product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

*****************************************
- Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
- Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
- Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
- The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.
*****************************************

The advantage of creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which acts as the root.

Differences between .NET Accelerator and WSRP Producer
Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer: a WSRP request is sent from the Consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, and then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

Highly Recommended

Enabling Logging
Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now? Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs of your .NET application by adding log4net.Config.XmlConfigurator.Configure();

IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

Things You Must Do

Merge Web.Config
Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention :-) Because the existing .NET application will become a sub-application of the WSRP Producer, you will want to merge the required settings from the existing Web.Config into the one in the WSRP Producer.

Use the WSRP Producer Master Page
The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScript to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master".

You then replace:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>

with:

<asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

and replace:

</HEAD>
<body>
<form id="theForm" method="post" runat="server">

with:

</asp:Content>
<asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

and finally replace:

</form>
</body>
</HTML>

with:

</asp:Content>

In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

It Happened to Me, It Might Happen to You…Or Not

Watch for Use of Session or Request in OnInit
If the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) are fully available. I find doing this at the start of the Page_Load method to be the simplest solution.

Case Sensitivity
ASP.NET markup is generally forgiving about case, but the Java side is case sensitive. This means it is possible to have many variations of SRC= and src=, or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
If this is not practical, then make duplicates of any rules where an issue is occurring due to upper- or mixed-case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:

<RewriterRule>
  <LookFor>(href=\"([^\"]+)</LookFor>
  <ChangeToAbsolute>true</ChangeToAbsolute>
  <ApplyTo>.axd,.css</ApplyTo>
  <MakeResource>true</MakeResource>
</RewriterRule>

may need to be duplicated as:

<RewriterRule>
  <LookFor>(HREF=\"([^\"]+)</LookFor>
  <ChangeToAbsolute>true</ChangeToAbsolute>
  <ApplyTo>.axd,.css</ApplyTo>
  <MakeResource>true</MakeResource>
</RewriterRule>

While it is possible to write a regular expression that handles mixed-case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

Is it Still Relative?
Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

I Can See You But I Can't Find You
This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:

Error 155: The name 'form1' does not exist in the current context - C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs (line 51, column 52), project legacywebsite.module

Much hair-pulling research later it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced, as controls become embedded in other controls and require a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

becomes this:

ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. Reaching the control once a Master Page has been added requires an additional method that recurses through the control collections within the form and finds the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

public static Control FindControlRecursive(Control Root, string Id)
{
    if (Root.ID == Id)
        return Root;
    foreach (Control Ctl in Root.Controls)
    {
        Control FoundCtl = FindControlRecursive(Ctl, Id);
        if (FoundCtl != null)
            return FoundCtl;
    }
    return null;
}

Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl is all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value-extraction code block from this:

Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg"

to this:

Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg"

The Master That Won't Step Aside
In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient route I took was to update it to take the place of the WSRP Master Page. The changes are highlighted below:

…
<asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
</head>
<body leftMargin="0" topMargin="0">
<form id="TheForm" runat="server">
<input type="hidden" name="key" id="key" value="" />
<input type="hidden" name="formactionurl" id="formactionurl" value="" />
<input type="hidden" name="handle" id="handle" value="" />
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
</asp:ScriptManager>

This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page.

Moving On
In enterprise portals, even after you get everything working, the work is not finished. Next you need to get it to where everyone will work with it.

Migration Planning
Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether this cutover should involve a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, in which case the IIS configuration is where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

Location, Location, Location
Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will also work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

Manually Creating a WSRP Producer Site
Instructions (NOTE: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):

1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
2. Bring up a command window as an administrator.
3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev
7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev

Tests:

1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
3. Register the producer in WLP and test the portlet.

Changing the Port used by WSRP Producer
The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

Do You Need to Migrate?
Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.

    Read the article

  • MacBook Pro Late 2009 SATA Resets, Slowness

    - by A Student at a University
    My MacBook Pro runs slower the longer it's on. I am getting kernel warnings. The resets correlate with AC power connects and disconnects. I don't know if the warnings do. (How do I tell?) Are these bus CRC errors? Or something else? Can this damage the drive or corrupt data? What is it seeing that motivates these? 02:37:16 :[ 0.791992] ahci 0000:00:0b.0: PCI INT A -> Link[LSI0] -> GSI 20 (level, low) -> IRQ 20 02:37:16 :[ 0.792053] ahci 0000:00:0b.0: controller can't do PMP, turning off CAP_PMP 02:37:16 :[ 0.792104] ahci 0000:00:0b.0: AHCI 0001.0200 32 slots 6 ports 1.5 Gbps 0x3 impl IDE mode 02:37:16 :[ 0.792107] ahci 0000:00:0b.0: flags: 64bit ncq sntf pm led pio slum part boh 02:37:16 :[ 0.813473] scsi0 : ahci 02:37:16 :[ 0.823340] scsi1 : ahci 02:37:16 :[ 0.848164] ata1: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484100 irq 43 02:37:16 :[ 0.848166] ata2: SATA max UDMA/133 abar m8192@0xe7484000 port 0xe7484180 irq 43 02:37:16 :[ 1.190132] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 1.190153] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 1.213568] ata1.00: ATA-8: OCZ-VERTEX2, 1.23, max UDMA/133 02:37:16 :[ 1.213572] ata1.00: 195371568 sectors, multi 1: LBA48 NCQ (depth 31/32) 02:37:16 :[ 1.227293] ata2.00: ATA-8: ST9500420ASG, 0002SDM1, max UDMA/133 02:37:16 :[ 1.227297] ata2.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32) 02:37:16 :[ 1.229570] ata2.00: configured for UDMA/133 02:37:16 :[ 1.240133] ata2: hard resetting link 02:37:16 :[ 1.260738] ata1.00: configured for UDMA/133 02:37:16 :[ 1.280122] ata1: hard resetting link 02:37:16 :[ 1.470125] usb 2-5: new high speed USB device using ehci_hcd and address 3 02:37:16 :[ 1.550165] firewire_core: created device fw0: GUID 58b035fffea99f5c, S800 02:37:16 :[ 1.631306] Initializing USB Mass Storage driver... 02:37:16 :[ 1.631392] scsi6 : usb-storage 2-5:1.0 02:37:16 :[ 1.631454] usbcore: registered new interface driver usb-storage 02:37:16 :[ 1.631455] USB Mass Storage support registered. 
02:37:16 :[ 1.960128] usb 4-1: new full speed USB device using ohci_hcd and address 2 02:37:16 :[ 1.990101] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 1.994215] ata2.00: configured for UDMA/133 02:37:16 :[ 1.994220] ata2: EH complete 02:37:16 :[ 2.030097] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:16 :[ 2.090773] ata1.00: configured for UDMA/133 02:37:16 :[ 2.090778] ata1: EH complete 02:37:16 :[ 2.090931] scsi 0:0:0:0: Direct-Access ATA OCZ-VERTEX2 1.23 PQ: 0 ANSI: 5 02:37:16 :[ 2.091045] sd 0:0:0:0: Attached scsi generic sg0 type 0 02:37:16 :[ 2.091121] sd 0:0:0:0: [sda] 195371568 512-byte logical blocks: (100 GB/93.1 GiB) 02:37:16 :[ 2.091159] scsi 1:0:0:0: Direct-Access ATA ST9500420ASG 0002 PQ: 0 ANSI: 5 02:37:16 :[ 2.091163] sd 0:0:0:0: [sda] Write Protect is off 02:37:16 :[ 2.091183] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16 :[ 2.091252] sd 1:0:0:0: Attached scsi generic sg1 type 0 02:37:16 :[ 2.091337] sda: 02:37:16 :[ 2.091446] sd 1:0:0:0: [sdb] 976773168 512-byte logical blocks: (500 GB/465 GiB) 02:37:16 :[ 2.091580] sd 1:0:0:0: [sdb] Write Protect is off 02:37:16 :[ 2.091637] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA 02:37:16 :[ 2.091756] sdb: sda1 sda2 02:37:16 :[ 2.093140] sd 0:0:0:0: [sda] Attached SCSI disk 02:37:16 :[ 2.093505] sdb1 sdb2 sdb3 02:37:16 :[ 2.093773] sd 1:0:0:0: [sdb] Attached SCSI disk 02:37:16 :[ 2.693899] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null) 02:37:16 :[ 5.483492] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro 02:37:16 :[ 7.905040] EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null) 02:37:25 :[ 19.553095] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:25 :[ 19.555266] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:25 :[ 19.641533] ata1: hard resetting link 02:37:25 :[ 19.642084] ata2: hard resetting link 02:37:26 :[ 20.392606] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26 :[ 20.392610] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:26 :[ 20.396697] ata2.00: configured for UDMA/133 02:37:26 :[ 20.396703] ata2: EH complete 02:37:26 :[ 20.451491] ata1.00: configured for UDMA/133 02:37:26 :[ 20.451498] ata1: EH complete 02:37:30 :[ 24.563725] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:37:30 :[ 24.565939] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:37:30 :[ 24.627246] ata1: hard resetting link 02:37:30 :[ 24.632250] ata2: hard resetting link 02:37:31 :[ 25.372582] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31 :[ 25.382615] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) 02:37:31 :[ 25.386782] ata2.00: configured for UDMA/133 02:37:31 :[ 25.386788] ata2: EH complete 02:37:31 :[ 25.431668] ata1.00: configured for UDMA/133 02:37:31 :[ 25.431674] ata1: EH complete 02:45:54 :[ 529.141844] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:55 :[ 529.544529] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 02:45:55 :[ 529.622561] ata1: limiting SATA link speed to 1.5 Gbps 02:45:55 :[ 529.622583] ata1: hard resetting link 02:45:55 :[ 529.622609] ata2: limiting SATA link speed to 1.5 Gbps 02:45:55 :[ 529.622624] ata2: hard resetting link 02:45:56 :[ 530.380135] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56 :[ 530.380157] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:56 :[ 530.384305] ata2.00: configured for UDMA/133 02:45:56 :[ 530.384314] ata2: EH complete 02:45:56 :[ 530.399225] ata1.00: configured for UDMA/133 02:45:56 :[ 530.399233] ata1: EH complete 02:45:58 :[ 532.395990] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:45:58 :[ 532.518270] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:45:58 :[ 532.590983] ata1: hard resetting link 02:45:58 :[ 532.591045] ata2: hard resetting link 02:45:59 :[ 533.340147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59 :[ 533.340168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:45:59 :[ 533.344416] ata2.00: configured for UDMA/133 02:45:59 :[ 533.344424] ata2: EH complete 02:45:59 :[ 533.360839] ata1.00: configured for UDMA/133 02:45:59 :[ 533.360847] ata1: EH complete 02:45:59 :[ 533.584449] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:45:59 :[ 533.586999] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:45:59 :[ 533.660132] ata2: hard resetting link 02:45:59 :[ 533.660151] ata1: hard resetting link 02:46:00 :[ 534.412536] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00 :[ 534.412562] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:00 :[ 534.416768] ata2.00: configured for UDMA/133 02:46:00 :[ 534.416777] ata2: EH complete 02:46:00 :[ 534.431396] ata1.00: configured for UDMA/133 02:46:00 :[ 534.431401] ata1: EH complete 02:46:03 :[ 537.384649] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:03 :[ 537.504214] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:03 :[ 537.586002] ata1: hard resetting link 02:46:03 :[ 537.586036] ata2: hard resetting link 02:46:04 :[ 538.330147] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04 :[ 538.330168] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:04 :[ 538.334389] ata2.00: configured for UDMA/133 02:46:04 :[ 538.334398] ata2: EH complete 02:46:04 :[ 538.343511] ata1.00: configured for UDMA/133 02:46:04 :[ 538.343519] ata1: EH complete 02:46:04 :[ 538.456413] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:04 :[ 538.459404] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 02:46:04 :[ 538.540138] ata1.00: limiting speed to UDMA/100:PIO4 02:46:04 :[ 538.540159] ata1: hard resetting link 02:46:04 :[ 538.540202] ata2.00: limiting speed to UDMA/100:PIO4 02:46:04 :[ 538.540220] ata2: hard resetting link 02:46:05 :[ 539.290054] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05 :[ 539.290041] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:05 :[ 539.294100] ata2.00: configured for UDMA/100 02:46:05 :[ 539.294106] ata2: EH complete 02:46:05 :[ 539.314125] ata1.00: configured for UDMA/100 02:46:05 :[ 539.314132] ------------[ cut here ]------------ 02:46:05 :[ 539.314140] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:05 :[ 539.314144] Hardware name: MacBookPro5,3 02:46:05 :[ 539.314146] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:05 :[ 539.314221] Pid: 202, comm: scsi_eh_0 Tainted: P 2.6.35-25-generic #44-Ubuntu 02:46:05 :[ 539.314224] Call Trace: 02:46:05 :[ 539.314233] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:05 :[ 539.314237] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:05 :[ 539.314242] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:05 :[ 539.314246] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:05 :[ 539.314256] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:05 :[ 539.314261] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:05 :[ 539.314266] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:05 :[ 539.314270] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:05 :[ 539.314275] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:05 :[ 539.314280] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:05 :[ 539.314284] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:05 :[ 539.314288] [<ffffffff8100aee0>] ? 
kernel_thread_helper+0x0/0x10 02:46:05 :[ 539.314291] ---[ end trace 76dbffc2d5d49d9b ]--- 02:46:05 :[ 539.314296] ata1: EH complete 02:46:12 :[ 547.040117] ata1: hard resetting link 02:46:13 :[ 547.390144] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:13 :[ 547.408430] ata1.00: configured for UDMA/100 02:46:13 :[ 547.408438] ------------[ cut here ]------------ 02:46:13 :[ 547.408447] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 02:46:13 :[ 547.408451] Hardware name: MacBookPro5,3 02:46:13 :[ 547.408453] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 02:46:13 :[ 547.408528] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 02:46:13 :[ 547.408531] Call Trace: 02:46:13 :[ 547.408540] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 02:46:13 :[ 547.408544] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 02:46:13 :[ 547.408549] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 02:46:13 :[ 547.408553] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 02:46:13 :[ 547.408563] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 02:46:13 :[ 547.408567] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 02:46:13 :[ 547.408572] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 02:46:13 :[ 547.408577] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 02:46:13 :[ 547.408582] [<ffffffff8107f266>] kthread+0x96/0xa0 02:46:13 :[ 547.408587] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 02:46:13 :[ 547.408591] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 02:46:13 :[ 547.408595] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 02:46:13 :[ 547.408598] ---[ end trace 76dbffc2d5d49d9c ]--- 02:46:13 :[ 547.408620] ata1: EH complete 02:46:13 :[ 547.562470] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:13 :[ 547.671380] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:13 :[ 547.738198] ata1.00: limiting speed to UDMA/33:PIO4 02:46:13 :[ 547.738218] ata1: hard resetting link 02:46:13 :[ 547.738274] ata2: hard resetting link 02:46:14 :[ 548.482561] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14 :[ 548.484083] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:14 :[ 548.486809] ata2.00: configured for UDMA/100 02:46:14 :[ 548.486818] ata2: EH complete 02:46:14 :[ 548.498998] ata1.00: configured for UDMA/33 02:46:14 :[ 548.499004] ata1: EH complete 02:46:18 :[ 552.410499] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:18 :[ 552.522521] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 02:46:18 :[ 552.529684] ata1: hard resetting link 02:46:18 :[ 552.529723] ata2: hard resetting link 02:46:19 :[ 553.280059] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19 :[ 553.280068] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:19 :[ 553.284141] ata2.00: configured for UDMA/100 02:46:19 :[ 553.284150] ata2: EH complete 02:46:19 :[ 553.301629] ata1.00: configured for UDMA/33 02:46:19 :[ 553.301637] ata1: EH complete 02:46:21 :[ 556.078830] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:21 :[ 556.180361] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:46:22 :[ 556.262612] ata1: hard resetting link 02:46:22 :[ 556.262617] ata2: hard resetting link 02:46:22 :[ 557.010050] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:22 :[ 557.010070] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:22 :[ 557.014069] ata2.00: configured for UDMA/100 02:46:22 :[ 557.014075] ata2: EH complete 02:46:22 :[ 557.023646] ata1.00: configured for UDMA/33 02:46:22 :[ 557.023654] ata1: EH complete 02:46:30 :[ 565.047438] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:46:30 :[ 565.051554] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:46:30 :[ 565.108332] ata1: hard resetting link 02:46:30 :[ 565.108389] ata2.00: limiting speed to UDMA/33:PIO4 02:46:30 :[ 565.108406] ata2: hard resetting link 02:46:31 :[ 565.850048] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:31 :[ 565.850068] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:31 :[ 565.854304] ata2.00: configured for UDMA/33 02:46:31 :[ 565.854313] ata2: EH complete 02:46:31 :[ 565.868477] ata1.00: configured for UDMA/33 02:46:31 :[ 565.868485] ata1: EH complete 02:46:35 :[ 569.265469] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 02:46:35 :[ 569.268139] EXT4-fs (dm-2): re-mounted. Opts: commit=0 02:46:35 :[ 569.340079] ata1: hard resetting link 02:46:35 :[ 569.340113] ata2: hard resetting link 02:46:35 :[ 570.092568] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:35 :[ 570.092589] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:46:35 :[ 570.096828] ata2.00: configured for UDMA/33 02:46:35 :[ 570.096837] ata2: EH complete 02:46:35 :[ 570.110727] ata1.00: configured for UDMA/33 02:46:35 :[ 570.110735] ata1: EH complete 02:47:04 :[ 598.528232] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 02:47:04 :[ 598.653973] EXT4-fs (dm-2): re-mounted. Opts: commit=600 02:47:04 :[ 598.730854] ata1: hard resetting link 02:47:04 :[ 598.730910] ata2: hard resetting link 02:47:05 :[ 599.480136] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:47:05 :[ 599.480159] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 02:47:05 :[ 599.484206] ata2.00: configured for UDMA/33 02:47:05 :[ 599.484213] ata2: EH complete 02:47:05 :[ 599.496699] ata1.00: configured for UDMA/33 02:47:05 :[ 599.496707] ata1: EH complete 04:45:59 :[ 7733.756548] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 04:45:59 :[ 7733.882748] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 04:45:59 :[ 7733.960142] ata1: hard resetting link 04:45:59 :[ 7733.960189] ata2: hard resetting link 04:46:00 :[ 7734.701926] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 04:46:00 :[ 7734.719939] ata1.00: configured for UDMA/33 04:46:00 :[ 7734.719946] ata1: EH complete 04:46:00 :[ 7734.722547] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 04:46:00 :[ 7734.726652] ata2.00: configured for UDMA/33 04:46:00 :[ 7734.726659] ata2: EH complete 04:46:02 :[ 7736.656465] ACPI: EC: GPE storm detected, transactions will use polling mode 13:38:49 :[39704.188621] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 13:38:49 :[39704.280588] EXT4-fs (dm-2): re-mounted. Opts: commit=600 13:38:49 :[39704.360819] ata1: hard resetting link 13:38:49 :[39704.360882] ata2: hard resetting link 13:38:50 :[39705.112956] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 13:38:50 :[39705.114435] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 13:38:50 :[39705.118673] ata2.00: configured for UDMA/33 13:38:50 :[39705.118682] ata2: EH complete 13:38:50 :[39705.127076] ata1.00: configured for UDMA/33 13:38:50 :[39705.127084] ata1: EH complete 13:39:49 :[39764.142463] applesmc: F1Mn: write arg fail 13:48:11 :[40267.025145] applesmc: FS! : read arg fail 13:52:53 :[40548.596735] applesmc: FS! : read arg fail 13:53:58 :[40613.972856] applesmc: FS! : read arg fail 13:54:08 :[40624.057339] applesmc: FS! : read arg fail 13:58:20 :[40875.397749] applesmc: TC0D: read data fail 14:16:56 :[41991.722054] applesmc: Th2H: read data fail 14:22:32 :[42327.991522] applesmc: light sensor data length set to 10 14:26:19 :[42554.788886] applesmc: F1Mn: write arg fail 14:32:36 :[42931.860443] applesmc: TC0F: read data fail 14:34:32 :[43048.041469] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 14:34:33 :[43048.185850] EXT4-fs (dm-2): re-mounted. 
Opts: commit=0 14:34:33 :[43048.270184] ata1: hard resetting link 14:34:33 :[43048.270224] ata2: hard resetting link 14:34:33 :[43049.030049] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:33 :[43049.030065] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:33 :[43049.034106] ata2.00: configured for UDMA/33 14:34:33 :[43049.034112] ata2: EH complete 14:34:33 :[43049.056952] ata1.00: configured for UDMA/33 14:34:33 :[43049.056959] ------------[ cut here ]------------ 14:34:33 :[43049.056968] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 14:34:33 :[43049.056971] Hardware name: MacBookPro5,3 14:34:33 :[43049.056973] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 14:34:33 :[43049.057048] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 14:34:33 :[43049.057052] Call Trace: 14:34:33 :[43049.057060] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 14:34:33 :[43049.057064] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 14:34:33 :[43049.057069] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 14:34:33 :[43049.057074] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 14:34:33 :[43049.057083] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 14:34:33 :[43049.057088] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 14:34:33 :[43049.057093] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 14:34:33 :[43049.057097] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 14:34:33 :[43049.057102] [<ffffffff8107f266>] kthread+0x96/0xa0 14:34:33 :[43049.057107] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 14:34:33 :[43049.057111] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 14:34:33 :[43049.057115] [<ffffffff8100aee0>] ? 
kernel_thread_helper+0x0/0x10 14:34:33 :[43049.057118] ---[ end trace 76dbffc2d5d49d9d ]--- 14:34:33 :[43049.057123] ata1: EH complete 14:34:41 :[43057.012698] ata1: hard resetting link 14:34:42 :[43057.362780] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:42 :[43057.381432] ata1.00: configured for UDMA/33 14:34:42 :[43057.381441] ------------[ cut here ]------------ 14:34:42 :[43057.381450] WARNING: at /build/buildd/linux-2.6.35/drivers/ata/libata-eh.c:3638 ata_eh_finish+0xdf/0xf0() 14:34:42 :[43057.381453] Hardware name: MacBookPro5,3 14:34:42 :[43057.381455] Modules linked in: michael_mic arc4 xt_multiport binfmt_misc rfcomm sco bnep l2cap parport_pc ppdev nvidia(P) ipt_REJECT xt_recent snd_hda_codec_cirrus xt_limit xt_tcpudp ipt_addrtype xt_state snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi applesmc led_class ip6table_filter lib80211_crypt_tkip snd_rawmidi snd_seq_midi_event ip6_tables input_polldev hid_apple snd_seq wl(P) snd_timer snd_seq_device snd joydev bcm5974 usbhid mbp_nvidia_bl uvcvideo btusb videodev v4l1_compat v4l2_compat_ioctl32 nf_nat_irc hid nf_conntrack_irc soundcore snd_page_alloc i2c_nforce2 coretemp lib80211 bluetooth nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack lp parport iptable_filter ip_tables x_tables usb_storage firewire_ohci firewire_core forcedeth crc_itu_t ahci libahci 14:34:42 :[43057.381530] Pid: 202, comm: scsi_eh_0 Tainted: P W 2.6.35-25-generic #44-Ubuntu 14:34:42 :[43057.381533] Call Trace: 14:34:42 :[43057.381542] [<ffffffff8106091f>] warn_slowpath_common+0x7f/0xc0 14:34:42 :[43057.381546] [<ffffffff8106097a>] warn_slowpath_null+0x1a/0x20 14:34:42 :[43057.381551] [<ffffffff813dc77f>] ata_eh_finish+0xdf/0xf0 14:34:42 :[43057.381556] [<ffffffff813e441e>] sata_pmp_error_handler+0x2e/0x40 14:34:42 :[43057.381565] [<ffffffffa00021bf>] ahci_error_handler+0x1f/0x90 [libahci] 14:34:42 :[43057.381569] [<ffffffff813dd6d2>] ata_scsi_error+0x492/0x5e0 14:34:42 :[43057.381575] [<ffffffff813b24cd>] scsi_error_handler+0x10d/0x190 14:34:42 :[43057.381579] [<ffffffff813b23c0>] ? scsi_error_handler+0x0/0x190 14:34:42 :[43057.381584] [<ffffffff8107f266>] kthread+0x96/0xa0 14:34:42 :[43057.381589] [<ffffffff8100aee4>] kernel_thread_helper+0x4/0x10 14:34:42 :[43057.381594] [<ffffffff8107f1d0>] ? kthread+0x0/0xa0 14:34:42 :[43057.381598] [<ffffffff8100aee0>] ? kernel_thread_helper+0x0/0x10 14:34:42 :[43057.381601] ---[ end trace 76dbffc2d5d49d9e ]--- 14:34:42 :[43057.381624] ata1: EH complete 14:34:42 :[43057.557887] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 14:34:42 :[43057.560517] EXT4-fs (dm-2): re-mounted. Opts: commit=600 14:34:42 :[43057.621194] ata1: hard resetting link 14:34:42 :[43057.621252] ata2: hard resetting link 14:34:43 :[43058.370141] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:43 :[43058.370162] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:43 :[43058.374407] ata2.00: configured for UDMA/33 14:34:43 :[43058.374415] ata2: EH complete 14:34:43 :[43058.381989] ata1.00: configured for UDMA/33 14:34:43 :[43058.381996] ata1: EH complete 14:34:43 :[43058.616228] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=600 14:34:43 :[43058.618931] EXT4-fs (dm-2): re-mounted. 
Opts: commit=600 14:34:43 :[43058.626687] ata1: hard resetting link 14:34:43 :[43058.626731] ata2: hard resetting link 14:34:44 :[43059.372908] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:44 :[43059.372932] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 14:34:44 :[43059.376997] ata2.00: configured for UDMA/33 14:34:44 :[43059.377003] ata2: EH complete 14:34:44 :[43059.392576] ata1.00: configured for UDMA/33 14:34:44 :[43059.392585] ata1: EH complete 15:48:19 :[47474.710860] ata1: hard resetting link 15:48:19 :[47474.710882] ata2: hard resetting link 15:48:20 :[47475.460144] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 15:48:20 :[47475.460169] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 15:48:20 :[47475.473709] ata1.00: configured for UDMA/33 15:48:20 :[47475.473717] ata1: EH complete 15:48:20 :[47475.727960] ata2.00: configured for UDMA/33 15:48:20 :[47475.727969] ata2: EH complete 16:29:39 :[49954.295017] EXT4-fs (dm-0): re-mounted. Opts: errors=remount-ro,commit=0 16:29:39 :[49954.622307] EXT4-fs (dm-2): re-mounted. Opts: commit=0 16:29:39 :[49954.710139] ata1: hard resetting link 16:29:39 :[49954.710174] ata2: hard resetting link 16:29:40 :[49955.460046] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 16:29:40 :[49955.460062] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310) 16:29:40 :[49955.464138] ata2.00: configured for UDMA/33 16:29:40 :[49955.464144] ata2: EH complete 16:29:40 :[49955.473251] ata1.00: configured for UDMA/33 16:29:40 :[49955.473258] ata1: EH complete

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the inbetween step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012  removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line  looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this:   It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of varience. You might also want to do an IISRESET to clear the Web Server. 
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test: 100,000 iterations Load factor of 20 concurrent connections IISReset before starting A short warm up run for API and MVC to make sure startup cost is mitigated Here is the batch file I used for the test: IISRESET REM make sure you add REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin REM to your path so ab.exe can be found REM Warm up ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJsonab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests but the values used here are the medians. Part of this has to do with the fact I ran the tests on my local machine - result would probably more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop which is a Dell XPS with quad core Sandibridge I7-2720QM @ 2.20ghz and a fast SSD drive on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly IIS 7 and 8 on client OSs don't throttle so the performance here should be Results Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performances of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on it, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling. 
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served. Another test - JSON Data Service Results The second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since that doesn't make a ton of sense to return data from the them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this: public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } } Here are the updated handler classes that use Person: Handler public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } } This code adds a little logic to check for a action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response. WCF REST   [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } } For WCF REST all I have to do is add a method with the Person result type.   ASP.NET MVC public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } } For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer. ASP.NET WebAPI public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. 
Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands:REM JSON RESPONSES ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:   The performance for each technology drops a little bit except for WebAPI which is up quite a bit! From this test it appears that WebAPI is actually significantly better performing returning a JSON response, rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows performance improved significantly from with JSON results from 5580 to 6530 or so which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached and runiisning the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests where returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example: these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length for the number which results in 68 and 69 bytes respectively. The same URL produces different result lengths which is what ab.exe reports. I didn't notice at first bit the same is happening when running the ASHX handler with JSON.NET result since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabalize somewhere around 5900 req/sec occasionally jumping lower. 
For these tests, this is why I did the IISRESET and warm-up run for each individual test. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but the performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again… Summary As I stated at the outset, these were informal tests to satisfy my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the highest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it is also about adequacy for the job and ease of implementation. The strengths of each technology more than make up for the minor performance differences we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Often, old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare. Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great. To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those who caught my tweets late last week regarding WebAPI's slow responses: those were with string content, which is in fact considerably slower. Luckily, where it counts - with serialized JSON and XML - WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized code? This stresses another point: don't take a single test as the final gospel, and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediately that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff… Resources: Source Code on GitHub; Apache HTTP Server Project (ab.exe is part of the binary distribution). © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, Web API.
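Since each ab.exe run in the batch file above is redirected into a text file (handler.txt, Mvc.txt, WebApi.txt, and so on), pulling the throughput numbers together for comparison can be scripted. The following console sketch is not part of the original post; it assumes the report file names used in that batch file and relies on ab's standard "Requests per second:" report line:

using System;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class AbResultSummary
{
    // Reads each ab.exe report file and prints its "Requests per second" value,
    // sorted highest first, so the technologies can be compared at a glance.
    // Sketch only - file names are assumed to match the batch file's output.
    static void Main()
    {
        string[] reports = { "handler.txt", "AspxCodeBehind.txt", "AspxMarkup.txt",
                             "Wcf.txt", "Mvc.txt", "WebApi.txt" };

        var results = reports
            .Where(File.Exists)
            .Select(file => new
            {
                File = file,
                // ab prints a line such as: "Requests per second:    5785.71 [#/sec] (mean)"
                Match = Regex.Match(File.ReadAllText(file), @"Requests per second:\s+([\d.]+)")
            })
            .Where(x => x.Match.Success)
            .Select(x => new
            {
                x.File,
                ReqPerSec = double.Parse(x.Match.Groups[1].Value, CultureInfo.InvariantCulture)
            })
            .OrderByDescending(x => x.ReqPerSec);

        foreach (var r in results)
            Console.WriteLine("{0,-22} {1,10:N1} req/sec", r.File, r.ReqPerSec);
    }
}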

    Read the article

  • The Incremental Architect´s Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That´s what automated acceptance tests are for. That´s what sprint reviews in Scrum are for. It´s no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it´s lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also, Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it´s needed like any other requirement. Or even more: without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves. Since we cannot measure Evolvability, though, we cannot simply start watching it more closely. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for a long time. That´s why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen´s level of cleanliness is easier to measure than software Evolvability. That´s why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum´s sprints of 4, 2, even 1 week are too long. Kanban´s flow of user stories is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment: Clear expectations Absolute estimates (“This will take X days to complete.”) are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are pervasive interruptions of work by peers and management. However, the smaller the scope the better our absolute estimates become. That´s because we understand better what the requirements really are and what the solution should look like.
But maybe more importantly the shorter the timespan the more we can control how we use our time. So much can happen over the course of a week and longer timespans. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That´s why I believe we can give rough absolute estimates on 3 levels: Noon Tonight Tomorrow Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you, how long it will take you to implement a user story or bug fix, you can say, “It´ll be fixed by noon.”, or you can say, “I can manage to implement it until tonight before I leave.”, or you can say, “You´ll get it by tomorrow night at latest.” Yes, I believe all else would be naive. If you´re not confident to get something done by tomorrow night (some 34h from now) you just cannot reliably commit to any timeframe. That means you should not promise anything, you should not even start working on the issue. So when estimating use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don´t do deep estimates. Don´t estimate dozens of issues; don´t think ahead (“Issue A is a Tonight, then B will be a Tomorrow, after that it´s C as a Noon, finally D is a Tonight - that´s what I´ll do this week.”). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: Yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow. Trust through reliability Our trade is lacking trust. Customers don´t trust software companies/departments much. Managers don´t trust developers much. I find that perfectly understandable in the light of what we´re trying to accomplish: delivering software in the face of uncertainty by means of material good production. Customers as well as managers still expect software development to be close to production of houses or cars. But that´s a fundamental misunderstanding. Software development ist development. It´s basically research. As software developers we´re constantly executing experiments to find out what really provides value to users. We don´t know what they need, we just have mediated hypothesises. That´s why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust. Because estimates - explicit or implicit - up to 32 hours at most can be satisfied. I´d say: reliability over scope. It´s more important to reliably deliver what was promised then to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don´t wait for some Kanban board to show you, that flow can be improved by scheduling smaller stories. You don´t need to learn that the hard way. Just start with small batch sizes of three different sizes. Fast feedback What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? 
Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It´s boring, it´s frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It´s even more important than planning new issues. Because as long as work started is not released (accepted), it´s potential waste. So before starting new work, better make sure the work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it´s a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue. Flexibility As if reliability/trust and fast feedback for less waste weren´t enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it´s neither D nor E, but G. And after G it´s D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost, because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It´s unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you´re able to change your course at any time. Premature completion Customers/management are used to predetermining budgets. They want to know exactly how much to pay for a certain amount of requirements. That´s understandable. But it does not match the nature of software development. We should know that by now. Maybe there´s somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven´t seen such a team yet. Which does not mean it´s impossible, but I think it´s nothing I can recommend to strive for. Rather I´d say: Don´t try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment.
With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to its initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases).

Whoever can only think in terms of completed requirements shuts out the chance of saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4-32 hour increment the Product Owner can run an experiment (or invite users to an experiment) to see whether a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C could then be finished at just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% of the money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need.

Pull on practices

So far, in addition to more trust, more flexibility, and less money spent, Spinning has led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect. They exert pull on all sorts of practices known to increase Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger.

With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize that without them you cannot deliver reliably even on your 32-hour promises. Spinning thus pulls developers toward adopting principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do so. Because first the Product Owner, then management, will notice an increasing difficulty in delivering value within 32 hours.
There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback by tomorrow night, the poorer Evolvability is. Don't count the "WTF!" utterances, count the "No way!" utterances.

In closing

For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure". Software needs to be changed more often, in smaller increments, with each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what's there, not what might be possible in the future. But I digress...

In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and "getting more bang for the buck".

    Read the article

  • ASP.NET MVC 2 Model Binding for a Collection

    - by nmarun
Yes, my yet another post on Model Binding (previous one is here), but this one uses features presented in MVC 2. How I got to writing this blog? Well, I'm on a project where we're doing some MVC things for a shopping cart. Let me show you what I was working with. Below are my model classes:

1: public class Product
2: {
3: public int Id { get; set; }
4: public string Name { get; set; }
5: public int Quantity { get; set; }
6: public decimal UnitPrice { get; set; }
7: }
8:
9: public class Totals
10: {
11: public decimal SubTotal { get; set; }
12: public decimal Tax { get; set; }
13: public decimal Total { get; set; }
14: }
15:
16: public class Basket
17: {
18: public List<Product> Products { get; set; }
19: public Totals Totals { get; set;}
20: }

The view looks as below:

1: <h2>Shopping Cart</h2>
2:
3: <% using(Html.BeginForm()) { %>
4:
5: <h3>Products</h3>
6: <% for (int i = 0; i < Model.Products.Count; i++)
7: { %>
8: <div style="width: 100px;float:left;">Id</div>
9: <div style="width: 100px;float:left;">
10: <%= Html.TextBox("ID", Model.Products[i].Id) %>
11: </div>
12: <div style="clear:both;"></div>
13: <div style="width: 100px;float:left;">Name</div>
14: <div style="width: 100px;float:left;">
15: <%= Html.TextBox("Name", Model.Products[i].Name) %>
16: </div>
17: <div style="clear:both;"></div>
18: <div style="width: 100px;float:left;">Quantity</div>
19: <div style="width: 100px;float:left;">
20: <%= Html.TextBox("Quantity", Model.Products[i].Quantity)%>
21: </div>
22: <div style="clear:both;"></div>
23: <div style="width: 100px;float:left;">Unit Price</div>
24: <div style="width: 100px;float:left;">
25: <%= Html.TextBox("UnitPrice", Model.Products[i].UnitPrice)%>
26: </div>
27: <div style="clear:both;"><hr /></div>
28: <% } %>
29:
30: <h3>Totals</h3>
31: <div style="width: 100px;float:left;">Sub Total</div>
32: <div style="width: 100px;float:left;">
33: <%= Html.TextBox("SubTotal", Model.Totals.SubTotal)%>
34: </div>
35: <div style="clear:both;"></div>
36: <div style="width: 100px;float:left;">Tax</div>
37: <div style="width: 100px;float:left;">
38: <%= Html.TextBox("Tax", Model.Totals.Tax)%>
39: </div>
40: <div style="clear:both;"></div>
41: <div style="width: 100px;float:left;">Total</div>
42: <div style="width: 100px;float:left;">
43: <%= Html.TextBox("Total", Model.Totals.Total)%>
44: </div>
45: <div style="clear:both;"></div>
46: <p />
47: <input type="submit" name="Submit" value="Submit" />
48: <% } %>
Nothing fancy, just a bunch of div's containing textboxes and a submit button. Just make note that the textboxes have the same name as the property they are going to display. Yea, yea, I know. I'm displaying unit price as a textbox instead of a label, but that's beside the point (and trust me, this will not be how it'll look on the production site!!). The way my controller works is that initially two dummy products are added to the basket object and the Totals are calculated based on what products were added in what quantities and their respective unit price. So the page loads in edit mode, where the user can change the quantity and hit the submit button. In the 'post' version of the action method, the Totals get recalculated and the new total will be displayed on the screen. Here's the code:

1: public ActionResult Index()
2: {
3: Product product1 = new Product
4: {
5: Id = 1,
6: Name = "Product 1",
7: Quantity = 2,
8: UnitPrice = 200m
9: };
10:
11: Product product2 = new Product
12: {
13: Id = 2,
14: Name = "Product 2",
15: Quantity = 1,
16: UnitPrice = 150m
17: };
18:
19: List<Product> products = new List<Product> { product1, product2 };
20:
21: Basket basket = new Basket
22: {
23: Products = products,
24: Totals = ComputeTotals(products)
25: };
26: return View(basket);
27: }
28:
29: [HttpPost]
30: public ActionResult Index(Basket basket)
31: {
32: basket.Totals = ComputeTotals(basket.Products);
33: return View(basket);
34: }

That's that. Now I run the app, I see two products with the totals section below them. I look at the view source and I see that the input controls have the right ID, the right name and the right value as well.

1: <input id="ID" name="ID" type="text" value="1" />
2: <input id="Name" name="Name" type="text" value="Product 1" />
3: ...
4: <input id="ID" name="ID" type="text" value="2" />
5: <input id="Name" name="Name" type="text" value="Product 2" />

So just as a regular user would do, I change the quantity value of one of the products and hit the submit button.
The 'post' version of the Index method gets called, and I had put a break-point on line 32 in the above snippet. When I hovered my mouse over the 'basket' object, happily assuming that the object would be all bound and ready for use, I was surprised to see that both basket.Products and basket.Totals were null. Huh? A little research and I found out that the reason the DefaultModelBinder could not do its job is a naming mismatch on the input controls. What I mean is that when you have to bind to a custom .net type, you need more than just the property name. You need to pass a qualified name to the name property of the input control. I modified my view and the emitted code looked as below:

1: <input id="Product_Name" name="Product.Name" type="text" value="Product 1" />
2: ...
3: <input id="Product_Name" name="Product.Name" type="text" value="Product 2" />
4: ...
5: <input id="Totals_SubTotal" name="Totals.SubTotal" type="text" value="550" />

Now, I update the quantity and hit the submit button, and I see that the Totals object is populated, but the Products list is still null. Once again I went: 'Hmm.. time for more research'. I found out that the way to do this is to provide the name as:

1: <%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %>
2: <!-- this will be rendered as -->
3: <input id="Products_0__ID" name="Products[0].ID" type="text" value="1" />

It was only now that I was able to see both the products and the totals being properly bound in the 'post' action method. Somehow, I feel this is a kinda 'clunky' way of doing things. Seems like the people at MS felt the same way and offered us a much cleaner way to solve this issue. The simple solution is that instead of using a TextBox, we can use either a TextBoxFor or an EditorFor helper method. These directly spit out the name of the input property as 'Products[0].ID' and so on. Cool, right? I totally fell for this and changed my UI to use the EditorFor helper method. At this point, I ran the application, changed the quantity field and pressed the submit button. Of course my basket object parameter in my action method was correctly bound after these changes. I let the app complete the rest of the lines in the action method.
When the page finally rendered, I did see that the quantity was changed to what I entered before the post. But, wait a minute, the totals section did not reflect the changes and showed the old values. My status: COMPLETELY PUZZLED! Just to recap, this is what my 'post' Index method looked like:

1: [HttpPost]
2: public ActionResult Index(Basket basket)
3: {
4: basket.Totals = ComputeTotals(basket.Products);
5: return View(basket);
6: }

A careful debug confirmed that basket.Products[0].Quantity showed the updated value and that the ComputeTotals() method also returned the correct totals. But still, when I passed this basket object to the view, it ended up showing only the old totals values. I began playing a bit with the code and my first guess was that the input controls got their values from the ModelState object. For those who don't know, the ModelState is a temporary storage area that ASP.NET MVC uses to retain incoming attempted values plus binding and validation errors. Also relevant is the fact that input controls populate their values using data taken from, in this order:

1. Previously attempted values recorded in ModelState["name"].Value.AttemptedValue
2. Explicitly provided values (<%= Html.TextBox("name", "Some value") %>)
3. ViewData, by calling ViewData.Eval("name")

FYI: the ViewData dictionary takes precedence over ViewData's Model properties - read more here. These two indicators led to my guess. It took me quite some time, but finally I hit this post where Brad brilliantly explains why this is the preferred behavior. My guess was right, and I accordingly modified my code as follows:

1: [HttpPost]
2: public ActionResult Index(Basket basket)
3: {
4: // read the following posts to see why the ModelState
5: // needs to be cleared before passing it the view
6: // http://forums.asp.net/t/1535846.aspx
7: // http://forums.asp.net/p/1527149/3687407.aspx
8: if (ModelState.IsValid)
9: {
10: ModelState.Clear();
11: }
12:
13: basket.Totals = ComputeTotals(basket.Products);
14: return View(basket);
15: }

What this does is clear the ModelState dictionary when it is valid. This enables the values to be read from the model directly and not from the ModelState.
So the verdict is this: if you need to pass other parameters (like html attributes and the like) to your input control, use

1: <%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %>

since, with EditorFor, there is no direct and simple way of passing this information to the input control. If you don't have to pass any such 'extra' piece of information to the control, then go the EditorFor way. The code used in the post can be found here.
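Editor's note, not part of the original post: as a minimal sketch of the naming behavior discussed above, MVC 2's strongly typed helpers can generate the indexed names automatically, assuming the view is typed to Basket and the collection is indexed with a for loop (not foreach). The model and property names below are reused from the post; the markup is illustrative, not a drop-in replacement.

<%-- Sketch only: assumes the view declares Inherits="System.Web.Mvc.ViewPage<Basket>" --%>
<% for (int i = 0; i < Model.Products.Count; i++) { %>
    <%-- Renders name="Products[0].Id", name="Products[0].Quantity", ...
         which the DefaultModelBinder maps back onto Basket.Products on post --%>
    <%= Html.TextBoxFor(m => m.Products[i].Id) %>
    <%= Html.TextBoxFor(m => m.Products[i].Quantity) %>
<% } %>

If extra html attributes are needed on the inputs, the string-based Html.TextBox call from the verdict above remains the fallback.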

    Read the article

  • ZFS for Database Log Files

    - by user12620111
    I've been troubled by drop outs in CPU usage in my application server, characterized by the CPUs suddenly going from close to 90% CPU busy to almost completely CPU idle for a few seconds. Here is an example of a drop out as shown by a snippet of vmstat data taken while the application server is under a heavy workload. # vmstat 1  kthr      memory            page            disk          faults      cpu  r b w   swap  free  re  mf pi po fr de sr s3 s4 s5 s6   in   sy   cs us sy id  1 0 0 130160176 116381952 0 16 0 0 0 0  0  0  0  0  0 207377 117715 203884 70 21 9  12 0 0 130160160 116381936 0 25 0 0 0 0 0  0  0  0  0 200413 117162 197250 70 20 9  11 0 0 130160176 116381920 0 16 0 0 0 0 0  0  1  0  0 203150 119365 200249 72 21 7  8 0 0 130160176 116377808 0 19 0 0 0 0  0  0  0  0  0 169826 96144 165194 56 17 27  0 0 0 130160176 116377800 0 16 0 0 0 0  0  0  0  0  1 10245 9376 9164 2  1 97  0 0 0 130160176 116377792 0 16 0 0 0 0  0  0  0  0  2 15742 12401 14784 4 1 95  0 0 0 130160176 116377776 2 16 0 0 0 0  0  0  1  0  0 19972 17703 19612 6 2 92  14 0 0 130160176 116377696 0 16 0 0 0 0 0  0  0  0  0 202794 116793 199807 71 21 8  9 0 0 130160160 116373584 0 30 0 0 0 0  0  0 18  0  0 203123 117857 198825 69 20 11 This behavior occurred consistently while the application server was processing synthetic transactions: HTTP requests from JMeter running on an external machine. I explored many theories trying to explain the drop outs, including: Unexpected JMeter behavior Network contention Java Garbage Collection Application Server thread pool problems Connection pool problems Database transaction processing Database I/O contention Graphing the CPU %idle led to a breakthrough: Several of the drop outs were 30 seconds apart. With that insight, I went digging through the data again and looking for other outliers that were 30 seconds apart. In the database server statistics, I found spikes in the iostat "asvc_t" (average response time of disk transactions, in milliseconds) for the disk drive that was being used for the database log files. Here is an example:                     extended device statistics     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 2053.6    0.0 8234.3  0.0  0.2    0.0    0.1   0  24 c3t60080E5...F4F6d0s0     0.0 2162.2    0.0 8652.8  0.0  0.3    0.0    0.1   0  28 c3t60080E5...F4F6d0s0     0.0 1102.5    0.0 10012.8  0.0  4.5    0.0    4.1   0  69 c3t60080E5...F4F6d0s0     0.0   74.0    0.0 7920.6  0.0 10.0    0.0  135.1   0 100 c3t60080E5...F4F6d0s0     0.0  568.7    0.0 6674.0  0.0  6.4    0.0   11.2   0  90 c3t60080E5...F4F6d0s0     0.0 1358.0    0.0 5456.0  0.0  0.6    0.0    0.4   0  55 c3t60080E5...F4F6d0s0     0.0 1314.3    0.0 5285.2  0.0  0.7    0.0    0.5   0  70 c3t60080E5...F4F6d0s0 Here is a little more information about my database configuration: The database and application server were running on two different SPARC servers. Storage for the database was on a storage array connected via 8 gigabit Fibre Channel Data storage and log file were on different physical disk drives Reliable low latency I/O is provided by battery backed NVRAM Highly available: Two Fibre Channel links accessed via MPxIO Two Mirrored cache controllers The log file physical disks were mirrored in the storage device Database log files on a ZFS Filesystem with cutting-edge technologies, such as copy-on-write and end-to-end checksumming Why would I be getting service time spikes in my high-end storage? 
First, I wanted to verify that the database log disk service time spikes aligned with the application server CPU drop outs, and they did: At first, I guessed that the disk service time spikes might be related to flushing the write through cache on the storage device, but I was unable to validate that theory. After searching the WWW for a while, I decided to try using a separate log device: # zpool add ZFS-db-41 log c3t60080E500017D55C000015C150A9F8A7d0 The ZFS log device is configured in a similar manner as described above: two physical disks mirrored in the storage array. This change to the database storage configuration eliminated the application server CPU drop outs: Here is the zpool configuration: # zpool status ZFS-db-41   pool: ZFS-db-41  state: ONLINE  scan: none requested config:         NAME                                     STATE         ZFS-db-41                                ONLINE           c3t60080E5...F4F6d0  ONLINE         logs           c3t60080E5...F8A7d0  ONLINE Now, the I/O spikes look like this:                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1053.5    0.0 4234.1  0.0  0.8    0.0    0.7   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1131.8    0.0 4555.3  0.0  0.8    0.0    0.7   0  76 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1167.6    0.0 4682.2  0.0  0.7    0.0    0.6   0  74 c3t60080E5...F8A7d0s0     0.0  162.2    0.0 19153.9  0.0  0.7    0.0    4.2   0  12 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1247.2    0.0 4992.6  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0     0.0   41.0    0.0   70.0  0.0  0.1    0.0    1.6   0   2 c3t60080E5...F4F6d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1241.3    0.0 4989.3  0.0  0.8    0.0    0.6   0  75 c3t60080E5...F8A7d0s0                     extended device statistics                  r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device     0.0 1193.2    0.0 4772.9  0.0  0.7    0.0    0.6   0  71 c3t60080E5...F8A7d0s0 We can see the steady flow of 4k writes to the ZIL device from O_SYNC database log file writes. The spikes are from flushing the transaction group. Like almost all problems that I run into, once I thoroughly understand the problem, I find that other people have documented similar experiences. Thanks to all of you who have documented alternative approaches. Saved for another day: now that the problem is obvious, I should try "zfs:zfs_immediate_write_sz" as recommended in the ZFS Evil Tuning Guide. References: The ZFS Intent Log Solaris ZFS, Synchronous Writes and the ZIL Explained ZFS Evil Tuning Guide: Cache Flushes ZFS Evil Tuning Guide: Tuning ZFS for Database Performance

    Read the article

  • Anyone have ideas for solving the "n items remaining" problem on Internet Explorer?

    - by CMPalmer
In my ASP.Net app, which is javascript and jQuery heavy, but also uses master pages and .Net Ajax pieces, I am consistently seeing on the status bar of IE 6 (and occasionally IE 7) the message "2 items remaining" or "15 items remaining" followed by "loading somegraphicsfile.png|gif ." This message never goes away and may or may not prevent some page functionality from running (it certainly seems to bog down, but I'm not positive). I can cause this to happen 99% of the time by just refreshing an .aspx page, but the number of items and, sometimes, the file it mentions varies. Usually it is 2, 3, 12, 13, or 15. I've Googled for answers and there are several suggestions or explanations. Some of them haven't worked for us, and others aren't practical for us to implement or try. Here are some of the ideas/theories:

- IE isn't caching images right, so it repeatedly asks for the same image if the image is repeated on the page, and the server assumes that it should be cached locally since it's already served it in that page context.
- IE displays the images correctly, but sits and waits for a server response that never comes. Typically the file it says it is waiting on is repeated on the page.
- The page is using PNG graphics with transparency. Indeed it is, but they are jQuery-UI Themeroller generated graphics which, according to the jQuery-UI folks, are IE safe. The jQuery-UI components are the only things using PNGs. All of our PNG references are in CSS, if that helps. I've changed some of the graphics from PNG to GIF, but it is just as likely to say it's waiting for somegraphicsfile.png as it is for somegraphicsfile.gif.
- Images are being specified in CSS and/or JavaScript but are on things that aren't currently being displayed (display: none items, for example). This may be true, but if it is, then I would think preloading images would work; so far, adding a preloader doesn't do any good.
- IIS's caching policy is confusing the browser. If this is true, it is only Microsoft server SW having problems with Microsoft's browser (which doesn't surprise me at all). Unfortunately, I don't have much control over the IIS configuration that will be hosting the app.

Has anyone seen this and found a way to combat it? Particularly on ASP.Net apps with jQuery and jQuery-UI?

UPDATE

One other data point: on at least one of the pages, just commenting out the jQuery-UI Datepicker component setup causes the problem to go away, but I don't think (or at least I'm not sure) that it fixes all of the pages. If it does "fix" them, I'll have to swap out plug-ins because that functionality needs to be there. There don't seem to be any open issues against jQuery-UI on IE6/7 currently...

UPDATE 2

I checked the IIS settings and "enable content expiration" was not set on any of my folders. Unchecking that setting was a common suggestion for fixing this problem. I have another, simpler, page that I can consistently create the error on. I'm using the jQuery-UI 1.6rc6 file (although I've also tried jQuery-UI 1.7.1 with the same results). The problem only occurs when I refresh the page that contains the jQuery-UI Datepicker. If I comment out the Datepicker setup, the problem goes away. Here are a few things I notice when I do this: This page always says "(1 item remaining) Downloading picture http:///images/Calendar_scheduleHS.gif", but only when reloading. When I look at HTTP logging, I see that it requests that image from the server every time it is dynamically turned on, without regard to caching.
All of the requests for that graphic are complete and return the graphic correctly. None are marked code 200 or 304 (indicating that the server is telling IE to use the cached version). Why it says waiting on that graphic when all of the requests have completed I have no idea. There is a single other graphic on the page (one of the UI PNG files) that has a code 304 (Not Modified). On another page where I managed to log HTTP traffic with "2 items remaining", two different graphic files (both UI PNGs) had a 304 as well (but neither was the one listed as "Downloading". This error is not innocuous - the page is not fully responsive. For example, if I click on one of the buttons which should execute a client-side action, the page refreshes. Going away from the page and coming back does not produce the error. I have moved the script and script references to the bottom of the content and this doesn't affect this problem. The script is still running in the $(document).ready() though (it's too hairy to divide out unless I absolutely have to). FINAL UPDATE AND ANSWER There were a lot of good answers and suggestions below, but none of them were exactly our problem. The closest one (and the one that led me to the solution) was the one about long running JavaScript, so I awarded the bounty there (I guess I could have answered it myself, but I'd rather reward info that leads to solutions). Here was our solution: We had multiple jQueryUI datepickers that were created on the $(document).ready event in script included from the ASP.Net master page. On this client page, a local script's $(document).ready event had script that destroyed the datepickers under certain conditions. We had to use "destroy" because the previous version of datepicker had a problem with "disable". When we upgraded to the latest version of jQuery UI (1.7.1) and replaced the "destroy"s with "disable"s for the datepickers, the problem went away (or mostly went away - if you do things too fast while the page is loading, it is still possible to get the "n items remaining" status). My theory as to what was happening goes like this: The page content loads and has 12 or so text boxes with the datepicker class. The master page script creates datepickers on those text boxes. IE queues up requests for each calendar graphic independently because IE doesn't know how to properly cache dynamic image requests. Before the requests get processed, the client area script destroys those datepickers so the graphics are no longer needed. IE is left with some number of orphaned requests that it doesn't know what to do with.

    Read the article

  • Error 2013: Lost connection to MySQL server during query when executing CHECK TABLE FOR UPGRADE

    - by Dean Richardson
    I just upgraded Ubuntu from 11.10 to 12.04. My rails app now returns the (passenger) error "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) (Mysql2::Error)". I get a similar error when I try to access mysql at the command line on my Ubuntu server using mysql -u root -p. I have mysql-server 5.5 installed. I've checked and mysql is not running. When I try to restart it, it fails. Here are some key lines from the tail of /var/log/syslog after an attempted restart: dean@dgwjasonfried:/etc/mysql$ tail -f /var/log/syslog Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: /usr/bin/mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... FOR UPGRADE' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: FATAL ERROR: Upgrade failed Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.assets OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.ecd_types OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5124]: Checking for insecure root accounts. Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769657] init: mysql main process (5064) terminated with status 1 Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769697] init: mysql respawning too fast, stopped Here is most of /etc/mysql/my.cnf: Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock Here is entries for some specific programs The following values assume you have at least 32M ram This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] Basic Settings user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql skip-external-locking Instead of skip-networking the default is now to listen only on localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 And here are permissions for var/run/mysqld/mysqld.sock: srwxrwxrwx 1 mysql mysql 0 Mar 7 09:18 mysqld.sock I'd be grateful for any suggestions the community might have. I reviewed the related questions here and attempted some of the fixes offered but to no avail. Thanks! Dean Richardson Update: Thanks to quanta's suggestion, I looked at the /var/log/mysql/error.log file. I found error messages relating to pointers, fatal signals, and more stuff that I really couldn't make much sense of. I also found mysql man page references, however. One suggested that I try starting mysqld with the --innodb_force_recovery=# option, then attempt to dump (or drop) the offending/corrupted database or table. 
I worked through the escalating option levels one-by-one (innodb_force_recovery=1, innodb_force_recovery=2, etc.) This allowed me to successfully run mysql -u root -p from the command line and execute several commands. I was able to run queries on my production database, but any attempt to query, dump, or even drop my development database raised an error and led to me losing the connection to mysql. So I've made progress, but until I'm somehow able to drop or repair my development db I'm still unable to get my app to load. Any further advice or suggestions? Thanks! Dean Update: Right after running sudo mysqld --innodb_force_recover=1 from the command line, the error.log contains this: Right after retrying sudo mysqld --innodb_force_recover=1, The error.log file shows this: 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) Then after mysql -u root -p and mysql> drop database molex_app_development; ERROR 2013 (HY000): Lost connection to MySQL server during query mysql> the error.log contains: dean@dgwjasonfried:/var/log/mysql$ tail -f error.log /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a3ff9ecbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6a1c004bd8): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... 
InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 130308 4:58:23 [ERROR] Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64). 130308 4:58:23 InnoDB: Assertion failure in thread 140168992810752 in file fsp0fsp.c line 3639 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 10:58:23 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f7ba4f6c2f0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f7ba3065e60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f7ba3609039] mysqld(handle_fatal_signal+0x483)[0x7f7ba34cf9c3] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f7ba2220cb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f7ba188c425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f7ba188fb8b] mysqld(+0x65e0fc)[0x7f7ba37160fc] mysqld(+0x602be6)[0x7f7ba36babe6] mysqld(+0x635006)[0x7f7ba36ed006] mysqld(+0x5d7072)[0x7f7ba368f072] mysqld(+0x5d7b9c)[0x7f7ba368fb9c] mysqld(+0x6a3348)[0x7f7ba375b348] mysqld(+0x6a3887)[0x7f7ba375b887] mysqld(+0x5c6a86)[0x7f7ba367ea86] mysqld(+0x5ae3a7)[0x7f7ba36663a7] mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0x16d)[0x7f7ba34d3ffd] mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x568)[0x7f7ba3417f78] mysqld(_Z11mysql_rm_dbP3THDPcbb+0x8aa)[0x7f7ba339780a] mysqld(_Z21mysql_execute_commandP3THD+0x394c)[0x7f7ba33b886c] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f7ba33bb28f] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f7ba33bc6e0] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f7ba346119d] mysqld(handle_one_connection+0x50)[0x7f7ba3461200] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f7ba2218e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7ba1949cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f7b7c004b60): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. --Dean

    Read the article

  • Wireless access point -> Powerline -> Router -> Internet, should this work?

    - by Anthony
    My network at home used to be a laptop and desktop connected wirelessly to a single Wireless ADSL router, a Cisco 877W. Wireless reception around the house with this setup was quite unreliable, so I've gone about looking to improve it. I purchased some Belkin Gigabit powerline adapters and I've got these working fine. I can hook a computer up to one of the powerline adapters, and with the other one plugged into the ADSL router the computer has internet access. Additionally I can hook a Netgear DG834G Wireless ADSL router into it with the adsl not plugged in, and after turning off DHCP can RJ45 a computer up to the network. Everything works fine. However, if I setup a wireless network on the Netgear then any computer that connects wirelessly to it cannot access the internet. It gets an IP address very slowly via DHCP which is a good one, but it cannot access the internet. It can however communicate with the RJ45'd computer also connected to the Netgear. I wondered whether this could be a problem with the Netgear so I've borrowed a Cisco Aironet 1200 and got this working fine when it's attached directly to the primary ADSL router. I can connect to it wireless and get onto the internet. However, if I then plug it into the Netgear I can communicate with other devices attached to the Netgear, but can't get any further than the Netgear. All the while though the other devices RJ45'd to the Netgear are communicating with the internet just fine. I'm starting to suspect it's one of two things causing the problem: 1) For some reason the belkin powerline adapters don't like carrying wireless-originating signals. Could this be possible? 2) The primary Cisco ADSL router doesn't want to communicate with other devices on my network more than one hop away from it. I'm making an assumption here that within the Netgear box the wireless and wired sides are handled differently. Could this be true? Has anyone successfully setup something similar to what I'm trying, with a wireless device on the otherside of a pair of powerline connectors? Update 06/07/2010 - Response to irrational John 28 June Thanks for the answer John - and for clearing up some of my questions. The model number of the belkin powerline adapters are F5D4076. Security was apparently enabled by default on them, and I didn't change them from their default setting. The network diagram in your answer shows exactly what I'm trying to setup: I've followed that guide and I'm still not able to get things working properly. The thing that perplexes me is that wired network traffic works just fine - it's only the wireless traffic that doesn't. This is with the same laptop, and the same DHCP or static IPs. "1. What IP addresses did you assign to each router? What subnet masks are you using?" - subnet is 255.255.255.0, the router connected to the adsl is 192.168.153.1 and that has the DHCP server. The access point on the other side of the powerline adapters I've tried both a static IP of 192.168.153.110, same subnet, and a DHCP-assigned IP. The other devices are DHCP, although I also tried manually entering IP settings. "2. Have you correctly enabled DHCP on only one of the routers and disabled it on all the others?" Yes I have - only the internet-connected router has DHCP enabled. The IP range for the DHCP is from 192.168.153.11 - 192.168.153.200. The strange thing is that wired connections work fine on the LAN, plugged into any router, work fine - it's only the wireless connections that aren't working when they're plugged into the non-primary AP. 
"Since the routers you are using appear to integrate an ADSL modem I'm assuming there is no WAN port on them." There's no NAT within the LAN, and all wired connections are connected to LAN ports. It's something wrong with the wireless - wired works fine throughout the whole LAN. Update 06/07/2010 - Response to irrational John 29 June The diagram you've drawn in your answer shows pretty much exactly what I'm trying to do. I've spent another evening trying different things and made some progress but I'm still scratching my head. I've borrowed a Netgear access point and been trying with this, and the strange thing is that my PC is working now - this is a Windows 7 PC connected to the access point in the position of where the DG834G is in the diagram. Meanwhile, however, I have an old Powerbook G4 12" I use for music, and while that has a DHCP-assigned IP address, it's not getting any network throughput to either LAN or internet addresses. To make matters more strange, my phone appears to be intermittently working when it's on the wifi. The access point is a Netgear WPN802v1, DHCP, NAT both switched off, running firmware 2.0.9.0. Last night I set it up with exactly the same settings, and similar to tonight I could get a couple of devices to work, and a couple not to. By the morning, however, everything had stopped working - nothing could get a DHCP IP address. I rebooted the 877W earlier this evening and I'm wondering whether this is why a few things are working now. "Could it be possible that the issue could be with the 877W?" I didn't configure this - is it possible that the DHCP server only likes assigning devices that are immediately attached to it? Or similar, could a firewall be stopping too many addresses that are coming through one device? (ie. the Access Point) This could explain why devices are working at the start but then not by the end. In reply to your questions, "1. I looked at the Netgear DG834G support page. There are five versions of this router. Which version do you have? Netgear usually lists this on the label on the bottom of the router. What version of the firmware does it have?" It's a DG834Gv3, and the firmware is the last on the netgear site version 4.01.40. "3. Not knowing which version you have, I glanced at the reference manual for the DG834G v3. In the section for Wireless Settings under the subsection Wireless Access Point there is a check box for a Wireless Isolation setting. If you have this setting it should be off/unchecked. If it is checked then any device connected via wireless would not be able to talk to any other device on the LAN. This sounds like your problem so maybe this is the cause?" I've checked this and it's switched off. I've made a change to the IP of the access point to something outside the DHCP range - it's now 192.158.153.5, with DHCP starting at 11 and going up to 254. Thanks for the tip about this - I only have a few devices so wouldn't anticipate the DHCP server assigning up to 110, but better safe than sorry. Finally one more thing I thought I should add, is with the Powerbook G4 that's not working - it's getting a DHCP IP address and it can communicate with the WPN802 as I can visit the administration page. Anything further than this, however, it can't reach; I can't administrate the 192.168.153.1 (877W router). Strangely, however, when I open Finder on the same powerbook it's detecting my NAS which is attached directly via wire to the 877W. If I try to browse it, it says connection failed. 
RE: "Perhaps the problem with your Powerbook is with DNS?.." The IP settings on the powerbook are identical to that of the PC with the exception of the IP address; the PC is 192.168.153.17 and the powerbook is 192.168.153.12. Subnets are the same, 255.255.255.0 and default gateway is the same, .1, and the DNS servers are the same. I administrate the 877W by going to 192.168.153.1 in the browser. This is what isn't working from the Powerbook, despite the PC working fine when I do the same. Meanwhile, however, I can administrate the AP on 192.168.153.5 from both PC and Powerbook Update 06/07/2010 - FINAL RESOLUTION of sorts: First off, sorry for the length of this question. I need start to practice a more concise writing style, so I'm going to try to keep this bit brief. After much fiddling, and with the hugely-appreciated help of irrational John, I have come to the conclusion that it's something wrong with the powerbook. I believe that this was perhaps the reason I doubted things worked at the very beginning. I now have the original DG834Gv3 running both wirelessly and wired, and both wired devices and wireless devices get internet connectivity. The only anomaly is the powerbook which I've had to keep wired, as no matter what I do it refuses to work wirelessly. I still have suspicions that the 877W isn't quite right; I'm fairly sure that if I RJ45 the powerline adapter into a different LAN port on it then everything will break. I've just about run out of patience to test this further, and I think I need to go into the 877W's config to match the 877w's lan port's settings. I'm accepting irrational John's answer as he's been enormously helpful, way above the call of duty, and for this line he wrote: Beats the heck out of me. which in the midst of great frustration made me chuckle, and for a sentence in one of his comments to the same answer: If it is specific to the Powerbook I would put that issue aside until after you feel you have the rest of your LAN and the additional WAP all working together correctlyt It was this second sentence that made me put the powerbook aside and concentrate on the other devices that ultimately led me to getting things working.

    Read the article

  • Adapting a HTML/CSS dropdown menu to multi-level

    - by Adam Nygate
    Ive been trying to make the original dropdown into multi level for a site im working on. All of my attempts have failed (. For some reason i can only do "margin-right" to align the elements, and this causes some problems. I think it has something to do with the position attribute. Here is my HTML: <ol id="nav"> <li><a href="index.php">Home</a></li> <li class="dropdown_alignedLeft"> <a href="">Products</a> <ul><li class="dropdown_alignedRight"> <a href="">iPoP</a> <ul style="margin-right:-400px; top:0px;-webkit-border-top-right-radius: 5px;border-top-right-radius: 5px;-moz-border-radius-topright: 5px;"><li><a href="customers.php?category=ipop">iPoP - Network Solutions for Vessels</a></li></ul><li class="dropdown_alignedRight"> <a href="">Cameras</a> <ul style="margin-right:-400px; top:0px;-webkit-border-top-right-radius: 5px;border-top-right-radius: 5px;-moz-border-radius-topright: 5px;"><li><a href="customers.php?category=icam">iCam 501 Ultra - Intrinsically Safe Digital Camera with Flash</a></li></ul><li class="dropdown_alignedRight"> <a href="">BNWAS</a> <ul style="margin-right:-400px; top:0px;-webkit-border-top-right-radius: 5px;border-top-right-radius: 5px;-moz-border-radius-topright: 5px;"><li><a href="customers.php?category=bnwas">BNWAS - Bridge Navigation Watch Alarm System</a></li></ul><li class="dropdown_alignedRight"> <a href="">Lighting</a> <ul style="margin-right:-400px; top:0px;-webkit-border-top-right-radius: 5px;border-top-right-radius: 5px;-moz-border-radius-topright: 5px;"><li><a href="customers.php?category=peli">Peli 2690 - Intrinsically Safe LED Head Lamp</a></li></ul><li class="dropdown_alignedRight"> <a href="">Communication</a> <ul style="margin-right:-400px; top:0px;-webkit-border-top-right-radius: 5px;border-top-right-radius: 5px;-moz-border-radius-topright: 5px;"><li><a href="customers.php?category=handy">Ex-Handy 06 - Intrinsically Safe Cell Phone</a></li></ul> </ul> <li class="dropdown_alignedLeft"> <a href="">Customers</a> <ul> <li><a href="customers.php?category=maritime">Maritime</a></li> <li><a href="customers.php?category=non">Non-Maritime</a></li> <li class="dropdown_lastItem"><a href="customers.php?category=organizations">Regulatory Organizations</a></li> </ul> <li><a href="order.php">Product Enquiry</a></li> <li><a href="contact.php">Contact Us</a></li> <li class="dropdown_alignedLeft"> <a href="">Company</a> <ul> <!-- <li><a href="">About Us</a></li> --> <li><a href="newsandpr.php?category=News">News</a></li> <li class="dropdown_lastItem"><a href="newsandpr.php?category=Press Release">Press Releases</a></li> </ul> </ol> And my CSS: #nav { float:right; margin:15px 0 0; } #nav li { float:left; } #nav li a { display:block; font-family:"PT Sans","Helvetica Neue",Arial,sans-serif; font-size:16px; text-decoration:none; color:#2B95C8; padding:10px 20px 20px; } .dropdown_alignedLeft,.dropdown_alignedRight { position:relative; } #nav .dropdown_alignedLeft>a,#nav .dropdown_alignedRight>a { background:url(../images/dropdown_arrow_blue.png) no-repeat top right; padding:10px 30px 20px 20px; } #nav .dropdown_alignedLeft:hover>a,#nav .dropdown_alignedRight:hover>a { -moz-border-radius-topleft:5px; -moz-border-radius-topright:5px; -moz-border-radius-bottomright:0; -moz-border-radius-bottomleft:0; -webkit-border-top-left-radius:5px; -webkit-border-top-right-radius:5px; -webkit-border-bottom-right-radius:0; -webkit-border-bottom-left-radius:0; border-top-left-radius:5px; border-top-right-radius:5px; border-bottom-right-radius:0; border-bottom-left-radius:0; 
color:#FFF; background:#2378A1 url(../images/dropdown_arrow_blue.png) no-repeat bottom right; } .dropdown_alignedLeft ul,.dropdown_alignedRight ul { display:none; } #nav .dropdown_alignedLeft:hover>ul,#nav .dropdown_alignedRight:hover>ul { display:block; z-index:100; position:absolute; top:50px; -moz-border-radius-topleft:0; -moz-border-radius-topright:0; -moz-border-radius-bottomright:5px; -moz-border-radius-bottomleft:5px; -webkit-border-top-left-radius:0; -webkit-border-top-right-radius:0; -webkit-border-bottom-right-radius:5px; -webkit-border-bottom-left-radius:5px; border-top-left-radius:0; border-top-right-radius:0; border-bottom-right-radius:5px; border-bottom-left-radius:5px; background:#2378A1; padding:0 0 6px; } #nav .dropdown_alignedRight:hover>ul { top:50px; right:0; text-align:right; } #nav li ul li { float:none; border-bottom:1px dashed #2B95C8; margin:0 20px; } #nav li ul li.dropdown_innerTitle { border:none; font-family:"Helvetica Neue",Arial,sans-serif; font-size:15px; white-space:nowrap; color:#C8DDE7; margin:10px 20px 0; padding:10px 0; } #nav li ul li.dropdown_lastItem { border:none; } #nav li ul li a { font-family:"Helvetica Neue",Arial,sans-serif; font-size:13px; color:#FFF; white-space:nowrap; padding:10px 0 9px; } #nav>li:hover>a,#nav li .current_page { color:#2378A1; background:url(../images/current_page_arrow_blue.png) no-repeat center bottom; } #nav li ul li a:hover { color: #C8DDE7; } For a live version of the menu, please go here: JSFiddle - Live Menu
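    The inline margin-right:-400px offsets in the markup above usually indicate that the nested <ul> is not being positioned against its parent <li>. Since the stylesheet already gives .dropdown_alignedLeft/.dropdown_alignedRight items position:relative, one common approach is to absolutely position the third-level <ul> with right:100% (or left:100%) so it flies out beside its parent item instead of being dragged over with negative margins. The snippet below is only a rough sketch of that idea, reusing the class names from the markup; the exact offsets and border-radius rules are assumptions and would still need tuning against the live JSFiddle.

        /* Sketch only: fly-out submenus positioned relative to their parent li.   */
        /* (.dropdown_alignedRight items are already position:relative above.)     */
        #nav li ul li.dropdown_alignedRight > ul {
            display: none;
        }

        #nav li ul li.dropdown_alignedRight:hover > ul {
            display: block;
            position: absolute;
            top: 0;           /* line up with the parent menu item                 */
            right: 100%;      /* open to the left of the parent dropdown           */
            /* use left: 100% instead to open to the right of the parent           */
            margin: 0;        /* no negative margins needed                        */
            background: #2378A1;
        }

    Because these selectors are more specific than the existing #nav .dropdown_alignedRight:hover>ul rule, they should win for the inner level without disturbing the top-level dropdowns.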

    Read the article

  • Array sorting efficiency... Beginner needs advice

    - by SoleSoft
    I'll start by saying I am very much a beginner programmer, this is essentially my first real project outside of using learning material. I've been making a 'Simon Says' style game (the game where you repeat the pattern generated by the computer) using C# and XNA, the actual game is complete and working fine but while creating it, I wanted to also create a 'top 10' scoreboard. The scoreboard would record player name, level (how many 'rounds' they've completed) and combo (how many buttons presses they got correct), the scoreboard would then be sorted by combo score. This led me to XML, the first time using it, and I eventually got to the point of having an XML file that recorded the top 10 scores. The XML file is managed within a scoreboard class, which is also responsible for adding new scores and sorting scores. Which gets me to the point... I'd like some feedback on the way I've gone about sorting the score list and how I could have done it better, I have no other way to gain feedback =(. I know .NET features Array.Sort() but I wasn't too sure of how to use it as it's not just a single array that needs to be sorted. When a new score needs to be entered into the scoreboard, the player name and level also have to be added. These are stored within an 'array of arrays' (10 = for 'top 10' scores) scoreboardComboData = new int[10]; // Combo scoreboardTextData = new string[2][]; scoreboardTextData[0] = new string[10]; // Name scoreboardTextData[1] = new string[10]; // Level as string The scoreboard class works as follows: - Checks to see if 'scoreboard.xml' exists, if not it creates it - Initialises above arrays and adds any player data from scoreboard.xml, from previous run - when AddScore(name, level, combo) is called the sort begins - Another method can also be called that populates the XML file with above array data The sort checks to see if the new score (combo) is less than or equal to any recorded scores within the scoreboardComboData array (if it's greater than a score, it moves onto the next element). If so, it moves all scores below the score it is less than or equal to down one element, essentially removing the last score and then places the new score within the element below the score it is less than or equal to. If the score is greater than all recorded scores, it moves all scores down one and inserts the new score within the first element. If it's the only score, it simply adds it to the first element. When a new score is added, the Name and Level data is also added to their relevant arrays, in the same way. What a tongue twister. Below is the AddScore method, I've added comments in the hope that it makes things clearer O_o. You can get the actual source file HERE. Below the method is an example of the quickest way to add a score to follow through with a debugger. public static void AddScore(string name, string level, int combo) { // If the scoreboard has not yet been filled, this adds another 'active' // array element each time a new score is added. The actual array size is // defined within PopulateScoreBoard() (set to 10 - for 'top 10' if (totalScores < scoreboardComboData.Length) totalScores++; // Does the scoreboard even need sorting? 
if (totalScores > 1) { for (int i = totalScores - 1; i > - 1; i--) { // Check to see if score (combo) is greater than score stored in // array if (combo > scoreboardComboData[i] && i != 0) { // If so continue to next element continue; } // Check to see if score (combo) is less or equal to element 'i' // score && that the element is not the last in the // array, if so the score does not need to be added to the scoreboard else if (combo <= scoreboardComboData[i] && i != scoreboardComboData.Length - 1) { // If the score is lower than element 'i' and greater than the last // element within the array, it needs to be added to the scoreboard. This is achieved // by moving each element under element 'i' down an element. The new score is then inserted // into the array under element 'i' for (int j = totalScores - 1; j > i; j--) { // Name and level data are moved down in their relevant arrays scoreboardTextData[0][j] = scoreboardTextData[0][j - 1]; scoreboardTextData[1][j] = scoreboardTextData[1][j - 1]; // Score (combo) data is moved down in relevant array scoreboardComboData[j] = scoreboardComboData[j - 1]; } // The new Name, level and score (combo) data is inserted into the relevant array under element 'i' scoreboardTextData[0][i + 1] = name; scoreboardTextData[1][i + 1] = level; scoreboardComboData[i + 1] = combo; break; } // If the method gets the this point, it means that the score is greater than all scores within // the array and therefore cannot be added in the above way. As it is not less than any score within // the array. else if (i == 0) { // All Names, levels and scores are moved down within their relevant arrays for (int j = totalScores - 1; j != 0; j--) { scoreboardTextData[0][j] = scoreboardTextData[0][j - 1]; scoreboardTextData[1][j] = scoreboardTextData[1][j - 1]; scoreboardComboData[j] = scoreboardComboData[j - 1]; } // The new number 1 top name, level and score, are added into the first element // within each of their relevant arrays. scoreboardTextData[0][0] = name; scoreboardTextData[1][0] = level; scoreboardComboData[0] = combo; break; } // If the methods get to this point, the combo score is not high enough // to be on the top10 score list and therefore needs to break break; } } // As totalScores < 1, the current score is the first to be added. Therefore no checks need to be made // and the Name, Level and combo data can be entered directly into the first element of their relevant // array. else { scoreboardTextData[0][0] = name; scoreboardTextData[1][0] = level; scoreboardComboData[0] = combo; } } } Example for adding score: private static void Initialize() { scoreboardDoc = new XmlDocument(); if (!File.Exists("Scoreboard.xml")) GenerateXML("Scoreboard.xml"); PopulateScoreBoard("Scoreboard.xml"); // ADD TEST SCORES HERE! AddScore("EXAMPLE", "10", 100); AddScore("EXAMPLE2", "24", 999); PopulateXML("Scoreboard.xml"); } In it's current state the source file is just used for testing, initialize is called within main and PopulateScoreBoard handles the majority of other initialising, so nothing else is needed, except to add a test score. I thank you for your time!
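    For what it's worth, the manual element-shifting above can usually be avoided by keeping one object per scoreboard row and letting the framework sort it. The snippet below is only a sketch of that idea, not a drop-in replacement for the existing Scoreboard class: the ScoreEntry type, the list field and the trim-to-ten step are illustrative assumptions, and the XML load/save code would sit on top of it unchanged.

        using System;
        using System.Collections.Generic;

        // One entry per scoreboard row instead of three parallel arrays.
        public class ScoreEntry
        {
            public string Name;
            public string Level;   // kept as string, as in the original array
            public int Combo;
        }

        public static class Scoreboard
        {
            private const int MaxEntries = 10;   // "top 10"
            private static readonly List<ScoreEntry> entries = new List<ScoreEntry>();

            public static void AddScore(string name, string level, int combo)
            {
                entries.Add(new ScoreEntry { Name = name, Level = level, Combo = combo });

                // Highest combo first; List<T>.Sort uses the same engine as Array.Sort.
                entries.Sort((a, b) => b.Combo.CompareTo(a.Combo));

                // Drop anything below the top 10.
                if (entries.Count > MaxEntries)
                    entries.RemoveRange(MaxEntries, entries.Count - MaxEntries);
            }
        }

    Sorting ten elements on every insert is cheap, so the simpler code is unlikely to cost anything measurable compared with the hand-rolled insertion above.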

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest Technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I attempted to attend almost all the technology events here, I have not seen any bigger or better event in Indian subcontinents other than this. There are 21 Technical Tracks at Tech·Ed India 2010 that span more than 745 learning opportunities. I was fortunate enough to be a part of this whole event as a speaker and a delegate, as well. TechEd India Speaker Badge and A Token of Lifetime Hotel Selection I presented three different sessions at TechEd India and was also a part of panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I stay away from my family occasionally. For this reason, I took my wife – Nupur and daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized so as to maximize my time bonding with my family and to have more time in networking with technology community, at the same time. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The cost of the hotel was a bit pricey, but looking at all the advantages, I had decided to ask for a booking there. Hotel Lalit Ashok Nupur Dave and Shaivi Dave Arrival Day – DAY 0 – April 11, 2010 I reached the event a day earlier, and that was one wise decision for I was able to relax a bit and go over my presentation for the next day’s course. I am a kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft who helped me out with a few of the logistics issues that occured the day before. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was really very kind as he talked to me regarding my subsequent session. He gave me some suggestions which were really helpful that I was able to incorporate them during my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, being extremely busy with logistics; however, in those busy times, he did find some good spare time to have a chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta,Vinod, Bijoy, Harish, Sachin and Ahishek (of course!). Map of TechEd India 2010 Event Day 1 – April 12, 2010 From morning until night time, today was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somaseger where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than- life uniform screen. This was truly one to show. 
The title music of the keynote was very interesting and it featured Bijoy Singhal as the model. It was interesting to talk to him afterwards, when we laughed at jokes together about his modeling assignment. TechEd India Keynote Opening Featuring Bijoy TechEd India 2010 Keynote – S. Somasegar Time: 11:15pm – 11:45pm Session 1: True Lies of SQL Server – SQL Myth Buster Following the excellent keynote, I had my very first session on the subject of SQL Server Myth Buster. At first, I was a bit nervous as right after the keynote, for this was my very first session and during my presentation I saw lots of Microsoft Product Team members. Well, it really went well and I had a really good discussion with attendees of the session. I felt that a well begin was half-done and my confidence was regained. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate few SQL Server Myths and their resolutions as I back them up with some demo. This demo presentation is a must-attend for all developers and administrators who would come to the event. This is going to be a very quick yet fun session. Pinal Presenting session at TechEd India 2010 Time: 1:00 PM – 2:00 PM Lunch with Somasegar After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank to Abhishek who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and had to talk with them on Microsoft Technology. Though Somasegar is currently holding such a high position in Microsoft, he is very polite and a real gentleman, and how I wish that everybody in industry is like him. Believe me, if you spread love and kindness, then that is what you will receive back. As soon as lunch time was over, I ran to the session hall as my second presentation was about to start. Time: 2:30pm – 3:30pm Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Business Intelligence is a subject which was widely talked about at TechEd. Everybody was interested in this subject, and I did not excuse myself from this great concept as well. I consider myself fortunate as I was presenting on the subject of Master Data Services at TechEd. When I had initially learned this subject, I had a bit of confusion about the usage of this tool. Later on, I decided that I would tackle about how we all developers and DBAs are not able to understand something so simple such as this, and even worst, creating confusion about the technology. During system designing, it is very important to have a reference material or master lookup tables. Well, I talked about the same subject and presented the session keeping that as my center talk. The session went very well and I received lots of interesting questions. I got many compliments for talking about this subject on the real-life scenario. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject. Pinal Presenting session at TechEd India 2010 The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. 
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS – Master Data-hub which is a vital component, helps ensure the consistency of reporting across systems and deliver faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Pinal Presenting session at TechEd India 2010 The day was still not over for me. I had ran into several friends but we were not able keep our enthusiasm under control about all the rumors saying that SQL Server 2008 R2 was about to be launched tomorrow in the keynote. I then ran to my third and final technical event for the day- a panel discussion with the top technologies of India. Time: 5:00pm – 6:00pm Panel Discussion: Harness the power of Web – SEO and Technical Blogging As I have delivered two technical sessions by this time, I was a bit tired but  not less enthusiastic when I had to talk about Blog and Technology. We discussed many different topics there. I told them that the most important aspect for any blog is its content. We discussed in depth the issues with plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and what morally and technically wrong acts are. A couple of questions were raised about what type of liberty a person can have in terms of writing blogs. Well, it was generically agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one is writing what they really want to say, but not providing incorrect information or not practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable and so it was extended for 30 minutes more. Finally, we decided to bring to a close the discussion and agreed that we will continue the topic next year. TechEd India Panel Discussion on Web, Technology and SEO Surprisingly, the day was just beginning after doing all of these. By this time, I have almost met all the MVP who arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided that I would go to meet several friends from the Community and continue to communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother’s tongue, Gujarati. It was funny that I talked in Gujarati almost all the day, but when I was talking in the interview I could not find the right Gujarati words to speak. I think we all think in English when we think about Technology, so as to address universality. After meeting them, I headed towards the Speakers’ Dinner. Time: 8:00 PM – onwards Speakers Dinner The Speakers’ dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked so many different things, from XBOX to Hindi Movies, and from SQL to Samosas. I just could not express how much fun I had. After a long evening, when I returned tmy room and met Shaivi, I just felt instantly relaxed. 
Kids are really gifts from God. Today was a really long but exciting day. So many things happened in just one day: Visual Studio Lanch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, community leaders meeting, speakers dinner and, last but not leas,t playing with my child! A perfect day! Day 2 – April 13, 2010 Today started with a bang with the excellent keynote by Kamal Hathi who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 Million Rows in Excel brought lots of applause from the audience. Kamal Hathi Presenting Keynote at TechEd India 2010 The day was a bit easier one for me. I had no sessions today and no events planned. I had a few meetings planned for the second day of the event. I sat in the speaker’s lounge for half a day and met many people there. I attended nearly 9 different meetings today. The subjects of the meetings were very different. Here is a list of the topics of the Community-related meetings: SQL PASS and its involvement in India and subcontinents How to start community blogging Forums and developing aptitude towards technology Ahmedabad/Gandhinagar User Groups and their developments SharePoint and SQL Business Meeting – a client meeting Business Meeting – a potential performance tuning project Business Meeting – Solid Quality Mentors (SolidQ) And family friends Pinal Dave at TechEd India The day passed by so quickly during this meeting. In the evening, I headed to Partners Expo with friends and checked out few of the booths. I really wanted to talk about some of the products, but due to the freebies there was so much crowd that I finally decided to just take the contact details of the partner. I will now start sending them with my queries and, hopefully, I will have my questions answered. Nupur and Shaivi had also one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves Technology and loves it more than anybody. I see him growing and learning every day, but still remaining as a ‘human’. I believe that if someone acquires as much knowledge as him, that person will become either a computer or cyborg. Here, Vijay is still a kind gentleman and is able to stay as our close family friend. Shaivi was really happy to play with Uncle Vijay. Pinal Dave and Vijay Raj Renuka Prasad, a Microsoft MVP, impressed me with his passion and knowledge of SQL. Every time he gives me credit for his success, I believe that he is very humble. He has way more certifications than me and has worked many more years with SQL compared to me. He is an excellent photographer as well. Most of the photos in this blog post have been taken by him. I told him if ever he wants to do a part time job, he can do the photography very well. Pinal Dave and Renuka Prasad I also met L Srividya from Microsoft, whom I was looking forward to meet. She is a bundle of knowledge that everyone would surely learn a lot from her. I was able to get a few minutes from her and well, I felt confident. She enlightened me with SQL Server BI concepts, domain management and SQL Server security and few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I figured out that the recent training he delivered was on SQL Server 2008 R2. 
I told him a joke that it hurts my ego as he is more popular now in SQL training and consulting than me. I am sure all of you agree that working with good people is a gift from God. I am fortunate enough to work with the best of the best Industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari,  Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!) .  (Please let me know if I have met you at the event and forgot your name to list here). Time: 8:00 PM – onwards Community Leaders Dinner After lots of meetings, I headed towards the Community Leaders dinner meeting and met almost all the folks I met in morning. The discussion was almost the same but the real good thing was that we were enjoying it. The food was really good. Nupur was invited in the event, but Shaivi could not come. When Nupur tried to enter the event, she was stopped as Shaivi did not have the pass to enter the dinner. Nupur expressed that Shaivi is only 8 months old and does not eat outside food as well and could not stay by herself at this age, but the door keeper did not agree and asked that without the entry details Shaivi could not go in, but Nupur could. Nupur called me on phone and asked me to help her out. By the time, I was outside; the organizer of the event reached to the door and happily approved Shaivi to join the party. Once in the party, Shaivi had lots of fun meeting so many people. Shaivi Dave and Abhishek Kant Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com) Day 3 – April 14, 2010 Though, it was last day, I was very much excited today as I was about to present my very favorite session. Query Optimization and Performance Tuning is my domain expertise and I make my leaving by consulting and training the same. Today’s session was on the same subject and as an additional twist, another subject about Spatial Database was presented. I was always intrigued with Spatial Database and I have enjoyed learning about it; however, I have never thought about Spatial Indexing before it was decided that I will do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and also review the content. Furthermore, today was really what I call my ‘learning day’ . So far I had not attended any session in TechEd and I felt a bit down for that. Everybody spends their valuable time & money to learn something new and exciting in TechEd and I had not attended a single session at the moment thinking that it was already last day of the event. I did have a plan for the day and I attended two technical sessions before my session of spatial database. I attended 2 sessions of Vinod Kumar. Vinod is a natural storyteller and there was no doubt that his sessions would be jam-packed. People attended his sessions simply because Vinod is syhe speaker. He did not have a single time disappointed audience; he is truly a good speaker. He knows his stuff very well. I personally do not think that in India he can be compared to anyone for SQL. Time: 12:30pm-1:30pm SQL Server Query Optimization, Execution and Debugging Query Performance I really had a fun time attending this session. Vinod made this session very interactive. The entire audience really got into the presentation and started participating in the event. 
Vinod was presenting a small problem with Query Tuning, which any developer would have encountered and solved with their help in such a fashion that a developer feels he or she have already resolved it. In one question, I was the only one who was ready to answer and Vinod told me in a light tone that I am now allowed to answer it! The audience really found it very amusing. There was a huge crowd around Vinod after the session. Vinod – A master storyteller! Time: 3:45pm-4:45pm Data Recovery / consistency with CheckDB This session was much heavier than the earlier one, and I must say this is my most favorite session I EVER attended in India. In this TechEd I have only attended two sessions, but in my career, I have attended numerous technical sessions not only in India, but all over the world. This session had taken my breath away. One by one, Vinod took the different databases, and started to corrupt them in different ways. Each database has some unique ways to get corrupted. Once that was done, Vinod started to show the DBCC CEHCKDB and demonstrated how it can solve your problem. He finally fixed all the databases with this single tool. I do have a good knowledge of this subject, but let me honestly admit that I have learned a lot from this session. I enjoyed and cheered during this session along with other attendees. I had total satisfaction that, just like everyone, I took advantage of the event and learned something. I am now TECHnically EDucated. Pinal Dave and Vinod Kumar After two very interactive and informative SQL Sessions from Vinod Kumar, the next turn me presenting on Spatial Database and Indexing. I got once again nervous but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better. Time: 5:00pm-6:00pm Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Pinal Presenting session at TechEd India 2010 Pinal Presenting session at TechEd India 2010 I kicked off this session with Michael J Swart‘s beautiful spatial image. This session was the last one for the day but, to my surprise, I had more than 200+ attendees. Slowly, the rain was starting outside and I was worried that the hall would not be full; despite this, there was not a single seat available in the first five minutes of the session. Thanks to all of you for attending my presentation. I had demonstrated the map of world (and India) and quickly explained what  Geographic and Geometry data types in Spatial Database are. This session had interesting story of Indexing and Comparison, as well as how different traditional indexes are from spatial indexing. Pinal Presenting session at TechEd India 2010 Due to the heavy rain during this event, the power went off for about 22 minutes (just an accident – nobodies fault). During these minutes, there were no audio, no video and no light. I continued to address the mass of 200+ people without any audio device and PowerPoint. I must thank the audience because not a single person left from the session. They all stayed in their place, some moved closure to listen to me properly. I noticed that the curiosity and eagerness to learn new things was at the peak even though it was the very last session of the TechEd. Everybody wanted get the maximum knowledge out of this whole event. I was touched by the support from audience. They listened and participated in my session even without any kinds of technology (no ppt, no mike, no AC, nothing). 
During these 22 minutes, I had completed my theory verbally. Pinal Presenting session at TechEd India 2010 After a while, we got the projector back online and we continued with some exciting demos. Many thanks to Microsoft people who worked energetically in background to get the backup power for project up. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the India Map and find their aerial distance between them. After finding the aerial distance, we browsed online and found that SQL Server estimates the exact aerial distance between these two cities, as compared to the factual distance. There was a huge applause from the crowd on the subject that SQL Server takes into the count of the curvature of the earth and finds the precise distances based on details. During the process of finding the distance, I demonstrated a few examples of the indexes where I expressed how one can use those indexes to find these distances and how they can improve the performance of similar query. I also demonstrated few examples wherein we were able to see in which data type the Index is most useful. We finished the demos with a few more internal stuff. Pinal Presenting session at TechEd India 2010 Despite all issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from Technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that for a moment, the rain was not audible. I was truly moved by the dedication of the Technology enthusiasts. Pinal Dave After Presenting session at TechEd India 2010 The abstract of the session is as follows: The Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications and much more. Time: 8:00 PM – onwards Dinner by Sponsors After the lively session during the day, there was another dinner party courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing in all angles. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event. Pinal Dave and Bijoy Singhal It was really one wonderful event. After writing this much, I say that I have no words to express about how much I enjoyed TechEd. However, it is true that I shared with you only 1% of the total activities I have done at the event. There were so many people I have met, yet were not mentioned here although I wanted to write their names here, too . 
Anyway, I have learned so many things and up until now, I am not able to get over all the fun I had in this event. Pinal Dave at TechEd India 2010 The Next Days – April 15, 2010 – till today I am still not able to get my mind out of the whole experience I had at TechEd India 2010. It was like a whole Microsoft Family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience! Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest Technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I attempted to attend almost all the technology events here, I have not seen any bigger or better event in Indian subcontinents other than this. There are 21 Technical Tracks at Tech·Ed India 2010 that span more than 745 learning opportunities. I was fortunate enough to be a part of this whole event as a speaker and a delegate, as well. TechEd India Speaker Badge and A Token of Lifetime Hotel Selection I presented three different sessions at TechEd India and was also a part of panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I stay away from my family occasionally. For this reason, I took my wife – Nupur and daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized so as to maximize my time bonding with my family and to have more time in networking with technology community, at the same time. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The cost of the hotel was a bit pricey, but looking at all the advantages, I had decided to ask for a booking there. Hotel Lalit Ashok Nupur Dave and Shaivi Dave Arrival Day – DAY 0 – April 11, 2010 I reached the event a day earlier, and that was one wise decision for I was able to relax a bit and go over my presentation for the next day’s course. I am a kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft who helped me out with a few of the logistics issues that occured the day before. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was really very kind as he talked to me regarding my subsequent session. He gave me some suggestions which were really helpful that I was able to incorporate them during my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, being extremely busy with logistics; however, in those busy times, he did find some good spare time to have a chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta,Vinod, Bijoy, Harish, Sachin and Ahishek (of course!). Map of TechEd India 2010 Event Day 1 – April 12, 2010 From morning until night time, today was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somaseger where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than- life uniform screen. This was truly one to show. 
The title music of the keynote was very interesting and it featured Bijoy Singhal as the model. It was interesting to talk to him afterwards, when we laughed at jokes together about his modeling assignment. TechEd India Keynote Opening Featuring Bijoy TechEd India 2010 Keynote – S. Somasegar Time: 11:15pm – 11:45pm Session 1: True Lies of SQL Server – SQL Myth Buster Following the excellent keynote, I had my very first session on the subject of SQL Server Myth Buster. At first, I was a bit nervous as right after the keynote, for this was my very first session and during my presentation I saw lots of Microsoft Product Team members. Well, it really went well and I had a really good discussion with attendees of the session. I felt that a well begin was half-done and my confidence was regained. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate few SQL Server Myths and their resolutions as I back them up with some demo. This demo presentation is a must-attend for all developers and administrators who would come to the event. This is going to be a very quick yet fun session. Pinal Presenting session at TechEd India 2010 Time: 1:00 PM – 2:00 PM Lunch with Somasegar After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank to Abhishek who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and had to talk with them on Microsoft Technology. Though Somasegar is currently holding such a high position in Microsoft, he is very polite and a real gentleman, and how I wish that everybody in industry is like him. Believe me, if you spread love and kindness, then that is what you will receive back. As soon as lunch time was over, I ran to the session hall as my second presentation was about to start. Time: 2:30pm – 3:30pm Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Business Intelligence is a subject which was widely talked about at TechEd. Everybody was interested in this subject, and I did not excuse myself from this great concept as well. I consider myself fortunate as I was presenting on the subject of Master Data Services at TechEd. When I had initially learned this subject, I had a bit of confusion about the usage of this tool. Later on, I decided that I would tackle about how we all developers and DBAs are not able to understand something so simple such as this, and even worst, creating confusion about the technology. During system designing, it is very important to have a reference material or master lookup tables. Well, I talked about the same subject and presented the session keeping that as my center talk. The session went very well and I received lots of interesting questions. I got many compliments for talking about this subject on the real-life scenario. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject. Pinal Presenting session at TechEd India 2010 The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. 
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS – Master Data-hub which is a vital component, helps ensure the consistency of reporting across systems and deliver faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Pinal Presenting session at TechEd India 2010 The day was still not over for me. I had ran into several friends but we were not able keep our enthusiasm under control about all the rumors saying that SQL Server 2008 R2 was about to be launched tomorrow in the keynote. I then ran to my third and final technical event for the day- a panel discussion with the top technologies of India. Time: 5:00pm – 6:00pm Panel Discussion: Harness the power of Web – SEO and Technical Blogging As I have delivered two technical sessions by this time, I was a bit tired but  not less enthusiastic when I had to talk about Blog and Technology. We discussed many different topics there. I told them that the most important aspect for any blog is its content. We discussed in depth the issues with plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and what morally and technically wrong acts are. A couple of questions were raised about what type of liberty a person can have in terms of writing blogs. Well, it was generically agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one is writing what they really want to say, but not providing incorrect information or not practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable and so it was extended for 30 minutes more. Finally, we decided to bring to a close the discussion and agreed that we will continue the topic next year. TechEd India Panel Discussion on Web, Technology and SEO Surprisingly, the day was just beginning after doing all of these. By this time, I have almost met all the MVP who arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided that I would go to meet several friends from the Community and continue to communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother’s tongue, Gujarati. It was funny that I talked in Gujarati almost all the day, but when I was talking in the interview I could not find the right Gujarati words to speak. I think we all think in English when we think about Technology, so as to address universality. After meeting them, I headed towards the Speakers’ Dinner. Time: 8:00 PM – onwards Speakers Dinner The Speakers’ dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked so many different things, from XBOX to Hindi Movies, and from SQL to Samosas. I just could not express how much fun I had. After a long evening, when I returned tmy room and met Shaivi, I just felt instantly relaxed. 
Kids are really gifts from God. Today was a really long but exciting day. So many things happened in just one day: Visual Studio Lanch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, community leaders meeting, speakers dinner and, last but not leas,t playing with my child! A perfect day! Day 2 – April 13, 2010 Today started with a bang with the excellent keynote by Kamal Hathi who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 Million Rows in Excel brought lots of applause from the audience. Kamal Hathi Presenting Keynote at TechEd India 2010 The day was a bit easier one for me. I had no sessions today and no events planned. I had a few meetings planned for the second day of the event. I sat in the speaker’s lounge for half a day and met many people there. I attended nearly 9 different meetings today. The subjects of the meetings were very different. Here is a list of the topics of the Community-related meetings: SQL PASS and its involvement in India and subcontinents How to start community blogging Forums and developing aptitude towards technology Ahmedabad/Gandhinagar User Groups and their developments SharePoint and SQL Business Meeting – a client meeting Business Meeting – a potential performance tuning project Business Meeting – Solid Quality Mentors (SolidQ) And family friends Pinal Dave at TechEd India The day passed by so quickly during this meeting. In the evening, I headed to Partners Expo with friends and checked out few of the booths. I really wanted to talk about some of the products, but due to the freebies there was so much crowd that I finally decided to just take the contact details of the partner. I will now start sending them with my queries and, hopefully, I will have my questions answered. Nupur and Shaivi had also one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves Technology and loves it more than anybody. I see him growing and learning every day, but still remaining as a ‘human’. I believe that if someone acquires as much knowledge as him, that person will become either a computer or cyborg. Here, Vijay is still a kind gentleman and is able to stay as our close family friend. Shaivi was really happy to play with Uncle Vijay. Pinal Dave and Vijay Raj Renuka Prasad, a Microsoft MVP, impressed me with his passion and knowledge of SQL. Every time he gives me credit for his success, I believe that he is very humble. He has way more certifications than me and has worked many more years with SQL compared to me. He is an excellent photographer as well. Most of the photos in this blog post have been taken by him. I told him if ever he wants to do a part time job, he can do the photography very well. Pinal Dave and Renuka Prasad I also met L Srividya from Microsoft, whom I was looking forward to meet. She is a bundle of knowledge that everyone would surely learn a lot from her. I was able to get a few minutes from her and well, I felt confident. She enlightened me with SQL Server BI concepts, domain management and SQL Server security and few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I figured out that the recent training he delivered was on SQL Server 2008 R2. 
I told him a joke that it hurts my ego as he is more popular now in SQL training and consulting than me. I am sure all of you agree that working with good people is a gift from God. I am fortunate enough to work with the best of the best Industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari,  Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!) .  (Please let me know if I have met you at the event and forgot your name to list here). Time: 8:00 PM – onwards Community Leaders Dinner After lots of meetings, I headed towards the Community Leaders dinner meeting and met almost all the folks I met in morning. The discussion was almost the same but the real good thing was that we were enjoying it. The food was really good. Nupur was invited in the event, but Shaivi could not come. When Nupur tried to enter the event, she was stopped as Shaivi did not have the pass to enter the dinner. Nupur expressed that Shaivi is only 8 months old and does not eat outside food as well and could not stay by herself at this age, but the door keeper did not agree and asked that without the entry details Shaivi could not go in, but Nupur could. Nupur called me on phone and asked me to help her out. By the time, I was outside; the organizer of the event reached to the door and happily approved Shaivi to join the party. Once in the party, Shaivi had lots of fun meeting so many people. Shaivi Dave and Abhishek Kant Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com) Day 3 – April 14, 2010 Though, it was last day, I was very much excited today as I was about to present my very favorite session. Query Optimization and Performance Tuning is my domain expertise and I make my leaving by consulting and training the same. Today’s session was on the same subject and as an additional twist, another subject about Spatial Database was presented. I was always intrigued with Spatial Database and I have enjoyed learning about it; however, I have never thought about Spatial Indexing before it was decided that I will do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and also review the content. Furthermore, today was really what I call my ‘learning day’ . So far I had not attended any session in TechEd and I felt a bit down for that. Everybody spends their valuable time & money to learn something new and exciting in TechEd and I had not attended a single session at the moment thinking that it was already last day of the event. I did have a plan for the day and I attended two technical sessions before my session of spatial database. I attended 2 sessions of Vinod Kumar. Vinod is a natural storyteller and there was no doubt that his sessions would be jam-packed. People attended his sessions simply because Vinod is syhe speaker. He did not have a single time disappointed audience; he is truly a good speaker. He knows his stuff very well. I personally do not think that in India he can be compared to anyone for SQL. Time: 12:30pm-1:30pm SQL Server Query Optimization, Execution and Debugging Query Performance I really had a fun time attending this session. Vinod made this session very interactive. The entire audience really got into the presentation and started participating in the event. 
Vinod was presenting a small problem with Query Tuning, which any developer would have encountered and solved with their help in such a fashion that a developer feels he or she have already resolved it. In one question, I was the only one who was ready to answer and Vinod told me in a light tone that I am now allowed to answer it! The audience really found it very amusing. There was a huge crowd around Vinod after the session. Vinod – A master storyteller! Time: 3:45pm-4:45pm Data Recovery / consistency with CheckDB This session was much heavier than the earlier one, and I must say this is my most favorite session I EVER attended in India. In this TechEd I have only attended two sessions, but in my career, I have attended numerous technical sessions not only in India, but all over the world. This session had taken my breath away. One by one, Vinod took the different databases, and started to corrupt them in different ways. Each database has some unique ways to get corrupted. Once that was done, Vinod started to show the DBCC CEHCKDB and demonstrated how it can solve your problem. He finally fixed all the databases with this single tool. I do have a good knowledge of this subject, but let me honestly admit that I have learned a lot from this session. I enjoyed and cheered during this session along with other attendees. I had total satisfaction that, just like everyone, I took advantage of the event and learned something. I am now TECHnically EDucated. Pinal Dave and Vinod Kumar After two very interactive and informative SQL Sessions from Vinod Kumar, the next turn me presenting on Spatial Database and Indexing. I got once again nervous but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better. Time: 5:00pm-6:00pm Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Pinal Presenting session at TechEd India 2010 Pinal Presenting session at TechEd India 2010 I kicked off this session with Michael J Swart‘s beautiful spatial image. This session was the last one for the day but, to my surprise, I had more than 200+ attendees. Slowly, the rain was starting outside and I was worried that the hall would not be full; despite this, there was not a single seat available in the first five minutes of the session. Thanks to all of you for attending my presentation. I had demonstrated the map of world (and India) and quickly explained what  Geographic and Geometry data types in Spatial Database are. This session had interesting story of Indexing and Comparison, as well as how different traditional indexes are from spatial indexing. Pinal Presenting session at TechEd India 2010 Due to the heavy rain during this event, the power went off for about 22 minutes (just an accident – nobodies fault). During these minutes, there were no audio, no video and no light. I continued to address the mass of 200+ people without any audio device and PowerPoint. I must thank the audience because not a single person left from the session. They all stayed in their place, some moved closure to listen to me properly. I noticed that the curiosity and eagerness to learn new things was at the peak even though it was the very last session of the TechEd. Everybody wanted get the maximum knowledge out of this whole event. I was touched by the support from audience. They listened and participated in my session even without any kinds of technology (no ppt, no mike, no AC, nothing). 
During these 22 minutes, I had completed my theory verbally. Pinal Presenting session at TechEd India 2010 After a while, we got the projector back online and we continued with some exciting demos. Many thanks to Microsoft people who worked energetically in background to get the backup power for project up. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the India Map and find their aerial distance between them. After finding the aerial distance, we browsed online and found that SQL Server estimates the exact aerial distance between these two cities, as compared to the factual distance. There was a huge applause from the crowd on the subject that SQL Server takes into the count of the curvature of the earth and finds the precise distances based on details. During the process of finding the distance, I demonstrated a few examples of the indexes where I expressed how one can use those indexes to find these distances and how they can improve the performance of similar query. I also demonstrated few examples wherein we were able to see in which data type the Index is most useful. We finished the demos with a few more internal stuff. Pinal Presenting session at TechEd India 2010 Despite all issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from Technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that for a moment, the rain was not audible. I was truly moved by the dedication of the Technology enthusiasts. Pinal Dave After Presenting session at TechEd India 2010 The abstract of the session is as follows: The Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications and much more. Time: 8:00 PM – onwards Dinner by Sponsors After the lively session during the day, there was another dinner party courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing in all angles. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event. Pinal Dave and Bijoy Singhal It was really one wonderful event. After writing this much, I say that I have no words to express about how much I enjoyed TechEd. However, it is true that I shared with you only 1% of the total activities I have done at the event. There were so many people I have met, yet were not mentioned here although I wanted to write their names here, too . 
Anyway, I learned so many things, and even now I am not able to get over all the fun I had at this event.

Pinal Dave at TechEd India 2010

The Next Days – April 15, 2010 – till today

I am still not able to get my mind off the whole experience I had at TechEd India 2010. It was like a whole Microsoft family working together to celebrate a happy occasion. TechEd India – truly an unforgettable experience!

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn
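Side note on the curvature-aware distance demo described in the report above: the post does not include the actual query, so the following is only an illustrative Python sketch. It uses the haversine (great-circle) approximation on a sphere, whereas SQL Server's geography type computes distances on an ellipsoid, and the city coordinates are rough, approximate values chosen for the example.

    # Illustrative sketch only: great-circle (haversine) distance between two
    # approximate city-centre coordinates. SQL Server's geography data type
    # performs a more precise, ellipsoid-based version of this calculation.
    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_KM = 6371.0  # mean Earth radius

    def haversine_km(lat1, lon1, lat2, lon2):
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
        return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

    bangalore = (12.97, 77.59)   # approximate latitude, longitude
    hyderabad = (17.38, 78.48)
    print(round(haversine_km(*bangalore, *hyderabad), 1), "km")

The result is the aerial (great-circle) distance; the point of the demo was that this figure accounts for the earth's curvature rather than treating the map as a flat plane.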

    Read the article

  • TCP RST right after FIN/ACK

    - by Nitzan Shaked
    I am having the weirdest issue: I have a web server which sometimes, only on very specific requests, will send a RST to the client after having sent the FIN segment.

    First, a description of the setup: The server runs on Ubuntu 12.04.1 LTS, which itself is a VM guest inside a Win7 x64 host, in bridged mode. ufw is disabled on the host. The client runs on an iOS simulator, which runs on OS X Mountain Lion, which is a VM guest (hackintosh) inside a Win7 x64 host, in bridged mode. Both client and server are on the same LAN; one is connected to the home router via an Ethernet cable, and the other through WiFi.

    I happened to glance over the server's http logs and found the client sometimes issuing multiple subsequent identical requests. Further investigation led me to discover that this happens when the server sends a RST, and that the client is simply re-trying. I am attaching several tcpdumps: Good1 is the server-side tcpdump of a good session ("good" meaning no RST was generated). Good3 is another server-side tcpdump of a good session. (The difference between Good1 and Good3 is the order in which ACKs were sent from the server to the client, ACK'ing the client's request. The client's request arrives in 2 segments: one for the http headers, and another for a body containing an empty json object, "{}". In Good1, the server ACKs both request segments, using 2 ACK segments, after the second request segment has arrived. In Good3, the server ACKs each request segment as soon as it arrives. Not that it should make a difference.) Bad1 is a dump, both client- and server-side, of a bad session. Bad2 is another bad session, this time server-side only.

    Note that in all "bad" sessions, the server ACKs each request segment immediately after having received it. I've looked at a few other bad sessions, and the situation is the same in all of them. But this is also the behavior in "Good3", so I don't see how that observation helps me, or for that matter why it should matter. I can't find any difference between good and bad sessions, or at least one that I think should matter.

    My question is: why are those RSTs being generated? Or at least: how do I go about debugging this, or providing more info here that will help?

    Edit: 2 new facts that I have learned:

    Section 4.2.2.13 of RFC 1122 (and Wikipedia, in the article "TCP", under "Connection Termination") says that a TCP application on one host may close the connection before it has read all of the data in its socket buffer, and in such a case the TCP on that host will send a RST to the other side, to let it know that not all the data it has sent has been read. I'm not sure I completely understand this, since closing my side of the connection still allows me to read, no? It also means that I can't write any more. I am not sure this is relevant, though, since I see a RST after FIN.

    There are multiple complaints of this happening with wsgiref (Python's dev-mode HTTP server), which is exactly what I'm using.

    I'll keep updating as I find out more. Thanks!
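    A minimal local reproduction sketch of the RFC 1122 4.2.2.13 behaviour mentioned in the edit (Python assumed; the host, port and byte counts are made-up values, not taken from the question): a listener reads only part of the client's request, replies, and closes while unread bytes remain in its receive buffer, which is the case where the kernel answers with a RST rather than a clean FIN-only teardown. Observe it with something like: tcpdump -i lo -nn tcp port 5000

    # Hypothetical experiment, not the original server.
    import socket
    import threading
    import time

    def server():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", 5000))
        srv.listen(1)
        conn, _ = srv.accept()
        conn.recv(16)                              # read only the first 16 bytes of the request
        conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nok")
        conn.close()                               # close with bytes still unread -> RST
        srv.close()

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 5000))
    cli.sendall(b"POST / HTTP/1.0\r\nContent-Length: 2\r\n\r\n{}")  # longer than the 16 bytes read
    time.sleep(0.5)
    try:
        print(cli.recv(4096))                      # may print the reply...
    except ConnectionResetError:
        print("connection reset by peer")          # ...or fail, if the RST already discarded it
    cli.close()

    Whether the client sees the reply or a reset depends on timing, which is consistent with the retry behaviour described above. The dumps referenced in the question follow.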
~~~~~~~~~~~~~~~~~~~~ Good1 -- Server Side ~~~~~~~~~~~~~~~~~~~~
13:28:02.308319 IP 192.168.1.51.51479 > 192.168.1.132.5000: Flags [S], seq 94268074, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 943308864 ecr 0,sackOK,eol], length 0
13:28:02.308336 IP 192.168.1.132.5000 > 192.168.1.51.51479: Flags [S.], seq 1726304574, ack 94268075, win 14480, options [mss 1460,sackOK,TS val 326480982 ecr 943308864,nop,wscale 3], length 0
13:28:02.309750 IP 192.168.1.51.51479 > 192.168.1.132.5000: Flags [.], ack 1, win 8235, options [nop,nop,TS val 943308865 ecr 326480982], length 0
13:28:02.310744 IP 192.168.1.51.51479 > 192.168.1.132.5000: Flags [P.], seq 1:351, ack 1, win 8235, options [nop,nop,TS val 943308865 ecr 326480982], length 350
13:28:02.310766 IP 192.168.1.51.51479 > 192.168.1.132.5000: Flags [P.], seq 351:353, ack 1, win 8235, options [nop,nop,TS val 943308865 ecr 326480982], length 2
13:28:02.310841 IP 192.168.1.132.5000 > 192.168.1.51.51479: Flags [.], ack 351, win 1944, options [nop,nop,TS val 326480983 ecr 943308865], length 0
13:28:02.310918 IP 192.168.1.132.5000 > 192.168.1.51.51479: Flags [.], ack 353, win 1944, options [nop,nop,TS val 326480983 ecr 943308865], length 0
13:28:02.315931 IP 192.168.1.132.5000 > 192.168.1.51.51479: Flags [P.], seq 1:18, ack 353, win 1944, options [nop,nop,TS val 326480984 ecr 943308865], length 17
13:28:02.316107 IP 192.168.1.132.5000 > 192.168.1.51.51479: Flags [FP.], seq 18:684, ack 353, win 1944, options [nop,nop,TS val 326480984 ecr 943308865], length 666
13:28:02.317651 IP 192.168.1.51.51479 > 192.168.1.132.5000: Flags [.], ack 18, win 8234, options [nop,nop,TS val 943308872 ecr 326480984], length 0
13:28:02.318288 IP 192.168.1.51.51479 > 192.168.1.132.5000: Flags [.], ack 685, win 8192, options [nop,nop,TS val 943308872 ecr 326480984], length 0
13:28:02.318640 IP 192.168.1.51.51479 > 192.168.1.132.5000: Flags [F.], seq 353, ack 685, win 8192, options [nop,nop,TS val 943308872 ecr 326480984], length 0
13:28:02.318651 IP 192.168.1.132.5000 > 192.168.1.51.51479: Flags [.], ack 354, win 1944, options [nop,nop,TS val 326480985 ecr 943308872], length 0

~~~~~~~~~~~~~~~~~~~~ Good3 -- Server Side ~~~~~~~~~~~~~~~~~~~~
13:28:03.311143 IP 192.168.1.51.51486 > 192.168.1.132.5000: Flags [S], seq 1982901126, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 943309853 ecr 0,sackOK,eol], length 0
13:28:03.311155 IP 192.168.1.132.5000 > 192.168.1.51.51486: Flags [S.], seq 2245063571, ack 1982901127, win 14480, options [mss 1460,sackOK,TS val 326481233 ecr 943309853,nop,wscale 3], length 0
13:28:03.312671 IP 192.168.1.51.51486 > 192.168.1.132.5000: Flags [.], ack 1, win 8235, options [nop,nop,TS val 943309854 ecr 326481233], length 0
13:28:03.313330 IP 192.168.1.51.51486 > 192.168.1.132.5000: Flags [P.], seq 1:351, ack 1, win 8235, options [nop,nop,TS val 943309855 ecr 326481233], length 350
13:28:03.313337 IP 192.168.1.132.5000 > 192.168.1.51.51486: Flags [.], ack 351, win 1944, options [nop,nop,TS val 326481234 ecr 943309855], length 0
13:28:03.313342 IP 192.168.1.51.51486 > 192.168.1.132.5000: Flags [P.], seq 351:353, ack 1, win 8235, options [nop,nop,TS val 943309855 ecr 326481233], length 2
13:28:03.313346 IP 192.168.1.132.5000 > 192.168.1.51.51486: Flags [.], ack 353, win 1944, options [nop,nop,TS val 326481234 ecr 943309855], length 0
13:28:03.327942 IP 192.168.1.132.5000 > 192.168.1.51.51486: Flags [P.], seq 1:18, ack 353, win 1944, options [nop,nop,TS val 326481237 ecr 943309855], length 17
13:28:03.328253 IP 192.168.1.132.5000 > 192.168.1.51.51486: Flags [FP.], seq 18:684, ack 353, win 1944, options [nop,nop,TS val 326481237 ecr 943309855], length 666
13:28:03.329076 IP 192.168.1.51.51486 > 192.168.1.132.5000: Flags [.], ack 18, win 8234, options [nop,nop,TS val 943309868 ecr 326481237], length 0
13:28:03.329688 IP 192.168.1.51.51486 > 192.168.1.132.5000: Flags [.], ack 685, win 8192, options [nop,nop,TS val 943309868 ecr 326481237], length 0
13:28:03.330361 IP 192.168.1.51.51486 > 192.168.1.132.5000: Flags [F.], seq 353, ack 685, win 8192, options [nop,nop,TS val 943309869 ecr 326481237], length 0
13:28:03.330370 IP 192.168.1.132.5000 > 192.168.1.51.51486: Flags [.], ack 354, win 1944, options [nop,nop,TS val 326481238 ecr 943309869], length 0

~~~~~~~~~~~~~~~~~~~~ Bad1 -- Server Side ~~~~~~~~~~~~~~~~~~~~
13:28:01.311876 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [S], seq 920400580, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 943307883 ecr 0,sackOK,eol], length 0
13:28:01.311896 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [S.], seq 3103085782, ack 920400581, win 14480, options [mss 1460,sackOK,TS val 326480733 ecr 943307883,nop,wscale 3], length 0
13:28:01.313509 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [.], ack 1, win 8235, options [nop,nop,TS val 943307884 ecr 326480733], length 0
13:28:01.315614 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [P.], seq 1:351, ack 1, win 8235, options [nop,nop,TS val 943307886 ecr 326480733], length 350
13:28:01.315727 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [.], ack 351, win 1944, options [nop,nop,TS val 326480734 ecr 943307886], length 0
13:28:01.316229 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [P.], seq 351:353, ack 1, win 8235, options [nop,nop,TS val 943307886 ecr 326480733], length 2
13:28:01.316242 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [.], ack 353, win 1944, options [nop,nop,TS val 326480734 ecr 943307886], length 0
13:28:01.321019 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [P.], seq 1:18, ack 353, win 1944, options [nop,nop,TS val 326480735 ecr 943307886], length 17
13:28:01.321294 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [FP.], seq 18:684, ack 353, win 1944, options [nop,nop,TS val 326480736 ecr 943307886], length 666
13:28:01.321386 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [R.], seq 685, ack 353, win 1944, options [nop,nop,TS val 326480736 ecr 943307886], length 0
13:28:01.322727 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [.], ack 18, win 8234, options [nop,nop,TS val 943307891 ecr 326480735], length 0
13:28:01.322733 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [R], seq 3103085800, win 0, length 0
13:28:01.323221 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [.], ack 685, win 8192, options [nop,nop,TS val 943307892 ecr 326480736], length 0
13:28:01.323231 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [R], seq 3103086467, win 0, length 0

~~~~~~~~~~~~~~~~~~~~ Bad1 -- Client Side ~~~~~~~~~~~~~~~~~~~~
13:28:11.374654 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [S], seq 920400580, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 943307883 ecr 0,sackOK,eol], length 0
13:28:11.375764 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [S.], seq 3103085782, ack 920400581, win 14480, options [mss 1460,sackOK,TS val 326480733 ecr 943307883,nop,wscale 3], length 0
13:28:11.376352 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [.], ack 1, win 8235, options [nop,nop,TS val 943307884 ecr 326480733], length 0
13:28:11.378252 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [P.], seq 1:351, ack 1, win 8235, options [nop,nop,TS val 943307886 ecr 326480733], length 350
13:28:11.379027 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [P.], seq 351:353, ack 1, win 8235, options [nop,nop,TS val 943307886 ecr 326480733], length 2
13:28:11.379732 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [.], ack 351, win 1944, options [nop,nop,TS val 326480734 ecr 943307886], length 0
13:28:11.380592 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [.], ack 353, win 1944, options [nop,nop,TS val 326480734 ecr 943307886], length 0
13:28:11.384968 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [P.], seq 1:18, ack 353, win 1944, options [nop,nop,TS val 326480735 ecr 943307886], length 17
13:28:11.385044 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [.], ack 18, win 8234, options [nop,nop,TS val 943307891 ecr 326480735], length 0
13:28:11.385586 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [FP.], seq 18:684, ack 353, win 1944, options [nop,nop,TS val 326480736 ecr 943307886], length 666
13:28:11.385743 IP 192.168.1.51.51472 > 192.168.1.132.5000: Flags [.], ack 685, win 8192, options [nop,nop,TS val 943307892 ecr 326480736], length 0
13:28:11.385966 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [R.], seq 685, ack 353, win 1944, options [nop,nop,TS val 326480736 ecr 943307886], length 0
13:28:11.387343 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [R], seq 3103085800, win 0, length 0
13:28:11.387344 IP 192.168.1.132.5000 > 192.168.1.51.51472: Flags [R], seq 3103086467, win 0, length 0

~~~~~~~~~~~~~~~~~~~~ Bad2 -- Server Side ~~~~~~~~~~~~~~~~~~~~
13:28:01.319185 IP 192.168.1.51.51473 > 192.168.1.132.5000: Flags [S], seq 1631526992, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 943307889 ecr 0,sackOK,eol], length 0
13:28:01.319197 IP 192.168.1.132.5000 > 192.168.1.51.51473: Flags [S.], seq 2524685719, ack 1631526993, win 14480, options [mss 1460,sackOK,TS val 326480735 ecr 943307889,nop,wscale 3], length 0
13:28:01.320692 IP 192.168.1.51.51473 > 192.168.1.132.5000: Flags [.], ack 1, win 8235, options [nop,nop,TS val 943307890 ecr 326480735], length 0
13:28:01.322219 IP 192.168.1.51.51473 > 192.168.1.132.5000: Flags [P.], seq 1:351, ack 1, win 8235, options [nop,nop,TS val 943307890 ecr 326480735], length 350
13:28:01.322336 IP 192.168.1.132.5000 > 192.168.1.51.51473: Flags [.], ack 351, win 1944, options [nop,nop,TS val 326480736 ecr 943307890], length 0
13:28:01.322689 IP 192.168.1.51.51473 > 192.168.1.132.5000: Flags [P.], seq 351:353, ack 1, win 8235, options [nop,nop,TS val 943307890 ecr 326480735], length 2
13:28:01.322700 IP 192.168.1.132.5000 > 192.168.1.51.51473: Flags [.], ack 353, win 1944, options [nop,nop,TS val 326480736 ecr 943307890], length 0
13:28:01.326307 IP 192.168.1.132.5000 > 192.168.1.51.51473: Flags [P.], seq 1:18, ack 353, win 1944, options [nop,nop,TS val 326480737 ecr 943307890], length 17
13:28:01.326614 IP 192.168.1.132.5000 > 192.168.1.51.51473: Flags [FP.], seq 18:684, ack 353, win 1944, options [nop,nop,TS val 326480737 ecr 943307890], length 666
13:28:01.326710 IP 192.168.1.132.5000 > 192.168.1.51.51473: Flags [R.], seq 685, ack 353, win 1944, options [nop,nop,TS val 326480737 ecr 943307890], length 0
13:28:01.328499 IP 192.168.1.51.51473 > 192.168.1.132.5000: Flags [.], ack 18, win 8234, options [nop,nop,TS val 943307896 ecr 326480737], length 0
13:28:01.328509 IP 192.168.1.132.5000 > 192.168.1.51.51473: Flags [R], seq 2524685737, win 0, length 0
13:28:01.328514 IP 192.168.1.51.51473 > 192.168.1.132.5000: Flags [.], ack 685, win 8192, options [nop,nop,TS val 943307896 ecr 326480737], length 0
13:28:01.328517 IP 192.168.1.132.5000 > 192.168.1.51.51473: Flags [R], seq 2524686404, win 0, length 0
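    If the wsgiref suspicion from the edit holds, the usual trigger is a WSGI application that never reads the request body, so the small "{}" payload is still sitting unread in the socket buffer when the connection is closed, producing exactly the RST-after-FIN pattern in the Bad dumps. A hedged workaround sketch follows (the application code from the question is not shown, so app and the port are made-up names): drain the request body before responding.

    # Hedged workaround sketch, assuming the server is wsgiref as the question suggests.
    from wsgiref.simple_server import make_server

    def app(environ, start_response):
        # Read exactly Content-Length bytes (here the client's small JSON body),
        # even if the handler does not otherwise need it, so nothing is left
        # unread when the connection is torn down.
        try:
            length = int(environ.get("CONTENT_LENGTH") or 0)
        except ValueError:
            length = 0
        _body = environ["wsgi.input"].read(length) if length else b""

        start_response("200 OK", [("Content-Type", "application/json")])
        return [b"{}"]

    if __name__ == "__main__":
        make_server("", 5000, app).serve_forever()

    Switching from wsgiref to a production WSGI server is the other common remedy; the sketch only illustrates the drain-the-body idea.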

    Read the article

< Previous Page | 41 42 43 44 45 46  | Next Page >