Search Results


  • Some Power Supply Cable Questions?

    - by jasondavis
    I am building a new PC and I haven't done this in a few years. It will have all the latest tech stuff. I got my PSU in the mail and I am looking over the cables (it has a lot of them). I thought it was a modular PSU so I could use only the cables I need, but instead it is a hybrid (some wires are attached and some can be added/removed, instead of all of them being removable).
    1) So I am curious: I believe all my hard drives and optical drives are powered off SATA power cables, so does that mean I probably do not need any of the 4-pin Molex cables? Or are those used for other things?
    2) I know the 24-pin cable goes to my motherboard.
    3) I have some 6-pin cables labeled PCI-E, which is new to me. I read these are for some graphics cards. I have 2 graphics cards, but they do not require a separate PCI-E power wire to be hooked up to them. So are these PCI-E wires just for powering PCI Express cards, or for other things as well?
    4) I have a 4-pin ATX 12V wire; what is this for?
    5) What is the 8-pin EPS connector for?

    Read the article

  • What software will tell me if I've already downloaded a video? [closed]

    - by dave
    (I use Linux KNOPPIX (distro 7.0.2, ver 3.3.7) on my hard drive.) I download videos of TV programs from the 60s and 70s (mainly from YouTube). I copy the YouTube URL, then paste it into www.keepvid.com to download the video (usually in .mp4 format). Having now got dozens of such video files (and growing) on my hard drive, I'd like to organise them. WHAT I'D LIKE TO DO IS: Say I find a new vid (of a TV prog) on YouTube (or another site), and I'm about to download it. It's possible that I've WATCHED IT BEFORE but have forgotten. So is there software out there that I can run which will check for me whether I've ALREADY WATCHED the vid that I'm about to download? At the moment, once I've watched a vid on my hard drive, I move the file to another directory called "Watched", but this of course doesn't alert me in the immediate way that I want. It would crudely suffice if the software told me AFTER I've downloaded the vid whether I've ALREADY WATCHED it (i.e. whether it's already in my "Watched" directory, or perhaps in a "watched" list). I sometimes alter the filename of the original video file on the hard drive, so this might spoil a comparison. If the software alerts me to the fact that I've already watched the vid (preferably BEFORE I download it), then this will allow me to confidently download only new vids that I haven't watched before, and save me duplicating my effort. I'd be most grateful if anyone can suggest such a piece of software, or an alternative solution. I'll be honest, I avoid software that infringes your privacy and control - you know, software that automatically does things behind your back, like upgrading itself over the internet, putting things on your hard drive that you didn't ask for, or sending information from your hard drive to websites.
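    One way this could be scripted (not something the question itself provides, just a rough sketch): most YouTube downloads keep the 11-character video ID in the URL, so matching on that ID against the "Watched" directory survives the occasional filename change. The directory path below is made up, and the sketch assumes the ID is still present somewhere in the watched filenames.

```typescript
// check_watched.ts - rough sketch; run as: ts-node check_watched.ts <youtube-url>
// Assumptions (not from the question): watched files still contain the
// 11-character YouTube ID in their names, and WATCHED_DIR is a made-up path.
import * as fs from "fs";

const WATCHED_DIR = "/home/dave/Videos/Watched"; // hypothetical location

// Pull the video ID out of a YouTube URL (watch?v=... or youtu.be/... forms).
function videoIdFromUrl(url: string): string | null {
  const m = url.match(/(?:v=|youtu\.be\/)([A-Za-z0-9_-]{11})/);
  return m ? m[1] : null;
}

// "Already watched" here just means: some filename in WATCHED_DIR contains the ID.
function alreadyWatched(url: string): boolean {
  const id = videoIdFromUrl(url);
  if (!id) return false; // can't tell without an ID
  return fs.readdirSync(WATCHED_DIR).some((name) => name.includes(id));
}

const url = process.argv[2];
if (!url) {
  console.error("usage: ts-node check_watched.ts <youtube-url>");
  process.exit(1);
}
console.log(alreadyWatched(url) ? "Already watched" : "Not seen before");
```

    If the ID has been stripped from a renamed file, the check simply reports "Not seen before", so it errs on the side of downloading rather than skipping.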

    Read the article

  • Renaming VLAN Interfaces in Linux

    - by rhololkeolke
    I need to know how to rename VLAN interfaces. I'm currently running Ubuntu 11.04. I'm running a networking application that takes frames on one interface, applies things like delays and errors, and then forwards the frames out another interface. The default naming convention, which names things <interface>.<vlan> (e.g. eth0.2), will not work for my purposes because the program which parses the configuration script for the networking application doesn't like the decimal in the interface name. I ran vconfig set_name_type VLAN_PLUS_VID, which solves the decimal-in-the-name problem; however, I can then no longer assign the same VLAN ID to multiple interfaces because they end up with the same name. I know how to change physical interface names using udev rules, but because the VLANs will have the same MAC address and they aren't physical interfaces, I can't use those rules to rename them. Is there a way to rename any interface in Linux, including the virtual ones? Is there a way to specify your own naming convention for the vconfig set_name_type option without having to recompile the source of vconfig?
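    Not part of the question, but for illustration: on a reasonably recent iproute2, `ip link add ... type vlan` lets you pick an arbitrary interface name at creation time, which avoids both the decimal and the name-collision problems without touching vconfig. A minimal sketch follows; the interface names are invented, it needs root, and the two `ip` invocations in the comments are the actual commands (the TypeScript around them is only a thin wrapper).

```typescript
// vlan_names.ts - sketch only; equivalent to running, as root:
//   ip link add link eth0 name lan_a2 type vlan id 2
//   ip link add link eth1 name lan_b2 type vlan id 2
import { execFileSync } from "child_process";

// Create a VLAN sub-interface on `parent` with a caller-chosen name.
function addVlan(parent: string, vlanId: number, name: string): void {
  execFileSync("ip", [
    "link", "add", "link", parent,
    "name", name,
    "type", "vlan", "id", String(vlanId),
  ]);
  execFileSync("ip", ["link", "set", name, "up"]);
}

// Same VLAN ID on two parents, with distinct, decimal-free names (made up).
addVlan("eth0", 2, "lan_a2");
addVlan("eth1", 2, "lan_b2");
```

    Because the name is chosen when the interface is created rather than derived from a template, the shared MAC address that defeats the udev approach is not an issue here.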

    Read the article

  • Strange focus bug in Firefox (chrome vs content)

    - by Marius
    Here is a strange bug I'm experiencing in Firefox: I can only use either the chrome or the content, not both at the same time! For example, I can click on tabs and the toolbar icons, and focus and type in the search bar as well as the address bar, but if I try to click on anything in the content (e.g. a link or a text field to write something), nothing happens. The mouse pointer doesn't change either; it just stays a pointer when I hover over things, and the links I hover over don't react. But if I alt-tab to another program (or click on it in the taskbar) and then back to Firefox, I can use the area that I click on. So if I click somewhere on the webpage to get focus back to Firefox, then I can click on links and write things (like this text), but I cannot click on tabs, refresh, or anything else in the chrome. I can't even click on the minimize, restore and close icons! To get focus back on the chrome I have to alt-tab to another program, and then click on the chrome to get back to Firefox and be able to use the chrome again. I've tried closing Firefox and starting it again, but the bug is still there. I have experienced this before, but I don't remember what I did to fix it. The bug seems to occur sometimes when I wake the computer from standby, but I leave my computer in standby all the time, so that is not the only factor.

    Read the article

  • What is the potential for a FUSE mount to destabilize a Linux server?

    - by 200_success
    I'm a sysadmin for a multi-user server, where students in our department have shell accounts. One of our users has requested that we install sshfs on it. I'm debating whether it would be wise to install sshfs as suggested. My main concern is whether a FUSE mount could make our server less reliable. In my experience, bad things can happen to servers when an NFS server suddenly becomes unavailable — the load average shoots up, and you might not be able to unmount it cleanly, to the point where a hard reboot might be necessary. If a FUSE-mounted server suddenly disappears, how hard might it be to clean up the mess? Are there any other likely catastrophes or gotchas I should consider? At least with NFS, only root can mount, and we can choose to mount NFS servers that we consider to be reasonably reliable. Let's assume that our users have no hostile intentions, but might do stupid things accidentally. Also, I'm not really worried about the contents of the filesystems they might mount, since our users already have shell access and can copy anything they want to their home directory.

    Read the article

  • Anyone have ideas for solving the "n items remaining" problem on Internet Explorer?

    - by CMPalmer
    In my ASP.NET app, which is JavaScript and jQuery heavy but also uses master pages and .NET Ajax pieces, I am consistently seeing on the status bar of IE 6 (and occasionally IE 7) the message "2 items remaining" or "15 items remaining", followed by "loading somegraphicsfile.png|gif". This message never goes away and may or may not prevent some page functionality from running (the page certainly seems to bog down, but I'm not positive). I can cause this to happen 99% of the time by just refreshing an .aspx page, but the number of items and, sometimes, the file it mentions varies. Usually it is 2, 3, 12, 13, or 15. I've Googled for answers and there are several suggestions or explanations. Some of them haven't worked for us, and others aren't practical for us to implement or try. Here are some of the ideas/theories:
    - IE isn't caching images right, so it repeatedly asks for the same image when the image is repeated on the page, while the server assumes it should be cached locally since it has already served it in that page context.
    - IE displays the images correctly, but sits and waits for a server response that never comes. Typically the file it says it is waiting on is repeated on the page.
    - The page is using PNG graphics with transparency. Indeed it is, but they are jQuery UI Themeroller-generated graphics which, according to the jQuery UI folks, are IE safe. The jQuery UI components are the only things using PNGs. All of our PNG references are in CSS, if that helps. I've changed some of the graphics from PNG to GIF, but it is just as likely to say it's waiting for somegraphicsfile.png as for somegraphicsfile.gif.
    - Images are being specified in CSS and/or JavaScript but are on things that aren't currently being displayed (display: none items, for example). This may be true, but if it were, I would think preloading images would work; so far, adding a preloader doesn't do any good.
    - IIS's caching policy is confusing the browser. If this is true, it is only Microsoft server software having problems with Microsoft's browser (which doesn't surprise me at all). Unfortunately, I don't have much control over the IIS configuration that will be hosting the app.
    Has anyone seen this and found a way to combat it? Particularly on ASP.NET apps with jQuery and jQuery UI?
    UPDATE One other data point: on at least one of the pages, just commenting out the jQuery UI Datepicker setup causes the problem to go away, but I don't think (or at least I'm not sure) that fixes all of the pages. If it does "fix" them, I'll have to swap out plug-ins, because that functionality needs to be there. There don't seem to be any open issues against jQuery UI on IE6/7 currently...
    UPDATE 2 I checked the IIS settings and "enable content expiration" was not set on any of my folders. Unchecking that setting was a common suggestion for fixing this problem. I have another, simpler page on which I can consistently reproduce the error. I'm using the jQuery UI 1.6rc6 file (although I've also tried jQuery UI 1.7.1 with the same results). The problem only occurs when I refresh the page that contains the jQuery UI Datepicker. If I comment out the Datepicker setup, the problem goes away. Here are a few things I notice when I do this:
    - This page always says "(1 item remaining) Downloading picture http:///images/Calendar_scheduleHS.gif", but only when reloading. When I look at HTTP logging, I see that it requests that image from the server every time it is dynamically turned on, without regard to caching.
    - All of the requests for that graphic complete and return the graphic correctly. None are marked code 200 or 304 (indicating that the server is telling IE to use the cached version). Why it says it is waiting on that graphic when all of the requests have completed, I have no idea.
    - There is a single other graphic on the page (one of the UI PNG files) that has a code 304 (Not Modified). On another page where I managed to log HTTP traffic with "2 items remaining", two different graphics files (both UI PNGs) had a 304 as well (but neither was the one listed as "Downloading").
    - This error is not innocuous - the page is not fully responsive. For example, if I click on one of the buttons which should execute a client-side action, the page refreshes.
    - Going away from the page and coming back does not produce the error.
    - I have moved the script and script references to the bottom of the content, and this doesn't affect the problem. The script still runs in $(document).ready(), though (it's too hairy to divide out unless I absolutely have to).
    FINAL UPDATE AND ANSWER There were a lot of good answers and suggestions below, but none of them were exactly our problem. The closest one (and the one that led me to the solution) was the one about long-running JavaScript, so I awarded the bounty there (I guess I could have answered it myself, but I'd rather reward info that leads to solutions). Here was our solution: We had multiple jQuery UI datepickers that were created on the $(document).ready event in script included from the ASP.NET master page. On this client page, a local script's $(document).ready event had code that destroyed the datepickers under certain conditions. We had to use "destroy" because the previous version of Datepicker had a problem with "disable". When we upgraded to the latest version of jQuery UI (1.7.1) and replaced the "destroy"s with "disable"s for the datepickers, the problem went away (or mostly went away - if you do things too fast while the page is loading, it is still possible to get the "n items remaining" status). My theory as to what was happening goes like this:
    1. The page content loads and has 12 or so text boxes with the datepicker class.
    2. The master page script creates datepickers on those text boxes.
    3. IE queues up requests for each calendar graphic independently, because IE doesn't know how to properly cache dynamic image requests.
    4. Before the requests get processed, the client page script destroys those datepickers, so the graphics are no longer needed.
    5. IE is left with some number of orphaned requests that it doesn't know what to do with.
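    For concreteness, here is a sketch of the change described in the final update. The selector and the surrounding condition are invented, and it assumes jQuery plus jQuery UI 1.7.1 are already loaded and that the master page has already initialized the datepickers, as in the question. It is written in TypeScript; with `$` declared loosely it is plain jQuery code.

```typescript
// datepicker_fix.ts - illustrative only; the ".datepicker" selector and the
// readOnlyMode flag are made up. The point is "disable" versus "destroy".
declare const $: any; // jQuery + jQuery UI 1.7.1 assumed to be loaded on the page

$(function () {
  const readOnlyMode = true; // hypothetical page-specific condition

  if (readOnlyMode) {
    // Before: $(".datepicker").datepicker("destroy");
    // Destroying widgets the master page had just built left IE waiting on the
    // calendar images it had already queued ("n items remaining").
    $(".datepicker").datepicker("disable"); // the replacement described above
  }
});
```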

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest technology events in India, led by Microsoft. It was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I have attempted to attend almost all the technology events here, I have not seen any bigger or better event in the Indian subcontinent than this one. There were 21 technical tracks at Tech·Ed India 2010 that spanned more than 745 learning opportunities. I was fortunate enough to be a part of this whole event, as a speaker and a delegate as well.
TechEd India Speaker Badge and A Token of Lifetime
Hotel Selection
I presented three different sessions at TechEd India and was also a part of a panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I stay away from my family occasionally. For this reason, I took my wife, Nupur, and my daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized, so as to maximize my time bonding with my family and, at the same time, have more time for networking with the technology community. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The hotel was a bit pricey, but looking at all the advantages, I decided to book there.
Hotel Lalit Ashok
Nupur Dave and Shaivi Dave
Arrival Day – DAY 0 – April 11, 2010
I reached the event a day earlier, and that was one wise decision, for I was able to relax a bit and go over my presentation for the next day's course. I am the kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft, who helped me out with a few of the logistics issues that occurred the day before. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was really very kind as he talked to me regarding my subsequent session. He gave me some suggestions which were so helpful that I was able to incorporate them during my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, extremely busy with logistics; however, in those busy times, he did find some spare time to have a chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta, Vinod, Bijoy, Harish, Sachin and Abhishek (of course!).
Map of TechEd India 2010 Event
Day 1 – April 12, 2010
From morning until night, today was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somasegar, where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than-life uniform screen. This was truly one to show.
The title music of the keynote was very interesting, and it featured Bijoy Singhal as the model. It was fun to talk to him afterwards, when we laughed together at jokes about his modeling assignment.
TechEd India Keynote Opening Featuring Bijoy
TechEd India 2010 Keynote – S. Somasegar
Time: 11:15am – 11:45am
Session 1: True Lies of SQL Server – SQL Myth Buster
Following the excellent keynote, I had my very first session, on the subject of SQL Server myth busting. At first I was a bit nervous: this was my very first session, it came right after the keynote, and during my presentation I saw lots of Microsoft product team members. Well, it went really well and I had a really good discussion with the attendees of the session. I felt that well begun was half done, and my confidence was regained. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate a few SQL Server myths and their resolutions, backing them up with demos. This demo presentation is a must-attend for all developers and administrators who come to the event. It is going to be a very quick yet fun session.
Pinal Presenting session at TechEd India 2010
Time: 1:00 PM – 2:00 PM
Lunch with Somasegar
After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank Abhishek, who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and to talk with him about Microsoft technology. Though Somasegar holds such a high position at Microsoft, he is very polite and a real gentleman; how I wish that everybody in the industry were like him. Believe me, if you spread love and kindness, that is what you will receive back. As soon as lunch was over, I ran to the session hall, as my second presentation was about to start.
Time: 2:30pm – 3:30pm
Session 2: Master Data Services in Microsoft SQL Server 2008 R2
Business Intelligence was a subject widely talked about at TechEd. Everybody was interested in it, and I did not excuse myself from this great concept either. I consider myself fortunate that I was presenting on the subject of Master Data Services at TechEd. When I had initially learned this subject, I had a bit of confusion about the usage of this tool. Later on, I decided that I would talk about how we developers and DBAs are sometimes not able to understand something as simple as this and, even worse, end up creating confusion about the technology. During system design, it is very important to have reference material or master lookup tables. Well, I talked about that subject and presented the session keeping it as the center of my talk. The session went very well and I received lots of interesting questions. I got many compliments for presenting this subject through real-life scenarios. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject.
Pinal Presenting session at TechEd India 2010
The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft's platform appeal.
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables a consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS, the Master Data hub, which is a vital component, helps ensure the consistency of reporting across systems and delivers faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise.
Pinal Presenting session at TechEd India 2010
The day was still not over for me. I had run into several friends, but we were not able to keep our enthusiasm under control with all the rumors saying that SQL Server 2008 R2 was about to be launched in the keynote the next day. I then ran to my third and final technical event of the day - a panel discussion with some of the top technologists of India.
Time: 5:00pm – 6:00pm
Panel Discussion: Harness the Power of Web – SEO and Technical Blogging
Having delivered two technical sessions by this time, I was a bit tired but no less enthusiastic when I had to talk about blogging and technology. We discussed many different topics there. I told them that the most important aspect of any blog is its content. We discussed in depth the issue of plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and which acts are morally and technically wrong. A couple of questions were raised about what kind of liberty a person can take in writing blogs. Well, it was generally agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one is writing what they really want to say, and is not providing incorrect information or practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable, so it was extended for 30 minutes more. Finally, we decided to bring the discussion to a close and agreed that we will continue the topic next year.
TechEd India Panel Discussion on Web, Technology and SEO
Surprisingly, the day was just beginning even after all of this. By this time, I had met almost all the MVPs who arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided that I would go to meet several friends from the Community who communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother tongue, Gujarati. It was funny that I had talked in Gujarati almost all day, but when I was talking in the interview I could not find the right Gujarati words to speak. I think we all think in English when we think about technology, so as to address universality. After meeting them, I headed towards the Speakers' Dinner.
Time: 8:00 PM – onwards
Speakers Dinner
The Speakers' Dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked about so many different things, from Xbox to Hindi movies, and from SQL to samosas. I just cannot express how much fun I had. After a long evening, when I returned to my room and met Shaivi, I instantly felt relaxed.
Kids are really gifts from God. Today was a really long but exciting day. So many things happened in just one day: the Visual Studio launch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, a community leaders meeting, the speakers dinner and, last but not least, playing with my child! A perfect day!
Day 2 – April 13, 2010
Today started with a bang with the excellent keynote by Kamal Hathi, who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 million rows in Excel brought lots of applause from the audience.
Kamal Hathi Presenting Keynote at TechEd India 2010
The day was a bit easier for me. I had no sessions today and no events planned, just a few meetings for the second day of the event. I sat in the speakers' lounge for half a day and met many people there. I attended nearly 9 different meetings today, on very different subjects. Here is a list of the topics of the Community-related meetings:
- SQL PASS and its involvement in India and the subcontinent
- How to start community blogging
- Forums and developing an aptitude for technology
- Ahmedabad/Gandhinagar User Groups and their development
- SharePoint and SQL
- Business meeting – a client meeting
- Business meeting – a potential performance tuning project
- Business meeting – Solid Quality Mentors (SolidQ)
- And family friends
Pinal Dave at TechEd India
The day passed by very quickly with these meetings. In the evening, I headed to the Partners Expo with friends and checked out a few of the booths. I really wanted to talk about some of the products, but because of the freebies there was such a crowd that I finally decided to just take the contact details of the partners. I will now start sending them my queries and, hopefully, I will have my questions answered. Nupur and Shaivi also had one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves technology, and loves it more than anybody. I see him growing and learning every day, while still remaining 'human'. I believe that if someone acquires as much knowledge as he has, that person would become either a computer or a cyborg; yet Vijay is still a kind gentleman and remains our close family friend. Shaivi was really happy to play with Uncle Vijay.
Pinal Dave and Vijay Raj
Renuka Prasad, a Microsoft MVP, impressed me with his passion for and knowledge of SQL. He gives me credit for his success every time, but I believe he is simply being very humble. He has far more certifications than me and has worked many more years with SQL than I have. He is an excellent photographer as well; most of the photos in this blog post have been taken by him. I told him that if he ever wants a part-time job, he could do photography very well.
Pinal Dave and Renuka Prasad
I also met L Srividya from Microsoft, whom I had been looking forward to meeting. She is a bundle of knowledge, and everyone would surely learn a lot from her. I was able to get a few minutes with her and, well, I felt confident. She enlightened me with SQL Server BI concepts, domain management, SQL Server security and a few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint, but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I figured out that the recent training he delivered was on SQL Server 2008 R2.
I joked with him that it hurts my ego that he is now more popular than me in SQL training and consulting. I am sure all of you agree that working with good people is a gift from God, and I am fortunate enough to work with some of the best industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari, Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!). (Please let me know if I met you at the event and forgot to list your name here.)
Time: 8:00 PM – onwards
Community Leaders Dinner
After lots of meetings, I headed to the Community Leaders dinner and met almost all the folks I had met in the morning. The discussion was much the same, but the really good thing was that we were enjoying it. The food was really good. Nupur was invited to the event, but Shaivi could not come. When Nupur tried to enter, she was stopped because Shaivi did not have a pass for the dinner. Nupur explained that Shaivi is only 8 months old, does not eat outside food, and cannot stay by herself at this age, but the doorkeeper did not agree and insisted that without entry details Shaivi could not go in, though Nupur could. Nupur called me and asked me to help her out. By the time I got outside, the organizer of the event had reached the door and happily approved Shaivi to join the party. Once at the party, Shaivi had lots of fun meeting so many people.
Shaivi Dave and Abhishek Kant
Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com)
Day 3 – April 14, 2010
Though it was the last day, I was very excited, as I was about to present my very favorite session. Query optimization and performance tuning is my domain expertise, and I make my living by consulting and training on it. Today's session was on the same subject and, as an additional twist, the subject of spatial databases was presented as well. I have always been intrigued by spatial databases and have enjoyed learning about them; however, I had never thought about spatial indexing before it was decided that I would do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and also reviewing the content. Furthermore, today was really what I call my 'learning day'. So far I had not attended any session at TechEd and I felt a bit down about that. Everybody spends their valuable time and money to learn something new and exciting at TechEd, and I had not attended a single session, and it was already the last day of the event. I did have a plan for the day, and I attended two technical sessions before my own session on spatial databases. I attended 2 sessions by Vinod Kumar. Vinod is a natural storyteller and there was no doubt that his sessions would be jam-packed. People attend his sessions simply because Vinod is the speaker; he has never once disappointed an audience. He is truly a good speaker and knows his stuff very well. I personally do not think anyone in India can be compared to him on SQL.
Time: 12:30pm – 1:30pm
SQL Server Query Optimization, Execution and Debugging Query Performance
I really had a fun time attending this session. Vinod made it very interactive. The entire audience really got into the presentation and started participating in the event.
Vinod presented a small query tuning problem, the kind any developer would have encountered, and solved it with the audience's help in such a fashion that every developer felt he or she had resolved it themselves. On one question, I was the only one ready to answer, and Vinod told me in a light tone that I was not allowed to answer it! The audience found it very amusing. There was a huge crowd around Vinod after the session.
Vinod – A master storyteller!
Time: 3:45pm – 4:45pm
Data Recovery / Consistency with CheckDB
This session was much heavier than the earlier one, and I must say it is the most favorite session I have EVER attended in India. At this TechEd I attended only two sessions, but in my career I have attended numerous technical sessions, not only in India but all over the world, and this session took my breath away. One by one, Vinod took different databases and started to corrupt them in different ways; each database had some unique way of getting corrupted. Once that was done, Vinod brought out DBCC CHECKDB and demonstrated how it can solve your problems, finally fixing all the databases with this single tool. I have good knowledge of this subject, but let me honestly admit that I learned a lot from this session. I enjoyed and cheered during this session along with the other attendees. I had the total satisfaction that, just like everyone else, I took advantage of the event and learned something. I am now TECHnically EDucated.
Pinal Dave and Vinod Kumar
After two very interactive and informative SQL sessions from Vinod Kumar, it was next my turn to present on spatial databases and indexing. I got nervous once again, but Vinod told me to stay natural and do my presentation. Well, once I got onto a huge stage with a total of four projectors and a large crowd, I felt better.
Time: 5:00pm – 6:00pm
Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing
Pinal Presenting session at TechEd India 2010
I kicked off this session with Michael J Swart's beautiful spatial image. This session was the last one of the day but, to my surprise, I had more than 200 attendees. Rain was slowly starting outside and I was worried that the hall would not be full; despite this, there was not a single seat available within the first five minutes of the session. Thanks to all of you for attending my presentation. I demonstrated the map of the world (and of India) and quickly explained what the geography and geometry data types in a spatial database are. This session included an interesting story of indexing and comparison, covering how different traditional indexes are from spatial indexes.
Pinal Presenting session at TechEd India 2010
Due to the heavy rain during this event, the power went off for about 22 minutes (just an accident – nobody's fault). During these minutes there was no audio, no video and no light. I continued to address the mass of 200+ people without any audio device or PowerPoint. I must thank the audience, because not a single person left the session. They all stayed in their places; some moved closer to listen to me properly. I noticed that the curiosity and eagerness to learn new things was at its peak even though it was the very last session of TechEd. Everybody wanted to get the maximum knowledge out of this whole event. I was touched by the support from the audience. They listened and participated in my session even without any kind of technology (no PPT, no mike, no AC, nothing).
During those 22 minutes, I completed my theory verbally.
Pinal Presenting session at TechEd India 2010
After a while, we got the projector back online and we continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get backup power for the projector up. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the map of India and found the aerial distance between them. After finding the aerial distance, we browsed online and found that SQL Server's estimate of the aerial distance between these two cities matched the factual distance very closely. There was huge applause from the crowd at the fact that SQL Server takes the curvature of the earth into account and finds precise distances based on it. While finding the distance, I demonstrated a few examples of the indexes involved, showing how one can use those indexes to find these distances and how they can improve the performance of similar queries. I also demonstrated a few examples wherein we were able to see for which data type an index is most useful. We finished the demos with a bit more internal stuff.
Pinal Presenting session at TechEd India 2010
Despite all the issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that for a moment the rain was not audible. I was truly moved by the dedication of the technology enthusiasts.
Pinal Dave After Presenting session at TechEd India 2010
The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, the new geometry data type to store planar spatial data and perform operations on it, how to take advantage of the new spatial indexes for high-performance queries, how to use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, and how to extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
Time: 8:00 PM – onwards
Dinner by Sponsors
After the lively sessions of the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing in every way. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event.
Pinal Dave and Bijoy Singhal
It was really one wonderful event. After writing this much, I still have no words to express how much I enjoyed TechEd. And it is true that I have shared with you only 1% of the total activities I did at the event. There were so many people I met who are not mentioned here, although I wanted to write their names, too.
Anyway, I learned so many things, and even now I am not able to get over all the fun I had at this event.
Pinal Dave at TechEd India 2010
The Next Days – April 15, 2010 – till today
I am still not able to get my mind off the whole experience I had at TechEd India 2010. It was like a whole Microsoft family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience!
Reference: Pinal Dave (http://blog.SQLAuthority.com)
Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology
Tagged: TechEd, TechEdIn

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest Technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I attempted to attend almost all the technology events here, I have not seen any bigger or better event in Indian subcontinents other than this. There are 21 Technical Tracks at Tech·Ed India 2010 that span more than 745 learning opportunities. I was fortunate enough to be a part of this whole event as a speaker and a delegate, as well. TechEd India Speaker Badge and A Token of Lifetime Hotel Selection I presented three different sessions at TechEd India and was also a part of panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I stay away from my family occasionally. For this reason, I took my wife – Nupur and daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized so as to maximize my time bonding with my family and to have more time in networking with technology community, at the same time. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The cost of the hotel was a bit pricey, but looking at all the advantages, I had decided to ask for a booking there. Hotel Lalit Ashok Nupur Dave and Shaivi Dave Arrival Day – DAY 0 – April 11, 2010 I reached the event a day earlier, and that was one wise decision for I was able to relax a bit and go over my presentation for the next day’s course. I am a kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft who helped me out with a few of the logistics issues that occured the day before. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was really very kind as he talked to me regarding my subsequent session. He gave me some suggestions which were really helpful that I was able to incorporate them during my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, being extremely busy with logistics; however, in those busy times, he did find some good spare time to have a chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta,Vinod, Bijoy, Harish, Sachin and Ahishek (of course!). Map of TechEd India 2010 Event Day 1 – April 12, 2010 From morning until night time, today was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somaseger where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than- life uniform screen. This was truly one to show. 
The title music of the keynote was very interesting and it featured Bijoy Singhal as the model. It was interesting to talk to him afterwards, when we laughed at jokes together about his modeling assignment. TechEd India Keynote Opening Featuring Bijoy TechEd India 2010 Keynote – S. Somasegar Time: 11:15pm – 11:45pm Session 1: True Lies of SQL Server – SQL Myth Buster Following the excellent keynote, I had my very first session on the subject of SQL Server Myth Buster. At first, I was a bit nervous as right after the keynote, for this was my very first session and during my presentation I saw lots of Microsoft Product Team members. Well, it really went well and I had a really good discussion with attendees of the session. I felt that a well begin was half-done and my confidence was regained. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate few SQL Server Myths and their resolutions as I back them up with some demo. This demo presentation is a must-attend for all developers and administrators who would come to the event. This is going to be a very quick yet fun session. Pinal Presenting session at TechEd India 2010 Time: 1:00 PM – 2:00 PM Lunch with Somasegar After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank to Abhishek who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and had to talk with them on Microsoft Technology. Though Somasegar is currently holding such a high position in Microsoft, he is very polite and a real gentleman, and how I wish that everybody in industry is like him. Believe me, if you spread love and kindness, then that is what you will receive back. As soon as lunch time was over, I ran to the session hall as my second presentation was about to start. Time: 2:30pm – 3:30pm Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Business Intelligence is a subject which was widely talked about at TechEd. Everybody was interested in this subject, and I did not excuse myself from this great concept as well. I consider myself fortunate as I was presenting on the subject of Master Data Services at TechEd. When I had initially learned this subject, I had a bit of confusion about the usage of this tool. Later on, I decided that I would tackle about how we all developers and DBAs are not able to understand something so simple such as this, and even worst, creating confusion about the technology. During system designing, it is very important to have a reference material or master lookup tables. Well, I talked about the same subject and presented the session keeping that as my center talk. The session went very well and I received lots of interesting questions. I got many compliments for talking about this subject on the real-life scenario. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject. Pinal Presenting session at TechEd India 2010 The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. 
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, MDS – Master Data-hub which is a vital component, helps ensure the consistency of reporting across systems and deliver faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Pinal Presenting session at TechEd India 2010 The day was still not over for me. I had ran into several friends but we were not able keep our enthusiasm under control about all the rumors saying that SQL Server 2008 R2 was about to be launched tomorrow in the keynote. I then ran to my third and final technical event for the day- a panel discussion with the top technologies of India. Time: 5:00pm – 6:00pm Panel Discussion: Harness the power of Web – SEO and Technical Blogging As I have delivered two technical sessions by this time, I was a bit tired but  not less enthusiastic when I had to talk about Blog and Technology. We discussed many different topics there. I told them that the most important aspect for any blog is its content. We discussed in depth the issues with plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and what morally and technically wrong acts are. A couple of questions were raised about what type of liberty a person can have in terms of writing blogs. Well, it was generically agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one is writing what they really want to say, but not providing incorrect information or not practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable and so it was extended for 30 minutes more. Finally, we decided to bring to a close the discussion and agreed that we will continue the topic next year. TechEd India Panel Discussion on Web, Technology and SEO Surprisingly, the day was just beginning after doing all of these. By this time, I have almost met all the MVP who arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided that I would go to meet several friends from the Community and continue to communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother’s tongue, Gujarati. It was funny that I talked in Gujarati almost all the day, but when I was talking in the interview I could not find the right Gujarati words to speak. I think we all think in English when we think about Technology, so as to address universality. After meeting them, I headed towards the Speakers’ Dinner. Time: 8:00 PM – onwards Speakers Dinner The Speakers’ dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked so many different things, from XBOX to Hindi Movies, and from SQL to Samosas. I just could not express how much fun I had. After a long evening, when I returned tmy room and met Shaivi, I just felt instantly relaxed. 
Kids are really gifts from God. Today was a really long but exciting day. So many things happened in just one day: Visual Studio Lanch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, community leaders meeting, speakers dinner and, last but not leas,t playing with my child! A perfect day! Day 2 – April 13, 2010 Today started with a bang with the excellent keynote by Kamal Hathi who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 Million Rows in Excel brought lots of applause from the audience. Kamal Hathi Presenting Keynote at TechEd India 2010 The day was a bit easier one for me. I had no sessions today and no events planned. I had a few meetings planned for the second day of the event. I sat in the speaker’s lounge for half a day and met many people there. I attended nearly 9 different meetings today. The subjects of the meetings were very different. Here is a list of the topics of the Community-related meetings: SQL PASS and its involvement in India and subcontinents How to start community blogging Forums and developing aptitude towards technology Ahmedabad/Gandhinagar User Groups and their developments SharePoint and SQL Business Meeting – a client meeting Business Meeting – a potential performance tuning project Business Meeting – Solid Quality Mentors (SolidQ) And family friends Pinal Dave at TechEd India The day passed by so quickly during this meeting. In the evening, I headed to Partners Expo with friends and checked out few of the booths. I really wanted to talk about some of the products, but due to the freebies there was so much crowd that I finally decided to just take the contact details of the partner. I will now start sending them with my queries and, hopefully, I will have my questions answered. Nupur and Shaivi had also one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves Technology and loves it more than anybody. I see him growing and learning every day, but still remaining as a ‘human’. I believe that if someone acquires as much knowledge as him, that person will become either a computer or cyborg. Here, Vijay is still a kind gentleman and is able to stay as our close family friend. Shaivi was really happy to play with Uncle Vijay. Pinal Dave and Vijay Raj Renuka Prasad, a Microsoft MVP, impressed me with his passion and knowledge of SQL. Every time he gives me credit for his success, I believe that he is very humble. He has way more certifications than me and has worked many more years with SQL compared to me. He is an excellent photographer as well. Most of the photos in this blog post have been taken by him. I told him if ever he wants to do a part time job, he can do the photography very well. Pinal Dave and Renuka Prasad I also met L Srividya from Microsoft, whom I was looking forward to meet. She is a bundle of knowledge that everyone would surely learn a lot from her. I was able to get a few minutes from her and well, I felt confident. She enlightened me with SQL Server BI concepts, domain management and SQL Server security and few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I figured out that the recent training he delivered was on SQL Server 2008 R2. 
I told him a joke that it hurts my ego as he is more popular now in SQL training and consulting than me. I am sure all of you agree that working with good people is a gift from God. I am fortunate enough to work with the best of the best Industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari,  Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!) .  (Please let me know if I have met you at the event and forgot your name to list here). Time: 8:00 PM – onwards Community Leaders Dinner After lots of meetings, I headed towards the Community Leaders dinner meeting and met almost all the folks I met in morning. The discussion was almost the same but the real good thing was that we were enjoying it. The food was really good. Nupur was invited in the event, but Shaivi could not come. When Nupur tried to enter the event, she was stopped as Shaivi did not have the pass to enter the dinner. Nupur expressed that Shaivi is only 8 months old and does not eat outside food as well and could not stay by herself at this age, but the door keeper did not agree and asked that without the entry details Shaivi could not go in, but Nupur could. Nupur called me on phone and asked me to help her out. By the time, I was outside; the organizer of the event reached to the door and happily approved Shaivi to join the party. Once in the party, Shaivi had lots of fun meeting so many people. Shaivi Dave and Abhishek Kant Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com) Day 3 – April 14, 2010 Though, it was last day, I was very much excited today as I was about to present my very favorite session. Query Optimization and Performance Tuning is my domain expertise and I make my leaving by consulting and training the same. Today’s session was on the same subject and as an additional twist, another subject about Spatial Database was presented. I was always intrigued with Spatial Database and I have enjoyed learning about it; however, I have never thought about Spatial Indexing before it was decided that I will do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and also review the content. Furthermore, today was really what I call my ‘learning day’ . So far I had not attended any session in TechEd and I felt a bit down for that. Everybody spends their valuable time & money to learn something new and exciting in TechEd and I had not attended a single session at the moment thinking that it was already last day of the event. I did have a plan for the day and I attended two technical sessions before my session of spatial database. I attended 2 sessions of Vinod Kumar. Vinod is a natural storyteller and there was no doubt that his sessions would be jam-packed. People attended his sessions simply because Vinod is syhe speaker. He did not have a single time disappointed audience; he is truly a good speaker. He knows his stuff very well. I personally do not think that in India he can be compared to anyone for SQL. Time: 12:30pm-1:30pm SQL Server Query Optimization, Execution and Debugging Query Performance I really had a fun time attending this session. Vinod made this session very interactive. The entire audience really got into the presentation and started participating in the event. 
Vinod was presenting a small problem with Query Tuning, which any developer would have encountered and solved with their help in such a fashion that a developer feels he or she have already resolved it. In one question, I was the only one who was ready to answer and Vinod told me in a light tone that I am now allowed to answer it! The audience really found it very amusing. There was a huge crowd around Vinod after the session. Vinod – A master storyteller! Time: 3:45pm-4:45pm Data Recovery / consistency with CheckDB This session was much heavier than the earlier one, and I must say this is my most favorite session I EVER attended in India. In this TechEd I have only attended two sessions, but in my career, I have attended numerous technical sessions not only in India, but all over the world. This session had taken my breath away. One by one, Vinod took the different databases, and started to corrupt them in different ways. Each database has some unique ways to get corrupted. Once that was done, Vinod started to show the DBCC CEHCKDB and demonstrated how it can solve your problem. He finally fixed all the databases with this single tool. I do have a good knowledge of this subject, but let me honestly admit that I have learned a lot from this session. I enjoyed and cheered during this session along with other attendees. I had total satisfaction that, just like everyone, I took advantage of the event and learned something. I am now TECHnically EDucated. Pinal Dave and Vinod Kumar After two very interactive and informative SQL Sessions from Vinod Kumar, the next turn me presenting on Spatial Database and Indexing. I got once again nervous but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better. Time: 5:00pm-6:00pm Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Pinal Presenting session at TechEd India 2010 Pinal Presenting session at TechEd India 2010 I kicked off this session with Michael J Swart‘s beautiful spatial image. This session was the last one for the day but, to my surprise, I had more than 200+ attendees. Slowly, the rain was starting outside and I was worried that the hall would not be full; despite this, there was not a single seat available in the first five minutes of the session. Thanks to all of you for attending my presentation. I had demonstrated the map of world (and India) and quickly explained what  Geographic and Geometry data types in Spatial Database are. This session had interesting story of Indexing and Comparison, as well as how different traditional indexes are from spatial indexing. Pinal Presenting session at TechEd India 2010 Due to the heavy rain during this event, the power went off for about 22 minutes (just an accident – nobodies fault). During these minutes, there were no audio, no video and no light. I continued to address the mass of 200+ people without any audio device and PowerPoint. I must thank the audience because not a single person left from the session. They all stayed in their place, some moved closure to listen to me properly. I noticed that the curiosity and eagerness to learn new things was at the peak even though it was the very last session of the TechEd. Everybody wanted get the maximum knowledge out of this whole event. I was touched by the support from audience. They listened and participated in my session even without any kinds of technology (no ppt, no mike, no AC, nothing). 
During these 22 minutes, I completed my theory verbally. Pinal Presenting session at TechEd India 2010 After a while, we got the projector back online and we continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get backup power to the projector. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the India map and found the aerial distance between them. After finding the aerial distance, we browsed online and found that the aerial distance SQL Server estimated between these two cities matched the factual distance very closely. There was huge applause from the crowd on learning that SQL Server takes into account the curvature of the earth and finds precise distances based on it. During the process of finding the distance, I demonstrated a few examples of the indexes, where I showed how one can use those indexes to find these distances and how they can improve the performance of similar queries. I also demonstrated a few examples wherein we were able to see for which data type an index is most useful. We finished the demos with a bit more about the internals. Pinal Presenting session at TechEd India 2010 Despite all the issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that for a moment the rain was not audible. I was truly moved by the dedication of the technology enthusiasts. Pinal Dave After Presenting session at TechEd India 2010 The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more. Time: 8:00 PM – onwards Dinner by Sponsors After the lively sessions during the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing in every way. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event. Pinal Dave and Bijoy Singhal It was really one wonderful event. After writing this much, I still have no words to express how much I enjoyed TechEd. However, it is true that I have shared with you only 1% of the total activities I did at the event. There were so many people I met who are not mentioned here, although I wanted to write their names here, too. 
Anyway, I learned so many things, and even now I am not able to get over all the fun I had at this event. Pinal Dave at TechEd India 2010 The Next Days – April 15, 2010 – till today I am still not able to get the whole experience I had at TechEd India 2010 out of my mind. It was like a whole Microsoft Family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience! Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms: the default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events, so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start, and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, the engine will drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size field. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.) 
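As a quick illustration of the DMV check described above, the two columns the post mentions live in sys.dm_xe_sessions and can be inspected with a simple query; if you would rather automate the check, a few lines of C# along these lines would do it. This is only a sketch: the session name and connection string are placeholders, not anything from the post.

    using System;
    using System.Data.SqlClient;

    class DroppedEventCheck
    {
        static void Main()
        {
            // Placeholder connection string and session name - adjust for your server.
            const string connectionString = "Server=.;Database=master;Integrated Security=true";
            const string sessionName = "MySession";

            // dropped_event_count and largest_event_dropped_size are the two
            // sys.dm_xe_sessions columns discussed in the post.
            const string sql =
                "SELECT name, dropped_event_count, largest_event_dropped_size " +
                "FROM sys.dm_xe_sessions WHERE name = @name;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@name", sessionName);
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("Session {0}: dropped {1} events, largest dropped event {2} bytes",
                            reader["name"], reader["dropped_event_count"], reader["largest_event_dropped_size"]);
                    }
                }
            }
        }
    }

If the dropped count keeps climbing, that is the cue to raise MAX_MEMORY (or, for a large single event, MAX_EVENT_SIZE) as described above.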
Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used: None: 3 buffers (fixed); Node: 3 * number_of_nodes; CPU: 2.5 * number_of_cpus. Here are some examples of what this means for different Node/CPU counts: 2 CPUs, 1 Node: None = 3 buffers, Node = 3 buffers, CPU = 5 buffers. 6 CPUs, 2 Nodes: None = 3 buffers, Node = 6 buffers, CPU = 15 buffers. 40 CPUs, 5 Nodes: None = 3 buffers, Node = 15 buffers, CPU = 100 buffers. Aside: Buffer size on multi-processor computers As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (eg. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace. 
You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike
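To pull the options discussed in this post together in one place, here is a rough sketch of what a session definition touching most of them could look like. The session name, event, action, target, and the specific option values are illustrative assumptions, not recommendations from the post; the DDL string is simply executed from C# the same way you would run any other batch, so the connection string below is a placeholder too.

    using System.Data.SqlClient;

    class CreateSessionExample
    {
        // Illustrative DDL only: the event, action, target, and option values
        // are examples chosen to show the syntax, not tuned settings.
        const string CreateSessionDdl = @"
    CREATE EVENT SESSION [OptionsDemo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text))
    ADD TARGET package0.ring_buffer
    WITH (
        MAX_MEMORY = 8192 KB,               -- total event buffer memory
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
        MAX_DISPATCH_LATENCY = 5 SECONDS,   -- process buffers at least every 5 seconds
        MAX_EVENT_SIZE = 0 KB,              -- leave at 0 unless large events are dropped
        MEMORY_PARTITION_MODE = NONE,
        TRACK_CAUSALITY = OFF,
        STARTUP_STATE = ON                  -- start the session when the server starts
    );";

        static void Main()
        {
            // Placeholder connection string.
            using (var connection = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            using (var command = new SqlCommand(CreateSessionDdl, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }

As the post says, start from the defaults; only reach for values like these once the DMV evidence tells you that you actually have one of the special cases.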

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However, there are a few tricks I’ve painfully learned over the years that you can use for your own solutions. DO What’s In A Name? When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a url to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0020_Reports” which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead when you create your column or list or view use the scrunched up name (I can’t think of the technical term for it right now) of “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the more friendly “Silly Expense Reports That Nobody Reads”. The original internal name will be the url and code friendly one without spaces while the one used on data entry forms and view headers will be the human version. Smart Columns When building a view, include columns that make sense. By default when you add a column the “Add to default view” is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask). Gotta Keep it Separated Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of, well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours, finally arriving at Page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered, so feel free to create a view for each status or one for each operating system or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view. 
The discoverability of the Views menu isn’t overly obvious, and if you violate the rule of columns (see Horizontal Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way. Sort It! Views are great, so we’re building nice, rich views for the user. Awesomesauce. However sort is not very discoverable by the user. For example, when you’re looking at a view how do you know if it’s ascending or descending and what it is sorted on? Maybe it’s sorted using two fields, so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy, just make sure you keep the name consistent with the description. In the previous example there had better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly the view they’re looking at. Each view is its own page so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list. DO NOT Useless Attachments The attachments column is, in a word, useless. For the most part. Sure it indicates there’s an attachment on the list item, but in the grand scheme of things that’s not overly informative. Maybe it is and by all means, if it makes sense to you, include it. Colour it. Make it shine and stand out like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful, so feel free to head over and check out this blog post by Paul Grenier on the task (Warning code ahead! Danger Will Robinson!) In any case, I would suggest you remove it from your views. Again, if it’s important then include it, but consider the jQuery solution above to make it functional. It’s added by default to views and is one of the things that people forget to clean up. Horizontal Scrolling Screen real estate is at a premium, so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically, let alone horizontally, so don’t make it even more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again, views are your friend. Consider splitting up the data into views where one view contains 10 columns and another view contains the other 10. Okay, maybe your information doesn’t work that way, but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information?”. 
The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once. Vertical Scrolling Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents. Duplicate Information Duplication is never good. Views are great as you can group data together. For example, create a view of project status reports grouped by author. Then you can see what project manager is being a dip and not submitting their report. However if you group by author do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author, do you need both? Horizontal real estate is always at a premium, so try not to clutter up the view with duplicate data like this. Oh yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at?”. That’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again it’s just managing the information you have. Redundant, See Redundant This partially relates to duplicate information and smart columns, but basically remember to not include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble wasn’t it?) to create separate views of your data by creating a “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc. then please for the love of all that is holy do not include the Month and Product columns in your view. Similarly if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly. Hope that helps! Happy customizing!

    Read the article

  • Simple Branching and Merging with SVN

It's a good idea not to do too much work without checking something into source control. By too much work I mean typically on the order of a couple of hours at most, and certainly it's a good practice to check in anything you have before you leave the office for the day. But what if your changes break the build (on the build server; you do have a build server, don't you?) or would cause problems for others on your team if they get the latest code? The solution with Subversion is branching and merging (incidentally, if you're using Microsoft Visual Studio Team System, you can shelve your changes and share shelvesets with others, which accomplishes many of the same things as branching and merging, but is a bit simpler to do). Getting Started I'm going to assume you have Subversion installed along with the nearly ubiquitous client, TortoiseSVN. See my previous post on installing SVN server if you want to get it set up real quick (you can put it on your workstation/laptop just to learn how it works easily enough). Overview When you know you are going to be working on something that you won't be able to check in quickly, it's a good idea to start a branch. It's also perfectly fine to create the branch after the fact (have you ever started something thinking it would be an hour and 4 hours later realized you were nowhere near done?). In any event, the first thing you need to do is create a branch. A branch is simply a copy of the current trunk (a typical Subversion setup has root directories called trunk, tags, and branches; it's a good idea to keep this and to put your branches in the branches folder). Once you have a new branch, you need to switch your working copy so that it is bound to your branch. As you work, you may want to merge in changes that are happening in the trunk to your branch, and ultimately when you are done you'll want to merge your branch back into the trunk. When done, you can delete your branch (or not, but it may add clutter). To sum up: Create a new branch. Switch your local working copy to the new branch. Develop in the branch (commit changes, etc.). Merge changes from trunk into your branch. Merge changes from branch into trunk. Delete the branch. Create a new branch From the root of your repository, right-click and select TortoiseSVN > Branch/tag as shown at right (click to enlarge). This will bring up the Copy (Branch / Tag) interface. By default the From WC at URL: should be pointing at the trunk of your repository. I recommend (after ensuring that you have the latest version) that you choose to make the copy from the HEAD revision in the repository (the first radio button). In the To URL: textbox, you should change the URL from /trunk to /branches/NAME_OF_BRANCH. You can name the branch anything you like, but it's often useful to give it your name (if it's just for your use) or some useful information (such as a datestamp, a bug/issue ID that it relates to, or perhaps just the name of the feature you are adding). When you're done with that, enter a log message for your new branch. If you want to immediately switch your local working copy to the new branch/tag, check the box at the bottom of the dialog (Switch working copy to new branch/tag). You can see an example at right. Assuming everything works, you should very quickly see a window telling you the Copy finished, like the one shown below: Switch Local Working Copy to New Branch If you followed the instructions above and checked the box when you created your branch, you don't need to do this step.  
However, if you have a branch that already exists and you would like to switch over to working on it, you can do so by using the Switch command. You'll find it in the explorer context menu under TortoiseSVN > Switch: This brings up a dialog that shows you your current binding and lets you enter a new URL to switch to: In the screenshot above, you can see that I'm currently bound to a branch, and so I could switch back to the trunk or to another branch. If you're not sure what to enter here, you can click the [...] next to the URL textbox to explore your repository and find the appropriate root URL to use. Also, the dropdown will show you URLs that might be a good fit (such as the trunk of the current repository). Develop in the Branch Once you have created a branch and switched your working copy to use it, you can make changes and Commit them as usual. Your commits are now going into the branch, so they won't impact other users or the build server that are working off of the trunk (or their own branches). In theory you can keep on doing this forever, but practically it's a good idea to periodically merge the trunk into your branch, and/or keep your branches short-lived and merge them back into the trunk before they get too far out of sync. Merge Changes from Trunk into your Branch Once you have been working in a branch for a little while, changes to the trunk will have occurred that you'll want to merge into your branch. It's much safer and easier to integrate changes in small increments than to wait for weeks or months and then try to merge in two very different codebases. To perform the merge, simply go to the root of your branch working copy and right-click, then select TortoiseSVN->Merge. You'll be presented with this dialog: In this case you want to leave the default setting, Merge a range of revisions. Click Next. Now choose the URL to merge from. You should select the trunk of your current repository (which should be in the dropdown list, or you can click the [...] to browse your repository for the correct URL). You can leave everything else blank since you want to merge everything: Click Next. Again you can leave the default settings. If you want to do something more granular than everything in the trunk, you can select a different Merge depth, to include merging just one item in the tree. You can also perform a Test merge to see what changes will take place before you click Merge (which is often a good idea). Here's what the dialog should look like before you click Merge: After clicking Merge (or Test merge) you should see a confirmation like this (it will say Test Only in the title if you click Test merge): Now you should build your solution, run all of your tests, and verify that your branch still works the way it should, given the updates that you've just integrated from the trunk. Once everything works, Commit your changes, and then continue with your work on the branch. Note that until you commit, nothing has actually changed in your branch on the server. Other team members who may also be working in this branch won't be impacted, etc. The Merge is purely a client-side operation until you perform a Commit. In a more real-world scenario, you may have conflicts. When you do, you'll be presented with a dialog like this one: It's up to you which option you want to go with. The more frequently you Merge, the fewer of these you'll have to deal with. Also, be very sure that you're merging the right folders together.  
If you try to merge your trunk with some subfolder in your branch's structure, you'll end up with all kinds of conflicts and problems. Fortunately, they're only in your working copy (unless you commit them!), but if you see something like that, be sure to double-check your URL and your local file location. Merge Your Branch Back Into Trunk When you're done working in your branch, it's time to pull it back into the trunk. The first thing you should do is follow the previous step's instructions for merging the latest from the trunk into your branch. This lets you ensure that what you have in your branch works correctly with the current trunk. Once you've done that and committed your changes to your branch, you're ready to proceed with this step. Once you're confident your branch is good to go, you should go to its root folder and select TortoiseSVN->Merge (as above) from the explorer right-click menu. This time, select Reintegrate a branch as shown below: Click Next. You'll want it to merge with the trunk, which should be the default: Click Next. Leave the default settings: Click Test merge to see a test, and then if all looks good, click Merge. Note that if you haven't checked in your working copy changes, you'll see something like this: If on the other hand things are successful: After this step, it's likely you are finished working in your branch. Don't forget to use the TortoiseSVN->Switch command to change your working copy back to the trunk. Delete the Branch You don't have to delete the branch, but over time the branches area of your repository will get cluttered, and in any event if they're not actively being worked on the branches are just taking up space and adding to later confusion. Keeping your branches limited to things you're actively working on is simply a good habit to get into, just like making sure your codebase itself remains tidy and not filled with old commented-out bits of code. To delete the branch after you're finished with it, the simplest thing to do is choose TortoiseSVN->Repo Browser. From there, assuming you did this from your branch, it should already be highlighted. In any event, navigate to your branch in the treeview on the left, and then right-click and select Delete. Enter a log message if you'd like: Click OK, and it's gone. Don't be too afraid of this, though. You can still get to the files by viewing the log for branches and selecting a previous revision (anything before the delete action): If for some reason you needed something that was previously in this branch, you could easily get back to any changeset you checked in, so you should have absolutely no fear when it comes to deleting branches you're done with. Resources If you're using Eclipse, there's a nice write-up of the steps required by Zach Cox that I found helpful here.

    Read the article

  • The Low Down Dirty Azure Blues

    - by SGWellens
Remember the SETI screen savers that used to be on everyone's computer? As far as I know, it was the first bona-fide use of "Cloud" computing…albeit an ad hoc cloud. I still think it was a brilliant leveraging of computing power. My interest in clouds was re-piqued when I went to a technical seminar at the local .Net User Group. The speaker was Mike Benkovitch and he expounded magnificently on the virtues of the Azure platform. Mike always does a good job. One killer reason he gave for cloud computing is instant scalability. Not applicable for most applications, but it is there if needed. I have a bunch of files stored on Microsoft's SkyDrive platform which is cloud storage. It is painfully slow. Accessing a file means going through layers and layers of software, redirections and security. Am I complaining? Hell no! It's free! So my opinions of Cloud Computing are both skeptical and appreciative. What intrigued me at the seminar, in addition to its other features, is that Azure can serve as a web hosting platform. I have a client with an Asp.Net web site I developed who is not happy with the performance of their current hosting service. I checked the cost of Azure and since the site has low bandwidth/space requirements the cost would be competitive with the existing host provider: Azure Pricing Calculator. And, Azure has a three month free trial. Perfect! I could try moving the website and see how it works for free. I went through the signup process. Everything was proceeding fine until I went to the MS SQL database management screen. A popup window informed me that I needed to install Silverlight on my machine. Silverlight? No thanks. Buh-Bye. I half-heartedly found the Azure support button and logged a ticket telling them I didn't want Silverlight on my machine. Within 4 to 6 hours (and a myriad (5) of automated support emails) they sent me a link to a database management page that did not require Silverlight. Thanks! I was able to create a database immediately. One really nice feature was that after creating the database, I was given a list of connection strings. I went to the current host provider, made a backup of the database and saved it to my machine. I attached to the remote database using SQL Server Management Studio 2012 and looked for the Restore menu item. It was missing. So I tried using the SQL command: RESTORE DATABASE MyDatabase FROM DISK ='C:\temp\MyBackup.bak' Msg 40510, Level 16, State 1, Line 1 Statement 'RESTORE DATABASE' is not supported in this version of SQL Server. Are you kidding me? Why on earth…? This can't be happening! I opened both the source database and destination database in SQL Management Studio. I right-clicked the source database, selected "Tasks" and noticed a menu selection called "Deploy Database to SQL Azure". Are you kidding me? Could it be? Oh yes, it be! There was a small problem because the database already existed on the Azure machine, so I deployed to a new name, deleted the existing database and renamed the deployed database to what I needed. It was ridiculously easy. Being able to attach SQL Management Studio to remote databases is an awesome but scary feature. You can limit the IP addresses that can access the database, which enhances security, but when you give people, any people, me included, that much power, one errant mouse click could bring a live system down. My Advice: Tread softly and carry a large backup thumb-drive. 
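As a side note on those connection strings: the ADO.NET string the portal hands you can be dropped straight into a quick smoke test before you point the real site at it. The sketch below is mine, not from the post; the server, database, user and password are placeholders, but the general shape (the user@server login and Encrypt=True) is what SQL Azure expects.

    using System;
    using System.Data.SqlClient;

    class SqlAzureSmokeTest
    {
        static void Main()
        {
            // Placeholder values: substitute the connection string copied from the portal.
            const string connectionString =
                "Server=tcp:yourserver.database.windows.net,1433;" +
                "Database=MyDatabase;" +
                "User ID=youruser@yourserver;" +
                "Password=yourpassword;" +
                "Encrypt=True;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT @@VERSION;", connection))
            {
                connection.Open();
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }

Remember that the firewall rules mentioned above have to allow your client IP address before this will connect.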
Then I created a web site; the URL it returned looked something like this: http://MyWebSite.azurewebsites.net/ Azure supports FTP, but I couldn't figure out the settings until I downloaded the publishing profile. It was an XML file that contained the needed information. I still couldn't connect with my FTP client (FileZilla). After about an hour of messing around, I deleted the port number from the FileZilla setup page….and voila, I was in like Flynn. There are other options of deploying directly from Visual Studio, TFS, etc. but I do not like integrated tools that do things without my asking: it's usually hard to figure out what they did and how to undo it. I uploaded the aspx, cs, web.config, etc. files. But it didn't run. The site I ported was in .NET 3.5. The Azure website configuration page gave me a choice between .NET 2.0 and 4.0. So, I switched to Visual Studio 2010, chose .NET 4.0 and upgraded the site. Of course I have the original version completely backed up and stored in a granite cave beneath the Nevada desert. And I have a backup CD under my pillow. The site uses ReportViewer to generate PDF documents. Of course it was the wrong version. I removed the old references to version 9 and added new references to version 10 (*see note below). Since the DLLs were not on the Azure server, I uploaded them to the bin directory, crossed my fingers, burned some incense and gave it a try. After some fiddling around it ran. I don't know if I did anything particular to make it work or it just needed time to sort things out. However, one critical feature didn't work: ReportViewer could not programmatically generate PDF documents. I was getting this exception: "An error occurred during local report processing. Parameter is not valid." Rats. I did some searching and found other people were having the same problem, so I added a post saying I was having the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsazurewebsitespreview/thread/b4a6eb43-0013-435f-9d11-00ee26a8d017 Currently they are looking into this problem and I am waiting for the results. Hence I had the time to write this BLOG entry. How lucky you are. This was the last message I got from the Microsoft person: Hi Steve, Windows Azure Web Sites is a multi-tenant environment. For security issue, we limited some API calls. Unfortunately, some GDI APIS required by the PDF converting function are in this list. We have noticed this issue, and still investigation the best way to go. At this moment, there is no news to share. Sorry about this. Will keep you posted. If I had to guess, I would say they are concerned with people uploading images and doing intensive graphics programming which would hog CPU time. But that is just a guess. Another problem: while trying to resolve the ReportViewer problem, I tried to write a file to the PDF directory with some test code to see if there was a permissions problem: String MyPath = MapPath(@"~\PDFs\Test.txt"); File.WriteAllText(MyPath, "Hello Azure"); I got this message: Access to the path <my path> is denied. After some research, I understood that since Azure is a cloud based platform, it can't allow web applications to save files to local directories. The application could be moved or replicated as scaling occurs, and trying to manage local files would be problematic to say the least. There are other options: Use the Azure APIs to get a path. That way the location of the storage is separated from the application. 
However, the web site is then tied to Azure and can't be moved to another hosting platform. Use the ApplicationData folder (not recommended). Write to BLOB storage. Or, I could try to stream the PDF output directly to the email and not save a file. I'm not going to work on a final solution until the ReportViewer is fixed. I am just sharing some of the things you need to be aware of if you decide to use Azure. I got this information from here. (Note: the author of the BLOG added a comment saying he has updated his entry). Is my memory faulty? While getting this BLOG ready, I tried to write the test file again. And it worked. My memory is incorrect, or much more likely, something changed on the server…perhaps while they are trying to get ReportViewer to work. (Anyway, that's my story and I'm sticking to it). *Note: Since Visual Studio 2010 Express doesn't include a Report Editor, I downloaded and installed SQL Server Report Builder 2.0. It is a standalone Report Editor to replace the one not in Visual Studio 2010 Express. I hope someone finds this useful. Steve Wellens CodeProject
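On the "stream the PDF directly to the email" idea mentioned above, a rough sketch of what that could look like is below. To be clear, this is my illustration, not the author's code: it only sidesteps the file-system restriction by keeping the PDF in memory, and it would still be subject to the GDI-based rendering limitation the Microsoft engineer described, so treat it as a sketch rather than a tested fix. The addresses are placeholders and the SMTP settings are assumed to come from web.config.

    using System.IO;
    using System.Net.Mail;
    using Microsoft.Reporting.WebForms;

    public static class PdfMailer
    {
        // Renders a local report to a PDF byte array and attaches it to an email,
        // never touching the web site's file system.
        public static void MailReport(LocalReport report)
        {
            string mimeType, encoding, extension;
            string[] streamIds;
            Warning[] warnings;

            byte[] pdf = report.Render("PDF", null,
                out mimeType, out encoding, out extension, out streamIds, out warnings);

            using (var pdfStream = new MemoryStream(pdf))
            using (var message = new MailMessage("from@example.com", "to@example.com"))
            {
                message.Subject = "Report";
                message.Attachments.Add(new Attachment(pdfStream, "Report.pdf", mimeType));

                // SmtpClient host and credentials would normally be configured in web.config.
                using (var smtp = new SmtpClient())
                {
                    smtp.Send(message);
                }
            }
        }
    }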

    Read the article

  • Windows Azure AppFabric: ServiceBus Queue WPF Sample

    - by xamlnotes
The latest version of the AppFabric ServiceBus now has support for queues and topics. Today I will show you a bit about using queues and also talk about some of the best practices in using them. If you are just getting started, you can check out this site for more info on Windows Azure. One of the first things I thought of when Azure was announced back then was how we handle fault tolerance. Web sites hosted in Azure are not much of an issue unless they are using SQL Azure, and then you must account for potential fault or latency issues. Today I want to talk a bit about ServiceBus and how to handle fault tolerance. And there's stuff like connecting to the ServiceBus and so on that you have to take care of. To demonstrate some of the things you can do, let me walk through this sample WPF app that I am posting for you to download. To start off, the application is going to need things like the service namespace, issuer details and so forth to make everything work. To facilitate this I created settings in the WPF app for all of these items. Then I mapped a static class to them and set the values when the program loads like so: StaticElements.ServiceNamespace = Convert.ToString(Properties.Settings.Default["ServiceNamespace"]); StaticElements.IssuerName = Convert.ToString(Properties.Settings.Default["IssuerName"]); StaticElements.IssuerKey = Convert.ToString(Properties.Settings.Default["IssuerKey"]); StaticElements.QueueName = Convert.ToString(Properties.Settings.Default["QueueName"]); Now I can get to each of these elements plus some other common values or instances directly from the StaticElements class. Now, let's look at the application. The application looks like this when it starts: The blue graphic represents the queue we are going to use. The next figure shows the form after items were added and the queue stats were updated. You can see how the queue has grown: To add an item to the queue, click the Add Order button, which displays the following dialog: After you fill in the form and press OK, the order is published to the ServiceBus queue and the form closes. The application also allows you to read the queued items by clicking the Process Orders button. As you can see below, the form shows the queued items in a list and the queue has disappeared as it's now empty. In real practice we would normally use a Windows Service or some other automated process to subscribe to the queue and pull items from it. I created a class named ServiceBusQueueHelper that has the core queue features we need. There are three public methods: * GetOrCreateQueue – Gets an instance of the queue description if the queue exists; if not, it creates the queue and returns a description instance. * SendMessageToQueue – This method takes an order instance and sends it to the queue. The call to the queue is wrapped in the ExecuteAction method from the Transient Fault Handling Framework, which handles all the retry logic for the queue send process. * GetOrderFromQueue – Grabs an order from the queue and returns it as a typed order. It also marks the message complete so the queue can remove it. Now let's turn to the WPF window code (MainWindow.xaml.cs). The constructor contains the 4 lines shown above to set up the static variables and to perform other initialization tasks. 
The next few lines setup certain features we need for the ServiceBus: TokenProvider credentials = TokenProvider.CreateSharedSecretTokenProvider(StaticElements.IssuerName, StaticElements.IssuerKey); Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", StaticElements.ServiceNamespace, string.Empty); StaticElements.CurrentNamespaceManager = new NamespaceManager(serviceUri, credentials); StaticElements.CurrentMessagingFactory = MessagingFactory.Create(serviceUri, credentials); The next two lines update the queue name label and also set the timer to 20 seconds.             QueueNameLabel.Content = StaticElements.QueueName;             _timer.Interval = TimeSpan.FromSeconds(20);             Next I call the UpdateQueueStats to initialize the UI for the queue:             UpdateQueueStats();             _timer.Tick += new EventHandler(delegate(object s, EventArgs a)                         {                      UpdateQueueStats();                  });             _timer.Start();         } The UpdateQueueStats method shown below. You can see that it uses the GetOrCreateQueue method mentioned earlier to grab the queue description, then it can get the MessageCount property.         private void UpdateQueueStats()         {             _queueDescription = _serviceBusQueueHelper.GetOrCreateQueue();             QueueCountLabel.Content = "(" + _queueDescription.MessageCount + ")";             long count = _queueDescription.MessageCount;             long queueWidth = count * 20;             QueueRectangle.Width = queueWidth;             QueueTickCount += 1;             TickCountlabel.Content = QueueTickCount.ToString();         }   The ReadQueueItemsButton_Click event handler calls the GetOrderFromQueue method and adds the order to the listbox. If you look at the SendQueueMessageController, you can see the SendMessage method that sends an order to the queue. Its pretty simple as it just creates a new CustomerOrderEntity instance,fills it and then passes it to the SendMessageToQueue. As you can see, all of our interaction with the queue is done through the helper class (ServiceBusQueueHelper). Now lets dig into the helper class. First, before you create anything like this, download the Transient Fault Handling Framework. Microsoft provides this free and they also provide the C# source. Theres a great article that shows how to use this framework with ServiceBus. I included the entire ServiceBusQueueHelper class in List 1. Notice the using statements for TransientFaultHandling: using Microsoft.AzureCAT.Samples.TransientFaultHandling; using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus; The SendMessageToQueue in Listing 1 shows how to use the async send features of ServiceBus with them wrapped in the Transient Fault Handling Framework.  It is not much different than plain old ServiceBus calls but it sure makes it easy to have the fault tolerance added almost for free. The GetOrderFromQueue uses the standard synchronous methods to access the queue. The best practices article walks through using the async approach for a receive operation also.  Notice that this method makes a call to Receive to get the message then makes a call to GetBody to get a new strongly typed instance of CustomerOrderEntity to return. 
Listing 1 using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.AzureCAT.Samples.TransientFaultHandling; using Microsoft.AzureCAT.Samples.TransientFaultHandling.ServiceBus; using Microsoft.ServiceBus; using Microsoft.ServiceBus.Messaging; using System.Xml.Serialization; using System.Diagnostics; namespace WPFServicebusPublishSubscribeSample {     class ServiceBusQueueHelper     {         RetryPolicy currentPolicy = new RetryPolicy<ServiceBusTransientErrorDetectionStrategy>(RetryPolicy.DefaultClientRetryCount);         QueueClient currentQueueClient;         public QueueDescription GetOrCreateQueue()         {                        QueueDescription queue = null;             bool createNew = false;             try             {                 // First, let's see if a queue with the specified name already exists.                 queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });                 createNew = (queue == null);             }             catch (MessagingEntityNotFoundException)             {                 // Looks like the queue does not exist. We should create a new one.                 createNew = true;             }             // If a queue with the specified name doesn't exist, it will be auto-created.             if (createNew)             {                 try                 {                     var newqueue = new QueueDescription(StaticElements.QueueName);                     queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.CreateQueue(newqueue); });                 }                 catch (MessagingEntityAlreadyExistsException)                 {                     // A queue under the same name was already created by someone else,                     // perhaps by another instance. Let's just use it.                     queue = currentPolicy.ExecuteAction<QueueDescription>(() => { return StaticElements.CurrentNamespaceManager.GetQueue(StaticElements.QueueName); });                 }             }             currentQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName);             return queue;         }         public void SendMessageToQueue(CustomerOrderEntity Order)         {             BrokeredMessage msg = null;             GetOrCreateQueue();             // Use a retry policy to execute the Send action in an asynchronous and reliable fashion.             currentPolicy.ExecuteAction             (                 (cb) =>                 {                     // A new BrokeredMessage instance must be created each time we send it. Reusing the original BrokeredMessage instance may not                     // work as the state of its BodyStream cannot be guaranteed to be readable from the beginning.                     msg = new BrokeredMessage(Order);                     // Send the event asynchronously.                     currentQueueClient.BeginSend(msg, cb, null);                 },                 (ar) =>                 {                     try                     {                         // Complete the asynchronous operation.                         // This may throw an exception that will be handled internally by the retry policy.                         
currentQueueClient.EndSend(ar);                     }                     finally                     {                         // Ensure that any resources allocated by a BrokeredMessage instance are released.                         if (msg != null)                         {                             msg.Dispose();                             msg = null;                         }                     }                 },                 (ex) =>                 {                     // Always dispose the BrokeredMessage instance even if the send                     // operation has completed unsuccessfully.                     if (msg != null)                     {                         msg.Dispose();                         msg = null;                     }                     // Always log exceptions.                     Trace.TraceError(ex.Message);                 }             );         }                 public CustomerOrderEntity GetOrderFromQueue()         {             CustomerOrderEntity Order = new CustomerOrderEntity();             QueueClient myQueueClient = StaticElements.CurrentMessagingFactory.CreateQueueClient(StaticElements.QueueName, ReceiveMode.PeekLock);             BrokeredMessage message;             ServiceBusQueueHelper serviceBusQueueHelper = new ServiceBusQueueHelper();             QueueDescription queueDescription;             queueDescription = serviceBusQueueHelper.GetOrCreateQueue();             if (queueDescription.MessageCount > 0)             {                 message = myQueueClient.Receive(TimeSpan.FromSeconds(90));                 if (message != null)                 {                     try                     {                         Order = message.GetBody<CustomerOrderEntity>();                         message.Complete();                     }                     catch (Exception ex)                     {                         throw ex;                     }                 }                 else                 {                     throw new Exception("Did not receive the messages");                 }             }             return Order;         }     } } I will post a link to the download demo in a separate post soon.
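For completeness, here is a rough sketch of the kind of automated consumer mentioned earlier (a Windows Service, worker role, or similar) built on top of the Listing 1 helper. This is my illustration, not part of the sample: it assumes the ServiceBusQueueHelper and CustomerOrderEntity classes are shared with the worker project, that StaticElements has been initialized exactly as MainWindow's constructor does, and the polling interval and processing step are placeholders.

    using System;
    using System.Threading;

    class OrderQueueWorker
    {
        static void Main()
        {
            // StaticElements (namespace, issuer, messaging factory, queue name) must be
            // initialized first, the same way the WPF MainWindow constructor does it.
            var helper = new ServiceBusQueueHelper();

            while (true)
            {
                // Only attempt a receive when the queue reports pending messages,
                // mirroring the MessageCount check GetOrderFromQueue itself performs.
                if (helper.GetOrCreateQueue().MessageCount > 0)
                {
                    CustomerOrderEntity order = helper.GetOrderFromQueue();
                    Console.WriteLine("Received an order at {0}", DateTime.Now);
                    // ... hand the order off to the real order-processing code here ...
                }
                else
                {
                    // Idle briefly before polling again; pick an interval that suits your workload.
                    Thread.Sleep(TimeSpan.FromSeconds(5));
                }
            }
        }
    }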

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
    This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, it will drop events that can’t be published to a buffer because the buffer is full. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time check the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.) 
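    For reference, the dropped-event checks described above come down to a quick DMV query; this is a minimal sketch, with the session name being a placeholder for whatever you named yours:

    SELECT name, dropped_event_count, largest_event_dropped_size
    FROM sys.dm_xe_sessions
    WHERE name = 'MyEventSession';
    -- A climbing dropped_event_count suggests raising MAX_MEMORY (or collecting less);
    -- a largest_event_dropped_size bigger than your per-buffer size (MAX_MEMORY / 3 by default)
    -- is the signal to set MAX_EVENT_SIZE.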
    Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used: None: 3 buffers (fixed); Node: 3 * number_of_nodes; CPU: 2.5 * number_of_cpus. Here are some examples of what this means for different Node/CPU counts (None / Node / CPU): 2 CPUs, 1 Node: 3 / 3 / 5 buffers; 6 CPUs, 2 Nodes: 3 / 6 / 15 buffers; 40 CPUs, 5 Nodes: 3 / 15 / 100 buffers. Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace. 
    You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike
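    To tie the options discussed in this post together, a session definition that touches each of them might look like the sketch below; the event and target names are purely illustrative and the values are not recommendations (the defaults really are the right starting point):

    CREATE EVENT SESSION [options_demo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer
    WITH (
        MAX_MEMORY = 8 MB,                                 -- total buffer space, split into at least 3 buffers
        MAX_DISPATCH_LATENCY = 5 SECONDS,                  -- process buffers at least every 5 seconds
        EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,    -- the default; avoid NO_EVENT_LOSS
        MAX_EVENT_SIZE = 16 MB,                            -- only needed when single events too big for a buffer are dropped
        MEMORY_PARTITION_MODE = NONE,                      -- NONE | PER_NODE | PER_CPU
        TRACK_CAUSALITY = OFF,                             -- left for the later post mentioned above
        STARTUP_STATE = ON                                 -- start the session when the server starts
    );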

    Read the article

  • Cannot create Java VM on OpenVZ

    - by Stephen Searles
    I'm constantly encountering an error related to Java and certificates on my Ubuntu server running in OpenVZ when installing things from apt-get. I'm pretty sure it has to do with how Java allocates memory. I know the fail counter for privvmpages is very high, so the problem must be that Java is hitting this limit. I have read that the server VM will allocate a lot of memory up front to preempt performance issues, but that the client VM doesn't do this and might be better for what I'm doing. I messed with jvm.cfg to make the system go to the client VM, but get an error that it can't find the client VM. I have tried replacing the Java binary with a script calling Java with -Xms and -Xmx settings, and that solves the issue for when I call basic things from the command line, but not for when doing things like having apt-get configure certificates. I'm at a loss for what to try next. I need to get this working, but simply increasing privvmpages is not an available option. I have the actual error pasted below. Setting up ca-certificates-java (20100412) ... creating /etc/ssl/certs/java/cacerts... Could not create the Java virtual machine. error adding brasil.gov.br/brasil.gov.br.crt error adding cacert.org/cacert.org.crt error adding debconf.org/ca.crt error adding gouv.fr/cert_igca_dsa.crt error adding gouv.fr/cert_igca_rsa.crt error adding mozilla/ABAecom_=sub.__Am._Bankers_Assn.=_Root_CA.crt error adding mozilla/AOL_Time_Warner_Root_Certification_Authority_1.crt error adding mozilla/AOL_Time_Warner_Root_Certification_Authority_2.crt error adding mozilla/AddTrust_External_Root.crt error adding mozilla/AddTrust_Low-Value_Services_Root.crt error adding mozilla/AddTrust_Public_Services_Root.crt error adding mozilla/AddTrust_Qualified_Certificates_Root.crt error adding mozilla/America_Online_Root_Certification_Authority_1.crt error adding mozilla/America_Online_Root_Certification_Authority_2.crt error adding mozilla/Baltimore_CyberTrust_Root.crt error adding mozilla/COMODO_Certification_Authority.crt error adding mozilla/COMODO_ECC_Certification_Authority.crt error adding mozilla/Camerfirma_Chambers_of_Commerce_Root.crt error adding mozilla/Camerfirma_Global_Chambersign_Root.crt error adding mozilla/Certplus_Class_2_Primary_CA.crt error adding mozilla/Certum_Root_CA.crt error adding mozilla/Comodo_AAA_Services_root.crt error adding mozilla/Comodo_Secure_Services_root.crt error adding mozilla/Comodo_Trusted_Services_root.crt error adding mozilla/DST_ACES_CA_X6.crt error adding mozilla/DST_Root_CA_X3.crt error adding mozilla/DigiCert_Assured_ID_Root_CA.crt error adding mozilla/DigiCert_Global_Root_CA.crt error adding mozilla/DigiCert_High_Assurance_EV_Root_CA.crt Could not create the Java virtual machine. 
error adding mozilla/Digital_Signature_Trust_Co._Global_CA_1.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_2.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_3.crt error adding mozilla/Digital_Signature_Trust_Co._Global_CA_4.crt error adding mozilla/Entrust.net_Global_Secure_Personal_CA.crt error adding mozilla/Entrust.net_Global_Secure_Server_CA.crt error adding mozilla/Entrust.net_Premium_2048_Secure_Server_CA.crt error adding mozilla/Entrust.net_Secure_Personal_CA.crt error adding mozilla/Entrust.net_Secure_Server_CA.crt error adding mozilla/Entrust_Root_Certification_Authority.crt error adding mozilla/Equifax_Secure_CA.crt error adding mozilla/Equifax_Secure_Global_eBusiness_CA.crt error adding mozilla/Equifax_Secure_eBusiness_CA_1.crt error adding mozilla/Equifax_Secure_eBusiness_CA_2.crt error adding mozilla/Firmaprofesional_Root_CA.crt error adding mozilla/GTE_CyberTrust_Global_Root.crt error adding mozilla/GTE_CyberTrust_Root_CA.crt error adding mozilla/GeoTrust_Global_CA.crt error adding mozilla/GeoTrust_Global_CA_2.crt error adding mozilla/GeoTrust_Primary_Certification_Authority.crt error adding mozilla/GeoTrust_Universal_CA.crt error adding mozilla/GeoTrust_Universal_CA_2.crt error adding mozilla/GlobalSign_Root_CA.crt error adding mozilla/GlobalSign_Root_CA_-_R2.crt error adding mozilla/Go_Daddy_Class_2_CA.crt error adding mozilla/IPS_CLASE1_root.crt error adding mozilla/IPS_CLASE3_root.crt error adding mozilla/IPS_CLASEA1_root.crt error adding mozilla/IPS_CLASEA3_root.crt error adding mozilla/IPS_Chained_CAs_root.crt error adding mozilla/IPS_Servidores_root.crt error adding mozilla/IPS_Timestamping_root.crt error adding mozilla/NetLock_Business_=Class_B=_Root.crt error adding mozilla/NetLock_Express_=Class_C=_Root.crt error adding mozilla/NetLock_Notary_=Class_A=_Root.crt error adding mozilla/NetLock_Qualified_=Class_QA=_Root.crt error adding mozilla/Network_Solutions_Certificate_Authority.crt error adding mozilla/QuoVadis_Root_CA.crt error adding mozilla/QuoVadis_Root_CA_2.crt error adding mozilla/QuoVadis_Root_CA_3.crt error adding mozilla/RSA_Root_Certificate_1.crt error adding mozilla/RSA_Security_1024_v3.crt error adding mozilla/RSA_Security_2048_v3.crt error adding mozilla/SecureTrust_CA.crt error adding mozilla/Secure_Global_CA.crt error adding mozilla/Security_Communication_Root_CA.crt error adding mozilla/Sonera_Class_1_Root_CA.crt error adding mozilla/Sonera_Class_2_Root_CA.crt error adding mozilla/Staat_der_Nederlanden_Root_CA.crt error adding mozilla/Starfield_Class_2_CA.crt error adding mozilla/StartCom_Certification_Authority.crt error adding mozilla/StartCom_Ltd..crt error adding mozilla/SwissSign_Gold_CA_-_G2.crt error adding mozilla/SwissSign_Platinum_CA_-_G2.crt error adding mozilla/SwissSign_Silver_CA_-_G2.crt error adding mozilla/Swisscom_Root_CA_1.crt error adding mozilla/TC_TrustCenter__Germany__Class_2_CA.crt error adding mozilla/TC_TrustCenter__Germany__Class_3_CA.crt error adding mozilla/TDC_Internet_Root_CA.crt error adding mozilla/TDC_OCES_Root_CA.crt error adding mozilla/TURKTRUST_Certificate_Services_Provider_Root_1.crt error adding mozilla/TURKTRUST_Certificate_Services_Provider_Root_2.crt error adding mozilla/Taiwan_GRCA.crt error adding mozilla/Thawte_Personal_Basic_CA.crt error adding mozilla/Thawte_Personal_Freemail_CA.crt error adding mozilla/Thawte_Personal_Premium_CA.crt error adding mozilla/Thawte_Premium_Server_CA.crt error adding mozilla/Thawte_Server_CA.crt error adding mozilla/Thawte_Time_Stamping_CA.crt 
error adding mozilla/UTN-USER_First-Network_Applications.crt error adding mozilla/UTN_DATACorp_SGC_Root_CA.crt error adding mozilla/UTN_USERFirst_Email_Root_CA.crt error adding mozilla/UTN_USERFirst_Hardware_Root_CA.crt error adding mozilla/ValiCert_Class_1_VA.crt error adding mozilla/ValiCert_Class_2_VA.crt error adding mozilla/VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.crt error adding mozilla/Verisign_Class_1_Public_Primary_Certification_Authority.crt error adding mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_Class_2_Public_Primary_Certification_Authority.crt error adding mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_Class_3_Public_Primary_Certification_Authority.crt error adding mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G2.crt error adding mozilla/Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.crt error adding mozilla/Verisign_RSA_Secure_Server_CA.crt error adding mozilla/Verisign_Time_Stamping_Authority_CA.crt error adding mozilla/Visa_International_Global_Root_2.crt error adding mozilla/Visa_eCommerce_Root.crt error adding mozilla/WellsSecure_Public_Root_Certificate_Authority.crt error adding mozilla/Wells_Fargo_Root_CA.crt error adding mozilla/XRamp_Global_CA_Root.crt error adding mozilla/beTRUSTed_Root_CA-Baltimore_Implementation.crt error adding mozilla/beTRUSTed_Root_CA.crt error adding mozilla/beTRUSTed_Root_CA_-_Entrust_Implementation.crt error adding mozilla/beTRUSTed_Root_CA_-_RSA_Implementation.crt error adding mozilla/thawte_Primary_Root_CA.crt error adding signet.pl/signet_ca1_pem.crt error adding signet.pl/signet_ca2_pem.crt error adding signet.pl/signet_ca3_pem.crt error adding signet.pl/signet_ocspklasa2_pem.crt error adding signet.pl/signet_ocspklasa3_pem.crt error adding signet.pl/signet_pca2_pem.crt error adding signet.pl/signet_pca3_pem.crt error adding signet.pl/signet_rootca_pem.crt error adding signet.pl/signet_tsa1_pem.crt error adding spi-inc.org/spi-ca-2003.crt error adding spi-inc.org/spi-cacert-2008.crt error adding telesec.de/deutsche-telekom-root-ca-2.crt failed (VM used: java-6-openjdk). dpkg: error processing ca-certificates-java (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: ca-certificates-java E: Sub-process /usr/bin/dpkg returned an error code (1)
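    For what it’s worth, the “script calling Java with -Xms and -Xmx settings” mentioned in the question would look roughly like the sketch below; the JVM path and heap sizes are assumptions rather than values from the post:

    #!/bin/sh
    # Assumes the real binary was first renamed to java.real so this wrapper can take its place;
    # every caller (including the dpkg/apt certificate hooks) then gets a capped heap.
    exec /usr/lib/jvm/java-6-openjdk/jre/bin/java.real -Xms16m -Xmx64m "$@"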

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'. I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;) This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files. Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.) Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM): cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn" C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt   I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)  
Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that. Sometimes though, having these files available can lead to some incredible learning experiences.  For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated. Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which files it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! - and am curious about the PDW compilation and Cloud DB replication reasons. One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found: hekaton_slow_param_passing Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path The reason why Hekaton parameter passing code took the slow code path hekaton_slow_param_pass_reason sp_deploy_hekaton_database sp_undeploy_hekaton_database sp_drop_hekaton_database sp_checkpoint_hekaton_database sp_restore_hekaton_database e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp Interesting!  The first 4 entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)? 
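    As a concrete illustration of the grep/find/findstr step mentioned above, searching the text files produced by the earlier strings run might look like this (the search term is whatever you happen to be hunting for; /i ignores case and /n prints line numbers):

    C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> findstr /i /n "hekaton" sqldll.txt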
I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions: SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%' SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%' Which revealed: name ------------------------ (0 row(s) affected) name ------------------------ sp_createstats sp_recompile sp_updatestats (3 row(s) affected)   Hmm.  Well that didn't find much.  Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.) OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton": sp_createstats: -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables   OK, that's interesting, let's go looking down a little further: ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table   Wellllll, that tells us a few new things: There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!) They are not standard user tables and probably not in memory UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post. The OBJECTPROPERTY function has an undocumented TableIsInMemory option Let's check out sp_recompile: -- (3) Must not be a Hekaton procedure.   And once again go a little further: if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND ObjectProperty(@objid, 'IsInlineFunction') = 0 AND ObjectProperty(@objid, 'IsView') = 0 AND -- Hekaton procedure cannot be recompiled -- Make them go through schema version bumping branch, which will fail ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)   And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find this option returns null, I think it only works when called from a system object.) This is neat! Sadly sp_updatestats doesn't reveal anything new, the comments about hekaton are the same as sp_createstats.  But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for: SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'   I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice! Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.  
While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!)  And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring! So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.) Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that, it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there: Attach Matrix Database DM_MATRIX_COMM_PIPELINES MATRIXXACTPARTICIPANTS dm_matrix_agents   Alright, enough of this paranoid ranting!  Microsoft are not really evil!  It's not like they're The Borg from Star Trek: ALTER FEDERATION DROP ALTER FEDERATION SPLIT DROP FEDERATION   #tsql2sday

    Read the article

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with Object Databases for a while now, and today I have done just that.  One of the obvious choices I had to make was which one to use.  My criterion for choosing one today was simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET.  My decision was between two: db4o and MongoDb. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest.  I am making a Blogging application and front end as a project with which I can test and learn these object databases.  Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment where I cannot install, run and maintain a server instance of said object database.  I can do exactly this with db4o. I have not tried to do this with MongoDb at time of writing.  There are quite a few in the industry now and you can read an interesting post about different ones and how they are used with some of the heavy weights in the industry here: http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010 In the example which I am building I am using StructureMap as my IOC.  To inject the object for db4o I went with a Singleton instance scope as I am using a single file and I need this to be available to any thread in the process as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively.  Again I want to point out that I have chosen to stick with the non-server implementation of db4o as I wanted to use this in a shared hosting environment where I cannot have such servers installed and run.

    public static class Bootstrapper
    {
        public static void ConfigureStructureMap()
        {
            ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));
        }
    }

    public class MyApplicationRegistry : Registry
    {
        public const string DB4O_FILENAME = "blog123";

        public string DbPath
        {
            get
            {
                return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);
            }
        }

        public MyApplicationRegistry()
        {
            For<IObjectContainer>().Singleton().Use(
                () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));

            Scan(assemblyScanner =>
            {
                assemblyScanner.TheCallingAssembly();
                assemblyScanner.WithDefaultConventions();
            });
        }
    }

    So my code above is the StructureMap plumbing which I use for the application.  I am doing this simply as a quick scratch pad to play around with different things so I am simply segregating logical layers with folder structure as opposed to different assemblies.  It will be easy if I want to do this with any segment but for the purposes of example I have literally just whacked everything into the one assembly.  You can see an example file structure I have on the right.
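    As a quick usage sketch of the plumbing above (the call sites here are my own illustration rather than code from the post; it assumes the Post class shown in the test code below and StructureMap’s ObjectFactory API):

    // Run once at application start-up, then resolve the shared container wherever it is needed.
    Bootstrapper.ConfigureStructureMap();
    var db = ObjectFactory.GetInstance<IObjectContainer>();   // singleton scope: every caller shares the same file-backed container
    db.Store(new Post { Title = "Hello db4o", Content = "First post" });   // persist an object
    db.Commit();                                              // flush the transaction to the database file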
    I am planning on testing out a few implementations of the object databases out there so I can program to an interface, IBlogRepository. One of the things which I was unsure about was how it performed in a multi-threaded environment, which is how it will undoubtedly be used 9 times out of 10, and because I am using the db context as a singleton, I assumed that the library was of course thread safe, but I did not know, as I have not read anywhere in the documentation; again, this is probably me not reading things correctly.  In short though I threw together a simple test where I simply iterate to a limit, each time kicking a common task off with a thread from a thread pool.  This task simply created a random Post and added it to the storage. The execution of the threads I put inside the Setup of the Test and then simply ensure the number of posts committed to the database is equal to the number of iterations I made; here is the code I used to do the multi-thread jobs:

    [TestInitialize]
    public void Setup()
    {
        var sw = new System.Diagnostics.Stopwatch();
        sw.Start();
        var resetEvent = new ManualResetEvent(false);
        ThreadPool.SetMaxThreads(20, 20);
        for (var i = 0; i < MAX_ITERATIONS; i++)
        {
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                var eventToReset = (ManualResetEvent)state;
                var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" };
                Repository.Put(post);
                var counter = Interlocked.Decrement(ref _threadCounter);
                if (counter == 0)
                    eventToReset.Set();
            }, resetEvent);
        }
        WaitHandle.WaitAll(new[] { resetEvent });
        sw.Stop();
        Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds);
    }

    I was not doing this to test out the speed performance of db4o, but while I was doing this I could not help but put in a StopWatch and see out of sheer interest how fast it could insert a number of Posts.  I tested it out in this case with 10000 inserts of a small, simple POCO and it resulted in an average of 899.36 object inserts / second.  Again this is just a simple, crude test which came out of my curiosity at how it performed under many threads when using the non-server implementation of db4o. The spec summary of the computer I used is as follows: With regards to the actual Repository implementation itself, it really is quite straightforward and I have to say I am very surprised at how easy it was to integrate and get up and running.  One thing I have noticed in the exposure I have had so far is that the Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth and this could be there already, and if not they have provided everything one needs to make their own repository.  An example of a couple of methods from my db4o implementation of the BlogRepository is below:

    public class BlogRepository : IBlogRepository
    {
        private readonly IObjectContainer _db;

        public BlogRepository(IObjectContainer db)
        {
            _db = db;
        }

        public void Put(DomainObject obj)
        {
            _db.Store(obj);
        }

        public void Delete(DomainObject obj)
        {
            _db.Delete(obj);
        }

        public Post GetByKey(object key)
        {
            return _db.Query<Post>(post => post.Key == key).FirstOrDefault();
        }
        …

    Anyways I hope to get a few more implementations going of the object databases and literally just get familiarized with them and the concept of NoSQL databases. Cheers for now, Andrew

    Read the article

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
    Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc. The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining its place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that Access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the Access database and re-writing it as an actual web or Windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much.
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right-end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right-end of the SEMS, but more importantly be able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a projects code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work in order of increasing fidelity: Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on. Chat & Instant Messaging  - Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there are an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team. When a conversation gets escalated that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner than later. 
    Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move. Working Asynchronously Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase, “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as being “asynchronous” modes of communication while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons: 1 – You can often find a higher fidelity way to convey your message without holding a synchronous conversation 2 - Asynchronous modes of communication are (usually) persistent and searchable. You Don’t Have to Broadcast Live Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while you narrate over it and posting the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later can be a great way to effectively communicate with your team asynchronously. I Said What?!? Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided. 
    Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful. When and How To Escalate I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication: Take notes – This is huge and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with  > 2 people you should have someone taking notes. Taking notes while participating in a meeting can be difficult but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them. Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems always reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way when the folks from your QA team pick up the story to test a few days later they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work. Communicating Well is Hard At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea. 
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • Caching NHibernate Named Queries

    - by TStewartDev
    I recently started a new job and one of my first tasks was to implement a "popular products" design. The parameters were that it be done with NHibernate and be cached for 24 hours at a time because the query will be pretty taxing and the results do not need to be constantly up to date. This ended up being tougher than it sounds. The database schema meant a minimum of four joins with filtering and ordering criteria. I decided to use a stored procedure rather than letting NHibernate create the SQL for me. Here is a summary of what I learned (even if I didn't ultimately use all of it): You can't, at the time of this writing, use Fluent NHibernate to configure SQL named queries or imports You can return persistent entities from a stored procedure and there are a couple ways to do that You can populate POCOs using the results of a stored procedure, but it isn't quite as obvious You can reuse your named query result mapping other places (avoid duplication) Caching your query results is not at all obvious Testing to see if your cache is working is a pain NHibernate does a lot of things right. Having unified, up-to-date, comprehensive, and easy-to-find documentation is not one of them. By the way, if you're new to this, I'll use the terms "named query" and "stored procedure" (from NHibernate's perspective) fairly interchangeably. Technically, a named query can execute any SQL, not just a stored procedure, and a stored procedure doesn't have to be executed from a named query, but for reusability, it seems to me like the best practice. If you're here, chances are good you're looking for answers to a similar problem. You don't want to read about the path, you just want the result. So, here's how to get this thing going. The Stored Procedure NHibernate has some guidelines when using stored procedures. For Microsoft SQL Server, you have to return a result set. The scalar value that the stored procedure returns is ignored as are any result sets after the first. Other than that, it's nothing special. CREATE PROCEDURE GetPopularProducts @StartDate DATETIME, @MaxResults INT AS BEGIN SELECT [ProductId], [ProductName], [ImageUrl] FROM SomeTableWithJoinsEtc END The Result Class - PopularProduct You have two options to transport your query results to your view (or wherever is the final destination): you can populate an existing mapped entity class in your model, or you can create a new entity class. If you go with the existing model, the advantage is that the query will act as a loader and you'll get full proxied access to the domain model. However, this can be a disadvantage if you require access to the related entities that aren't loaded by your results. For example, my PopularProduct has image references. Unless I tie them into the query (thus making it even more complicated and expensive to run), they'll have to be loaded on access, requiring more trips to the database. Since we're trying to avoid trips to the database by using a second-level cache, we should use the second option, which is to create a separate entity for results. This approach is (I believe) in the spirit of the Command-Query Separation principle, and it allows us to flatten our data and optimize our report-generation process from data source to view. public class PopularProduct { public virtual int ProductId { get; set; } public virtual string ProductName { get; set; } public virtual string ImageUrl { get; set; } } The NHibernate Mappings (hbm) Next up, we need to let NHibernate know about the query and where the results will go. 
Below is the markup for the PopularProduct class. Notice that I'm using the <resultset> element and that it has a name attribute. The name allows us to drop this into our query map and any others, giving us reusability. Also notice the <import> element which lets NHibernate know about our entity class. <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <import class="PopularProduct, Infrastructure.NHibernate, Version=1.0.0.0"/> <resultset name="PopularProductResultSet"> <return-scalar column="ProductId" type="System.Int32"/> <return-scalar column="ProductName" type="System.String"/> <return-scalar column="ImageUrl" type="System.String"/> </resultset> </hibernate-mapping>  And now the PopularProductsMap: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <sql-query name="GetPopularProducts" resultset-ref="PopularProductResultSet" cacheable="true" cache-mode="normal"> <query-param name="StartDate" type="System.DateTime" /> <query-param name="MaxResults" type="System.Int32" /> exec GetPopularProducts @StartDate = :StartDate, @MaxResults = :MaxResults </sql-query> </hibernate-mapping>  The two most important things to notice here are the resultset-ref attribute, which links in our resultset mapping, and the cacheable attribute. The Query Class – PopularProductsQuery So far, this has been fairly obvious if you're familiar with NHibernate. This next part, maybe not so much. You can implement your query however you want to; for me, I wanted a self-encapsulated Query class, so here's what it looks like: public class PopularProductsQuery : IPopularProductsQuery { private static readonly IResultTransformer ResultTransformer; private readonly ISessionBuilder _sessionBuilder;   static PopularProductsQuery() { ResultTransformer = Transformers.AliasToBean<PopularProduct>(); }   public PopularProductsQuery(ISessionBuilder sessionBuilder) { _sessionBuilder = sessionBuilder; }   public IList<PopularProduct> GetPopularProducts(DateTime startDate, int maxResults) { var session = _sessionBuilder.GetSession(); var popularProducts = session .GetNamedQuery("GetPopularProducts") .SetCacheable(true) .SetCacheRegion("PopularProductsCacheRegion") .SetCacheMode(CacheMode.Normal) .SetReadOnly(true) .SetResultTransformer(ResultTransformer) .SetParameter("StartDate", startDate.Date) .SetParameter("MaxResults", maxResults) .List<PopularProduct>();   return popularProducts; } }  Okay, so let's look at each line of the query execution. The first, GetNamedQuery, matches up with our NHibernate mapping for the sql-query. Next, we set it as cacheable (this is probably redundant since our mapping also specified it, but it can't hurt, right?). Then we set the cache region which we'll get to in the next section. Set the cache mode (optional, I believe), and my cache is read-only, so I set that as well. The result transformer is very important. This tells NHibernate how to transform your query results into a non-persistent entity. You can see I've defined ResultTransformer in the static constructor using the AliasToBean transformer. The name is obviously leftover from Java/Hibernate. Finally, set your parameters and then call a result method which will execute the query. Because this is set to cached, you execute this statement every time you run the query and NHibernate will know based on your parameters whether to use its cached version or a fresh version. 
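    Before moving on to configuration, here is one possible way the query class could be consumed. This is only a sketch: the controller, the IoC-injected dependency, and the parameter values are illustrative assumptions and not part of the original design.
    using System;
    using System.Web.Mvc;

    // Hypothetical consumer of PopularProductsQuery. IPopularProductsQuery would
    // normally be wired up by whatever IoC container the application uses.
    public class PopularProductsController : Controller
    {
        private readonly IPopularProductsQuery _popularProductsQuery;

        public PopularProductsController(IPopularProductsQuery popularProductsQuery)
        {
            _popularProductsQuery = popularProductsQuery;
        }

        public ActionResult Index()
        {
            // Identical parameters map to the same cache entry, so repeated requests
            // within the cache window never touch the database.
            var products = _popularProductsQuery.GetPopularProducts(DateTime.Today.AddDays(-30), 10);
            return View(products);
        }
    }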
The Configuration – hibernate.cfg.xml and Web.config You need to explicitly enable second-level caching in your hibernate configuration: <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2"> <session-factory> [...] <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property> <property name="cache.provider_class">NHibernate.Caches.SysCache.SysCacheProvider,NHibernate.Caches.SysCache</property> <property name="cache.use_query_cache">true</property> <property name="cache.use_second_level_cache">true</property> [...] </session-factory> </hibernate-configuration> Both properties "use_query_cache" and "use_second_level_cache" are necessary. As this is for a web deployement, we're using SysCache which relies on ASP.NET's caching. Be aware of this if you're not deploying to the web! You'll have to use a different cache provider. We also need to tell our cache provider (in this cache, SysCache) about our caching region: <syscache> <cache region="PopularProductsCacheRegion" expiration="86400" priority="5" /> </syscache> Here I've set the cache to be valid for 24 hours. This XML snippet goes in your Web.config (or in a separate file referenced by Web.config, which helps keep things tidy). The Payoff That should be it! At this point, your queries should run once against the database for a given set of parameters and then use the cache thereafter until it expires. You can, of course, adjust settings to work in your particular environment. Testing Testing your application to ensure it is using the cache is a pain, but if you're like me, you want to know that it's actually working. It's a bit involved, though, so I'll create a separate post for it if comments indicate there is interest.
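    In the meantime, one way to get some signal that the cache is doing its job is to watch NHibernate's statistics. The sketch below assumes NUnit and NHibernate's ISessionFactory.Statistics API; BuildSessionFactory() and SessionBuilder are hypothetical placeholders for however your project bootstraps NHibernate and implements ISessionBuilder, so treat this as an outline rather than drop-in code.
    using System;
    using NHibernate;
    using NUnit.Framework;

    [TestFixture]
    public class PopularProductsQueryCacheTests
    {
        [Test]
        public void SecondCallWithSameParameters_IsServedFromQueryCache()
        {
            // BuildSessionFactory() is a placeholder for your own configuration code.
            ISessionFactory sessionFactory = BuildSessionFactory();
            sessionFactory.Statistics.IsStatisticsEnabled = true;
            sessionFactory.Statistics.Clear();

            // SessionBuilder: a hypothetical ISessionBuilder implementation wrapping the factory.
            var query = new PopularProductsQuery(new SessionBuilder(sessionFactory));
            var startDate = DateTime.Today.AddDays(-30);

            // First run: expect a query cache miss and a trip to the database.
            query.GetPopularProducts(startDate, 10);
            long missesAfterFirstRun = sessionFactory.Statistics.QueryCacheMissCount;

            // Second run with identical parameters: expect a hit and no additional miss.
            query.GetPopularProducts(startDate, 10);

            Assert.AreEqual(missesAfterFirstRun, sessionFactory.Statistics.QueryCacheMissCount);
            Assert.That(sessionFactory.Statistics.QueryCacheHitCount, Is.GreaterThan(0));
        }

        private static ISessionFactory BuildSessionFactory()
        {
            // Placeholder: return the session factory your application already configures.
            throw new NotImplementedException();
        }
    }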

    Read the article

  • Ogre 3d and bullet physics interaction

    - by Tim
    I have been playing around with Ogre3d and trying to integrate bullet physics. I have previously somewhat successfully got this functionality working with irrlicht and bullet and I am trying to base this on what I had done there, but modifying it to fit with Ogre. It is working but not correctly and I would like some help to understand what it is I am doing wrong. I have a state system and when I enter the "gamestate" I call some functions such as setting up a basic scene, creating the physics simulation. I am doing that as follows. void GameState::enter() { ... // Setup Physics btBroadphaseInterface *BroadPhase = new btAxisSweep3(btVector3(-1000,-1000,-1000), btVector3(1000,1000,1000)); btDefaultCollisionConfiguration *CollisionConfiguration = new btDefaultCollisionConfiguration(); btCollisionDispatcher *Dispatcher = new btCollisionDispatcher(CollisionConfiguration); btSequentialImpulseConstraintSolver *Solver = new btSequentialImpulseConstraintSolver(); World = new btDiscreteDynamicsWorld(Dispatcher, BroadPhase, Solver, CollisionConfiguration); ... createScene(); } In the createScene method I add a light and try to setup a "ground" plane to act as the ground for things to collide with.. as follows. I expect there is issues with this as I get objects colliding with the ground but half way through it and they glitch around like crazy on collision. void GameState::createScene() { m_pSceneMgr->createLight("Light")->setPosition(75,75,75); // Physics // As a test we want a floor plane for things to collide with Ogre::Entity *ent; Ogre::Plane p; p.normal = Ogre::Vector3(0,1,0); p.d = 0; Ogre::MeshManager::getSingleton().createPlane( "FloorPlane", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, p, 200000, 200000, 20, 20, true, 1, 9000,9000,Ogre::Vector3::UNIT_Z); ent = m_pSceneMgr->createEntity("floor", "FloorPlane"); ent->setMaterialName("Test/Floor"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(ent); btTransform Transform; Transform.setIdentity(); Transform.setOrigin(btVector3(0,1,0)); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); btCollisionShape *Shape = new btStaticPlaneShape(btVector3(0,1,0),0); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(0, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(0, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; // End Physics } I then have a method to create a cube and give it rigid body physics properties. I know there will be errors here as I get the items colliding with the ground but not with each other properly. So I would appreciate some input on what I am doing wrong. 
void GameState::CreateBox(const btVector3 &TPosition, const btVector3 &TScale, btScalar TMass) { Ogre::Vector3 size = Ogre::Vector3::ZERO; Ogre::Vector3 pos = Ogre::Vector3::ZERO; Ogre::Vector3 scale = Ogre::Vector3::ZERO; pos.x = TPosition.getX(); pos.y = TPosition.getY(); pos.z = TPosition.getZ(); scale.x = TScale.getX(); scale.y = TScale.getY(); scale.z = TScale.getZ(); Ogre::Entity *entity = m_pSceneMgr->createEntity( "Box" + Ogre::StringConverter::toString(m_pNumEntities), "cube.mesh"); entity->setCastShadows(true); Ogre::AxisAlignedBox boundingB = entity->getBoundingBox(); size = boundingB.getSize(); //size /= 2.0f; // Only the half needed? //size *= 0.96f; // Bullet margin is a bit bigger so we need a smaller size entity->setMaterialName("Test/Cube"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(entity); node->setPosition(pos); //node->scale(scale); // Physics btTransform Transform; Transform.setIdentity(); Transform.setOrigin(TPosition); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); btVector3 HalfExtents(TScale.getX()*0.5f,TScale.getY()*0.5f,TScale.getZ()*0.5f); btCollisionShape *Shape = new btBoxShape(HalfExtents); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(TMass, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(TMass, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; } Then in the GameState::update() method which which runs every frame to handle input and render etc I call an UpdatePhysics method to update the physics simulation. void GameState::UpdatePhysics(unsigned int TDeltaTime) { World->stepSimulation(TDeltaTime * 0.001f, 60); btRigidBody *TObject; for(std::vector<btRigidBody *>::iterator it = Objects.begin(); it != Objects.end(); ++it) { // Update renderer Ogre::SceneNode *node = static_cast<Ogre::SceneNode *>((*it)->getUserPointer()); TObject = *it; // Set position btVector3 Point = TObject->getCenterOfMassPosition(); node->setPosition(Ogre::Vector3((float)Point[0], (float)Point[1], (float)Point[2])); // set rotation btVector3 EulerRotation; QuaternionToEuler(TObject->getOrientation(), EulerRotation); node->setOrientation(1,(Ogre::Real)EulerRotation[0], (Ogre::Real)EulerRotation[1], (Ogre::Real)EulerRotation[2]); //node->rotate(Ogre::Vector3(EulerRotation[0], EulerRotation[1], EulerRotation[2])); } } void GameState::QuaternionToEuler(const btQuaternion &TQuat, btVector3 &TEuler) { btScalar W = TQuat.getW(); btScalar X = TQuat.getX(); btScalar Y = TQuat.getY(); btScalar Z = TQuat.getZ(); float WSquared = W * W; float XSquared = X * X; float YSquared = Y * Y; float ZSquared = Z * Z; TEuler.setX(atan2f(2.0f * (Y * Z + X * W), -XSquared - YSquared + ZSquared + WSquared)); TEuler.setY(asinf(-2.0f * (X * Z - Y * W))); TEuler.setZ(atan2f(2.0f * (X * Y + Z * W), XSquared - YSquared - ZSquared + WSquared)); TEuler *= RADTODEG; } I seem to have issues with the cubes not colliding with each other and colliding strangely with the ground. I have tried to capture the effect with the attached image. I would appreciate any help in understanding what I have done wrong. Thanks. EDIT : Solution The following code shows the changes I made to get accurate physics. 
void GameState::createScene() { m_pSceneMgr->createLight("Light")->setPosition(75,75,75); // Physics // As a test we want a floor plane for things to collide with Ogre::Entity *ent; Ogre::Plane p; p.normal = Ogre::Vector3(0,1,0); p.d = 0; Ogre::MeshManager::getSingleton().createPlane( "FloorPlane", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, p, 200000, 200000, 20, 20, true, 1, 9000,9000,Ogre::Vector3::UNIT_Z); ent = m_pSceneMgr->createEntity("floor", "FloorPlane"); ent->setMaterialName("Test/Floor"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(ent); btTransform Transform; Transform.setIdentity(); // Fixed the transform vector here for y back to 0 to stop the objects sinking into the ground. Transform.setOrigin(btVector3(0,0,0)); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); btCollisionShape *Shape = new btStaticPlaneShape(btVector3(0,1,0),0); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(0, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(0, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; // End Physics } void GameState::CreateBox(const btVector3 &TPosition, const btVector3 &TScale, btScalar TMass) { Ogre::Vector3 size = Ogre::Vector3::ZERO; Ogre::Vector3 pos = Ogre::Vector3::ZERO; Ogre::Vector3 scale = Ogre::Vector3::ZERO; pos.x = TPosition.getX(); pos.y = TPosition.getY(); pos.z = TPosition.getZ(); scale.x = TScale.getX(); scale.y = TScale.getY(); scale.z = TScale.getZ(); Ogre::Entity *entity = m_pSceneMgr->createEntity( "Box" + Ogre::StringConverter::toString(m_pNumEntities), "cube.mesh"); entity->setCastShadows(true); Ogre::AxisAlignedBox boundingB = entity->getBoundingBox(); // The ogre bounding box is slightly bigger so I am reducing it for // use with the rigid body. size = boundingB.getSize()*0.95f; entity->setMaterialName("Test/Cube"); Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); node->attachObject(entity); node->setPosition(pos); node->showBoundingBox(true); //node->scale(scale); // Physics btTransform Transform; Transform.setIdentity(); Transform.setOrigin(TPosition); // Give it to the motion state btDefaultMotionState *MotionState = new btDefaultMotionState(Transform); // I got the size of the bounding box above but wasn't using it to set // the size for the rigid body. This now does. 
btVector3 HalfExtents(size.x*0.5f,size.y*0.5f,size.z*0.5f); btCollisionShape *Shape = new btBoxShape(HalfExtents); // Add Mass btVector3 LocalInertia; Shape->calculateLocalInertia(TMass, LocalInertia); // CReate the rigid body object btRigidBody *RigidBody = new btRigidBody(TMass, MotionState, Shape, LocalInertia); // Store a pointer to the Ogre Node so we can update it later RigidBody->setUserPointer((void *) (node)); // Add it to the physics world World->addRigidBody(RigidBody); Objects.push_back(RigidBody); m_pNumEntities++; } void GameState::UpdatePhysics(unsigned int TDeltaTime) { World->stepSimulation(TDeltaTime * 0.001f, 60); btRigidBody *TObject; for(std::vector<btRigidBody *>::iterator it = Objects.begin(); it != Objects.end(); ++it) { // Update renderer Ogre::SceneNode *node = static_cast<Ogre::SceneNode *>((*it)->getUserPointer()); TObject = *it; // Set position btVector3 Point = TObject->getCenterOfMassPosition(); node->setPosition(Ogre::Vector3((float)Point[0], (float)Point[1], (float)Point[2])); // Convert the bullet Quaternion to an Ogre quaternion btQuaternion btq = TObject->getOrientation(); Ogre::Quaternion quart = Ogre::Quaternion(btq.w(),btq.x(),btq.y(),btq.z()); // use the quaternion with setOrientation node->setOrientation(quart); } } The QuaternionToEuler function isn't needed so that was removed from code and header files. The objects now collide with the ground and each other appropriately.

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx I have had an idea percolating in the back of my mind for over a year now that I’ve just recently started to implement. This idea relates to building out “internal tools” to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don’t want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc). When we try to build new tools like this we often struggle with the level of effort required to build them. Effort Required Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don’t need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don’t even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we’ve written to accomplish a given task. All we really need is a simple console application! Plumbing Problems A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John’s idea has a lot of merit, and we tried building out some internal tools as simple Console applications. Unfortunately, this was often easier said than done. Doing a “File –> New Project” to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of “plumbing” built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, it managed all of the context that needs to follow a user around the application such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case because this is an ASP.NET application) is large and would need to be reproduced into a similar configuration file for a Console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building Console applications for internal tools still potentially suffers from one pretty big drawback: you’d have to execute them on a machine with network access to all of the needed resources. 
Obviously, our web servers can easily communicate the the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that’s a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible. Mix and Match So we need a way to build tools that are easily accessible via the web application but also don’t require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it’s part of the web application we get all of the plumbing that comes along with that code, and we’re executing everything on the web servers which means we’ll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I’m not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I’m only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the “back end” to a highly polished and eye-pleasing public face. Implementation As I mentioned at the beginning of this post, this is an idea that I’ve had for awhile but have only recently started building out. I’ve outlined some general guidelines and design goals for this effort as follows: Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be “wrapped” within the context of HTTP requests and responses, but I don’t want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user. Task-oriented code only: After building the initial “harness” for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible. Built-in documentation: One of the great things about most command line utilities is the ‘help’ switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves. I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line. 
Here’s a quick example of the code that would be needed to create a new tool to do something within your system: 1: public class CustomerTools 2: { 3: [Verb] 4: public void UpdateName(int customerId, string firstName, string lastName) 5: { 6: //invoke internal services/domain objects/hwatever to perform update 7: } 8: } This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the ‘Verb’ attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code: Parser.Run(args, new CustomerTools()); Note that ‘args’ is just a string[] that would normally be passed passed in from the static Main method of a Console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this: SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber ‘SomeExe’ in this example just represents the name of .exe that is would be created from our Console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I’ve found that invoking the ‘Parser’ class can be done from within the context of a web application just as easily as it can from within the ‘Main’ method entry point of a Console application. There are, however, a few sticking points that I’m working around: Splitting arguments into the ‘args’ array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the ‘args’ parameter referenced above). Generally speaking they get split by whitespace, but it’s also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We’ll need to re-create this logic within our web application so that we can give the ‘args’ value to CLAP just like a console application would. Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can’t use Console.WriteLine within a web application, so I’ll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a Console application and a web application, so some kind of strategy pattern will likely emerge from this effort. Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort. Mimicking the console experience: This isn’t really a requirement so much as a “nice to have”. To start out, the command-line interface in the web application will probably be a single ‘textarea’ control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a “shell” interface look and feel. 
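    As a rough illustration of the first two sticking points, here is one possible shape for the web-side harness: a tokenizer that splits the submitted text the way a shell would (whitespace-delimited, with double quotes keeping a phrase together) and a temporary Console redirect so handler code that uses Console.WriteLine still produces text we can hand back in the HTTP response. This is a sketch under assumptions, not the author's implementation: the CLAP namespace is assumed, CustomerTools is the hypothetical verb class from above, and the Console redirect is process-wide, which is exactly why a strategy-pattern writer is the better long-term answer.
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text;
    using CLAP; // assumed namespace for the CLAP library

    public static class CommandLineHarness
    {
        // Takes the raw text submitted from the web page, runs it through CLAP,
        // and returns whatever the handler wrote to Console.Out as plain text.
        public static string Execute(string commandText)
        {
            string[] args = SplitArgs(commandText);

            var output = new StringWriter();
            TextWriter originalOut = Console.Out;
            try
            {
                // Process-wide redirect; fine for a sketch, not for concurrent requests.
                Console.SetOut(output);
                Parser.Run(args, new CustomerTools()); // CLAP dispatches to the matching [Verb] method
            }
            finally
            {
                Console.SetOut(originalOut);
            }
            return output.ToString();
        }

        // Mimics command-line splitting: whitespace separates arguments unless
        // the whitespace appears inside double quotes.
        private static string[] SplitArgs(string commandText)
        {
            var args = new List<string>();
            var current = new StringBuilder();
            bool inQuotes = false;

            foreach (char c in commandText)
            {
                if (c == '"') { inQuotes = !inQuotes; continue; }
                if (char.IsWhiteSpace(c) && !inQuotes)
                {
                    if (current.Length > 0) { args.Add(current.ToString()); current.Clear(); }
                    continue;
                }
                current.Append(c);
            }

            if (current.Length > 0) args.Add(current.ToString());
            return args.ToArray();
        }
    }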
I’ll be blogging more about this effort in the future and will include some code snippets (or maybe even a full blown example app) as I progress. I also think that I’ll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site. Hive Actions: Prepping for Pig In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in a handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie. I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me flexibility of Hive's query language when I want it, but let's me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code. CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2) PARTITIONED BY (yr string) STORED AS ... LOCATION '/user/oracle/weather/historic'; As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access. ALTER TABLE historic_weather ADD IF NOT EXISTS PARTITION (yr='2010') LOCATION '/user/oracle/weather/historic/yr=2011'; INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history' SELECT w.stn, w.wban, w.weather_year, w.weather_month, w.weather_day, w.temp, w.dewp, w.weather FROM ( FROM historic_weather SELECT TRANSFORM(...) USING '/path/to/hive/filters/ncdc_parser.py' as stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather ) w; Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql. Starting Our Workflow Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. 
Coordinator jobs can take all the same actions of Workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point: <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1"> <start to="ParseNCDCData"/> <end name="end"/> </workflow-app> To this we need to add an action, and within that we'll specify the hive parameters Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure. <action name="ParseNCDCData"> <hive xmlns="uri:oozie:hive-action:0.2"> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <configuration> <property> <name>oozie.hive.defaults</name> <value>/user/oracle/weather_ooze/hive-default.xml</value> </property> </configuration> <script>ncdc_parse.hql</script> </hive> <ok to="WeatherMan"/> <error to="end"/> </action> There are a couple of things to note here: I have to give the FQDN (or IP) and port of my JobTracker and NameNode. I have to include a hive-default.xml file. I have to include a script file. The hive-default.xml and script file must be stored in HDFS That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain: workflow.xml hive-defaults.xml (make sure this file contains your metastore connection data) ncdc_parse.hql Adding Pig to the Ooze Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows: <action name="WeatherMan"> <pig> <job-tracker>localhost:8021</job-tracker> <name-node>localhost:8020</name-node> <script>weather_train.pig</script> </pig> <ok to="end"/> <error to="end"/> </action> Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My pig script registers the Weka Jar and a chunk of jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the pig script, because pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding Jars to the distributed cache from Oozie's Pig Cookbook. Making the Workflow Work We've got a workflow defined and have collected all the components we'll need to run. 
But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows: nameNode=hdfs://localhost:8020 jobTracker=localhost:8021 queueName=default weatherRoot=weather_ooze mapreduce.jobtracker.kerberos.principal=foo dfs.namenode.kerberos.principal=foo oozie.libpath=${nameNode}/user/oozie/share/lib oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot} outputDir=weather-ooze While some of the pieces of the properties file are familiar (e.g., JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath pieces is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of Jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory. We're finally ready to submit our job! After all that work we only need to do a few more things: Validate our workflow.xml Copy our working directory to HDFS Submit our job to the Oozie server Run our workflow Let's do them in order. First validate the workflow: oozie validate workflow.xml Next, copy the working directory up to HDFS: hadoop fs -put working_dir /user/oracle/working_dir Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument. oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output: 14-20120525161321-oozie-oracle This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job. oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately. Takeaway So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • The Red Gate and .NET Reflector Debacle

    - by Rick Strahl
    About a month ago Red Gate – the company that owns the .NET Reflector tool most .NET devs use at one point or another – decided to change their business model for Reflector and take the product from free to a fully paid-for license model. As a bit of history: .NET Reflector was originally created by Lutz Roeder as a free community tool to inspect .NET assemblies. Using Reflector you can examine the types in an assembly, drill into type signatures and quickly disassemble code to see how a particular method works. In case you’ve been living under a rock and you’ve never looked at Reflector, here’s what it looks like drilled into an assembly from disk with some disassembled source code showing: Note that you get tons of information about each element in the tree, and almost all related types and members are clickable both in the list and source view so it’s extremely easy to navigate and follow the code flow even in this static, assembly-only view. For many years Lutz kept the tool up to date and gradually added more features, improving an already amazing tool and making it better. Then about two and a half years ago Red Gate bought the tool from Lutz. A lot of ruckus and noise ensued in the community back then about what would happen with the tool and… for the most part very little did. Other than the incessant update notices with prominent Red Gate promo on them, life with Reflector went on. The product didn’t die, and it didn’t go commercial or move to a charge model. When .NET 4.0 came out it still continued to work, mostly because the .NET feature set doesn’t drastically change how types behave. Then a month back Red Gate started making noise about a new version, Version 7, which would be commercial. No more free version - and a shit storm broke out in the community. Now normally I’m not one to be critical of companies trying to make money from a product, much less for a product that’s as incredibly useful as Reflector. There isn’t a day in .NET development that goes by for me where I don’t fire up Reflector. Whether it’s for examining the innards of the .NET Framework, checking out third party code, or verifying some of my own code and resources. Even more so recently I’ve been doing a lot of Interop work with a non-.NET application that needs to access .NET components, and Reflector has been immensely valuable to me (and my clients) in figuring out the exact type signatures required for calling .NET components in assemblies. In short Reflector is an invaluable tool to me. Ok, so what’s the problem? Why all the fuss? Certainly the $39 Red Gate is trying to charge isn’t going to kill any developer. If there’s any tool in .NET that’s worth $39 it’s Reflector, right? Right, but that’s not the problem here. The problem is how Red Gate went about moving the product to commercial, which borders on the downright bizarre. It’s almost as if somebody in management wrote a slogan: “How can we piss off the .NET community in the most painful way we can?” And at that, it seems, Red Gate has utterly succeeded. People are rabid, and for once I think that this outrage isn’t exactly misplaced. Take a look at the message thread that Red Gate set up, linked from the download page. Not only is Version 7 going to be a paid commercial tool, but the older versions of Reflector won’t be available any longer. Not only that, but older versions that are already in use will also continually try to update themselves to the new paid version – which, once installed, will expire unless registered properly. 
There have also been reports of Version 6 installs shutting themselves down and failing to work if the update is refused (I haven’t seen that myself so not sure if that’s true). In other words Red Gate is trying to make damn sure they’re getting your money if you attempt to use Reflector. There’s a lot of temptation there. Think about the millions of .NET developers out there and all of them possibly upgrading – that’s a nice chunk of change that Red Gate’s sitting on. Even with all the community backlash these guys are probably making some bank right now just because people need to get life to move on. Red Gate also put up a Feedback link on the download page – which not surprisingly is chock full with hate mail condemning the move. Oddly there’s not a single response to any of those messages by the Red Gate folks except when it concerns license questions for the full version. It puzzles me what that link serves for other yet than another complete example of failure to understand how to handle customer relations. There’s no doubt that that all of this has caused some serious outrage in the community. The sad part though is that this could have been handled so much less arrogantly and without pissing off the entire community and causing so much ill-will. People are pissed off and I have no doubt that this negative publicity will show up in the sales numbers for their other products. I certainly hope so. Stupidity ought to be painful! Why do Companies do boneheaded stuff like this? Red Gate’s original decision to buy Reflector was hotly debated but at that the time most of what would happen was mostly speculation. But I thought it was a smart move for any company that is in need of spreading its marketing message and corporate image as a vendor in the .NET space. Where else do you get to flash your corporate logo to hordes of .NET developers on a regular basis?  Exploiting that marketing with some goodwill of providing a free tool breeds positive feedback that hopefully has a good effect on the company’s visibility and the products it sells. Instead Red Gate seems to have taken exactly the opposite tack of corporate bullying to try to make a quick buck – and in the process ruined any community goodwill that might have come from providing a service community for free while still getting valuable marketing. What’s so puzzling about this boneheaded escapade is that the company doesn’t need to resort to underhanded tactics like what they are trying with Reflector 7. The tools the company makes are very good. I personally use SQL Compare, Sql Data Compare and ANTS Profiler on a regular basis and all of these tools are essential in my toolbox. They certainly work much better than the tools that are in the box with Visual Studio. Chances are that if Reflector 7 added useful features I would have been more than happy to shell out my $39 to upgrade when the time is right. It’s Expensive to give away stuff for Free At the same time, this episode shows some of the big problems that come with ‘free’ tools. A lot of organizations are realizing that giving stuff away for free is actually quite expensive and the pay back is often very intangible if any at all. Those that rely on donations or other voluntary compensation find that they amount contributed is absolutely miniscule as to not matter at all. 
Yet at the same time I bet most of those clamouring the loudest on that Red Gate Reflector feedback page that Reflector won’t be free anymore probably have NEVER made a donation to any open source project or free tool ever. The expectation of Free these days is just too great – which is a shame I think. There’s a lot to be said for paid software and having somebody to hold to responsible to because you gave them some money. There’s an incentive –> payback –> responsibility model that seems to be missing from free software (not all of it, but a lot of it). While there certainly are plenty of bad apples in paid software as well, money tends to be a good motivator for people to continue working and improving products. Reasons for giving away stuff are many but often it’s a naïve desire to share things when things are simple. At first it might be no problem to volunteer time and effort but as products mature the fun goes out of it, and as the reality of product maintenance kicks in developers want to get something back for the time and effort they’re putting in doing non-glamorous work. It’s then when products die or languish and this is painful for all to watch. For Red Gate however, I think there was always a pretty good payback from the Reflector acquisition in terms of marketing: Visibility and possible positioning of their products although they seemed to have mostly ignored that option. On the other hand they started this off pretty badly even 2 and a half years back when they aquired Reflector from Lutz with the same arrogant attitude that is evident in the latest episode. You really gotta wonder what folks are thinking in management – the sad part is from advance emails that were circulating, they were fully aware of the shit storm they were inciting with this and I suspect they are banking on the sheer numbers of .NET developers to still make them a tidy chunk of change from upgrades… Alternatives are coming For me personally the single license isn’t a problem, but I actually have a tool that I sell (an interop Web Service proxy generation tool) to customers and one of the things I recommend to use with has been Reflector to view assembly information and to find which Interop classes to instantiate from the non-.NET environment. It’s been nice to use Reflector for this with its small footprint and zero-configuration installation. But now with V7 becoming a paid tool that option is not going to be available anymore. Luckily it looks like the .NET community is jumping to it and trying to fill the void. Amidst the Red Gate outrage a new library called ILSpy has sprung up and providing at least some of the core functionality of Reflector with an open source library. It looks promising going forward and I suspect there will be a lot more support and interest to support this project now that Reflector has gone over to the ‘dark side’…© Rick Strahl, West Wind Technologies, 2005-2011

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to be able to intercept non-managed - ie. non ASP.NET served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that could be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last tonight I realized that the way that managed and unmanaged requests fire into the pipeline is downright confusing especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and this is topic of this blog post. Native and Managed Modules The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code and there are a host of native mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support only the native modules are supported and fired without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled however things get a little more confusing as there both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while unmanaged modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side and on the same request. When running in IIS 7 the IIS pipeline actually instantiates a the ASP.NET  runtime (via the System.Web.PipelineRuntime class) which unlike the core HttpRuntime classes in ASP.NET receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached and if they're none these notifications don't fire, improving performance. The good news about all of this for .NET devs is that ASP.NET style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application and enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (ie. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content including static content like HTML files, images, javascript and css resources etc. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline if managed modules (or handlers) are attached there are possible performance implications that come with it. Running through the ASP.NET pipeline does add some overhead. 
ASP.NET and Your Own Modules When you create a new ASP.NET project typically the Visual Studio templates create the modules section like this: <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true" > </modules> </system.webServer> Specifically the interesting thing about this is the runAllManagedModulesForAllRequest="true" flag, which seems to indicate that it controls whether any registered modules always run, even when the value is set to false. Realistically though this flag does not control whether managed code is fired for all requests or not. Rather it is an override for the preCondition flag on a particular handler. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests. Now so far so obvious. What's not quite so obvious is what happens when you set the runAllManagedModulesForAllRequest="false". You probably would expect that immediately the non-ASP.NET requests no longer get funnelled through the ASP.NET Module pipeline. But that's not what actually happens. For example, if I create a module like this:<add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" /> by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even if the value runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected. So what is the runAllManagedModulesForAllRequests really good for? It's essentially an override for managedHandler preCondition. If I declare my handler in web.config like this:<add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" /> and the runAllManagedModulesForAllRequests="false" my module only fires against managed requests. If I switch the flag to true, now my module ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting. runAllManagedModulesForAllRequests and Http Application Events Another place the runAllManagedModulesForAllRequest attribute affects is the Global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. So while the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCodition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false". What's all that mean? Configuring an application to handle requests for both ASP.NET and other content requests can be tricky especially if you need to mix modules that might require both. Couple of things are important to remember. If your module doesn't need to look at every request, by all means set a preCondition="managedHandler" on it. 
What does all that mean? Configuring an application to handle requests for both ASP.NET content and other content can be tricky, especially if you need to mix modules that require both. A couple of things are important to remember. If your module doesn't need to look at every request, by all means set a preCondition="managedHandler" on it. This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests.

Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications, as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2.0 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true":

<handlers>
  <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
  <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" />
</handlers>

Oddly, this handler registration is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non-ASP.NET requests from managed code.

As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream, or whether static requests are re-routed through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non-ASP.NET resources - for example, I can't do the same with classic ASP requests - but it makes for an interesting demo when injecting HTML content into a static HTML page :-)

Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a hotfix in order for ExtensionlessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is no longer required. On IIS 7.5 and later I've not needed any patches for things to just work.

Plan for non-ASP.NET Requests

It's important to remember that if you write a .NET module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files and images. Luckily ASP.NET creates a full Request and Response object for you even for non-ASP.NET content. So even for static files, and even for classic ASP for example, you can look at Request.FilePath or Request.ContentType (in post-handler pipeline events) to determine what content you are dealing with.
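Here's a rough sketch of that defensive pattern (the pipeline event chosen and the extension list are illustrative assumptions, not from the original post): inspect what the request targets and exit immediately if the module has nothing to do:

using System;
using System.IO;
using System.Web;

// Sketch of a module that bails out early for static content it doesn't care about.
public class AspNetContentModule : IHttpModule
{
    // Illustrative set of extensions to skip - adjust to your application's needs.
    private static readonly string[] StaticExtensions = { ".html", ".css", ".js", ".jpg", ".png" };

    public void Init(HttpApplication app)
    {
        app.PostAuthorizeRequest += OnPostAuthorizeRequest;
    }

    private void OnPostAuthorizeRequest(object sender, EventArgs e)
    {
        var context = ((HttpApplication)sender).Context;
        string extension = Path.GetExtension(context.Request.FilePath);

        // Immediately exit for static resources - minimize the work done for
        // requests this module doesn't need to process.
        if (Array.Exists(StaticExtensions, ext => ext.Equals(extension, StringComparison.OrdinalIgnoreCase)))
            return;

        // ... module logic for ASP.NET content would go here ...
    }

    public void Dispose() { }
}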
As always with module design, make sure you check for the conditions in your code that make the module applicable, and if a check fails exit immediately - minimize the code that runs when your module doesn't need to process the request.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in IIS7  ASP.NET

    Read the article
