Search Results


  • Enabling Kerberos Authentication for Reporting Services

    - by robcarrol
    Recently, I’ve helped several customers with Kerberos authentication problems with Reporting Services and Analysis Services, so I’ve decided to write this blog post and pull together some useful resources in one place (there are two whitepapers in particular that I found invaluable when configuring Kerberos authentication, and these can be found in the references section at the bottom of this post). In most of these cases, the problem has manifested itself with the Login failed for User ‘NT Authority\Anonymous’ (“double-hop”) error. By default, Reporting Services uses Windows Integrated Authentication, which includes the Kerberos and NTLM protocols for network authentication. Additionally, Windows Integrated Authentication includes the negotiate security header, which prompts the client to select Kerberos or NTLM for authentication. A client with the appropriate permissions can access reports by using Kerberos for authentication. Servers that use Kerberos authentication can impersonate those clients and use their security context to access network resources. You can configure Reporting Services to use both Kerberos and NTLM authentication; however, this may lead to a failure to authenticate. With negotiate, if Kerberos cannot be used, the authentication method will fall back to NTLM. When negotiate is enabled, the Kerberos protocol is always used except when: the clients/servers involved in the authentication process cannot use Kerberos, or the client does not provide the information necessary to use Kerberos. An in-depth discussion of Kerberos authentication is beyond the scope of this post; however, when users execute reports that are configured to use Windows Integrated Authentication, their logon credentials are passed from the report server to the server hosting the data source. Delegation needs to be set on the report server and Service Principal Names (SPNs) set for the relevant services. When a user processes a report, the request must go through a Web server on its way to a database server for processing. Kerberos authentication enables the Web server to request a service ticket from the domain controller; impersonate the client when passing the request to the database server; and then restrict the request based on the user’s permissions. Each time a server is required to pass the request to another server, the same process must be used. Kerberos authentication is supported in both native and SharePoint integrated mode, but I’ll focus on native mode for the purpose of this post (I’ll explain configuring SharePoint integrated mode and Kerberos authentication in a future post). Configuring Kerberos avoids the authentication failures caused by double-hop issues. These double-hop errors occur when a user’s Windows domain credentials can’t be passed to another server to complete the user’s request. In the case of my customers, users were executing Reporting Services reports that were configured to query Analysis Services cubes on a separate machine using Windows Integrated security. The double-hop issue occurs because NTLM credentials are valid for only one network hop; subsequent hops result in anonymous authentication. The client attempts to connect to the report server by making a request from a browser (or some other application), and the connection process begins with authentication. With NTLM authentication, client credentials are presented to Computer 2. However, Computer 2 can’t use the same credentials to access Computer 3 (so we get the Anonymous login error).
To access Computer 3, it is necessary to configure the connection string with stored credentials, which is what a number of customers I have worked with have done to work around the double-hop authentication error. However, to get the benefits of Windows Integrated security, a better solution is to enable Kerberos authentication. Again, the connection process begins with authentication. With Kerberos authentication, the client and the server must demonstrate to one another that they are genuine, at which point authentication is successful and a secure client/server session is established. In this three-computer scenario, the tiers represent the following:
Client tier (computer 1): The client computer from which an application makes a request.
Middle tier (computer 2): The Web server or farm where the client’s request is directed. Both the SharePoint and Reporting Services server(s) comprise the middle tier (but we’re only concentrating on native deployments just now).
Back-end tier (computer 3): The Database/Analysis Services server/cluster where the requested data is stored.
In order to enable Kerberos authentication for Reporting Services, it’s necessary to configure the relevant SPNs, configure trust for delegation for server accounts, configure Kerberos with full delegation and configure the authentication types for Reporting Services. Service Principal Names (SPNs) are unique identifiers for services and identify the account’s type of service. If an SPN is not configured for a service, a client account will be unable to authenticate to the servers using Kerberos. You need to be a domain administrator to add an SPN, which can be added using the SetSPN utility. For Reporting Services in native mode, the following SPNs need to be registered:
-- SQL Server Service
SETSPN -S mssqlsvc/servername:1433 Domain\SQL
For named instances, or if the default instance is running under a different port, then the specific port number should be used.
-- Reporting Services Service
SETSPN -S http/servername Domain\SSRS
SETSPN -S http/servername.domain.com Domain\SSRS
The SPN should be set for the NETBIOS name of the server and the FQDN. If you access the reports using a host header or DNS alias, then that should also be registered:
SETSPN -S http/www.reports.com Domain\SSRS
-- Analysis Services Service
SETSPN -S msolapsvc.3/servername Domain\SSAS
Next, you need to configure trust for delegation, which refers to enabling a computer to impersonate an authenticated user to services on another computer:
Client: 1. The requesting application must support the Kerberos authentication protocol. 2. The user account making the request must be configured on the domain controller. Confirm that the following option is not selected: Account is sensitive and cannot be delegated.
Servers: 1. The service accounts must be trusted for delegation on the domain controller. 2. The service accounts must have SPNs registered on the domain controller. If the service account is a domain user account, the domain administrator must register the SPNs.
In Active Directory Users and Computers, verify that the domain user accounts used to access reports have been configured for delegation (the ‘Account is sensitive and cannot be delegated’ option should not be selected). We then need to configure the Reporting Services service account and computer to use Kerberos with full delegation. We also need to do the same for the SQL Server or Analysis Services service accounts and computers (depending on what type of data source you are connecting to in your reports). Finally, and this is the part that sometimes gets overlooked, we need to configure the authentication type correctly for Reporting Services to use Kerberos authentication. This is configured in the Authentication section of the RSReportServer.config file on the report server:
<Authentication>
  <AuthenticationTypes>
    <RSWindowsNegotiate/>
  </AuthenticationTypes>
  <EnableAuthPersistence>true</EnableAuthPersistence>
</Authentication>
This will enable Kerberos authentication for Internet Explorer. For other browsers, see the link below. The report server instance must be restarted for these changes to take effect. Once these changes have been made, all that’s left to do is test to make sure Kerberos authentication is working properly by running a report from Report Manager that is configured to use Windows Integrated authentication (connecting to either an Analysis Services or SQL Server back end). Resources:
Manage Kerberos Authentication Issues in a Reporting Services Environment http://download.microsoft.com/download/B/E/1/BE1AABB3-6ED8-4C3C-AF91-448AB733B1AF/SSRSKerberos.docx
Configuring Kerberos Authentication for Microsoft SharePoint 2010 Products http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23176
How to: Configure Windows Authentication in Reporting Services http://msdn.microsoft.com/en-us/library/cc281253.aspx
RSReportServer Configuration File http://msdn.microsoft.com/en-us/library/ms157273.aspx#Authentication
Planning for Browser Support http://msdn.microsoft.com/en-us/library/ms156511.aspx
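A quick way to confirm that connections from the report server really are using Kerberos (and not silently falling back to NTLM) is to check the authentication scheme recorded for the session on the back-end SQL Server. The following is only a minimal C# sketch of that check, not part of the configuration steps above: the server name in the connection string is a placeholder, and the query assumes a SQL Server data source (for an Analysis Services data source you would need to verify on the SSAS side instead).
    using System;
    using System.Data.SqlClient;

    class KerberosAuthCheck
    {
        static void Main()
        {
            // Placeholder connection string: point it at the data source the reports use,
            // with Integrated Security so the Windows credentials are what gets tested.
            var connectionString = "Data Source=MySqlServer;Initial Catalog=master;Integrated Security=SSPI";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // sys.dm_exec_connections reports KERBEROS or NTLM for the current session.
                var command = new SqlCommand(
                    "SELECT auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID",
                    connection);
                Console.WriteLine("Authentication scheme in use: " + (string)command.ExecuteScalar());
            }
        }
    }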

    Read the article

  • SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Op

    - by pinaldave
    TechEd India was one of the largest Technology events in India led by Microsoft. This event was attended by more than 3,000 technology enthusiasts, making it one of the most well-organized events of the year. Though I have attempted to attend almost all the technology events here, I have not seen any bigger or better event in the Indian subcontinent than this. There are 21 Technical Tracks at Tech·Ed India 2010 that span more than 745 learning opportunities. I was fortunate enough to be a part of this whole event as both a speaker and a delegate. TechEd India Speaker Badge and A Token of Lifetime Hotel Selection I presented three different sessions at TechEd India and was also a part of a panel discussion. (The details of the sessions are given at the end of this blog post.) Due to extensive traveling, I stay away from my family occasionally. For this reason, I took my wife – Nupur and daughter Shaivi (8 months old) to the event along with me. We stayed at the same hotel where the event was organized so as to maximize my time bonding with my family and to have more time for networking with the technology community at the same time. The hotel Lalit Ashok is the largest and most luxurious venue one can find in Bangalore, located in the middle of the city. The hotel was a bit pricey, but looking at all the advantages, I decided to book there. Hotel Lalit Ashok Nupur Dave and Shaivi Dave Arrival Day – DAY 0 – April 11, 2010 I reached the event a day earlier, and that was a wise decision, for I was able to relax a bit and go over my presentation for the next day’s course. I am the kind of person who likes to get everything ready ahead of time. I was also able to enjoy a pleasant evening with several Microsoft employees and my family friends. I even checked out the location where I would be doing presentations the next day. I was fortunate enough to meet Bijoy Singhal from Microsoft, who helped me out with a few of the logistics issues that occurred that day. I was not aware of the fact that the very next day he was going to be “The Man” of the TechEd 2010 event. Vinod Kumar from Microsoft was really very kind as he talked to me regarding my subsequent session. He gave me some really helpful suggestions, which I was able to incorporate into my presentation. Finally, I was able to meet Abhishek Kant from Microsoft; his valuable suggestions and unlimited passion have inspired many people like me to work with the Community. Pradipta from Microsoft was also around, being extremely busy with logistics; however, in those busy times, he did find some good spare time to have a chat with me and the other Community leaders. I also met Harish Ranganathan and Sachin Rathi, both from Microsoft. It was so interesting to listen to both of them talking about SharePoint. I just have no words to express my overwhelmed spirit because of all these passionate young guys - Pradipta, Vinod, Bijoy, Harish, Sachin and Abhishek (of course!). Map of TechEd India 2010 Event Day 1 – April 12, 2010 From morning until night time, today was truly a very busy day for me. I had two presentations and one panel discussion for the day. Needless to say, I had a few meetings to attend as well. The day started with a keynote from S. Somasegar where he announced the launch of Visual Studio 2010. The keynote area was really eye-catching because of the very large, bigger-than-life uniform screen. It was truly something to see.
The title music of the keynote was very interesting and it featured Bijoy Singhal as the model. It was interesting to talk to him afterwards, when we laughed together at jokes about his modeling assignment. TechEd India Keynote Opening Featuring Bijoy TechEd India 2010 Keynote – S. Somasegar Time: 11:15am – 11:45am Session 1: True Lies of SQL Server – SQL Myth Buster Following the excellent keynote, I had my very first session on the subject of SQL Server Myth Buster. At first, I was a bit nervous, as this was my very first session, coming right after the keynote, and during my presentation I saw lots of Microsoft Product Team members. Well, it really went well and I had a really good discussion with the attendees of the session. I felt that well begun was half done, and my confidence was regained. Right after the session, I met a few of my Community friends and had meaningful discussions with them on many subjects. The abstract of the session is as follows: In this 30-minute demo session, I am going to briefly demonstrate a few SQL Server myths and their resolutions, backing them up with demos. This demo presentation is a must-attend for all developers and administrators who come to the event. This is going to be a very quick yet fun session. Pinal Presenting session at TechEd India 2010 Time: 1:00 PM – 2:00 PM Lunch with Somasegar After the session I went to see my daughter, and then I headed right away to the lunch with S. Somasegar – the keynote speaker and senior vice president of the Developer Division at Microsoft. I really thank Abhishek, who made it possible for us. Because of his efforts, all the MVPs had the opportunity to meet such a legendary person and to talk with him about Microsoft technology. Though Somasegar holds such a high position in Microsoft, he is very polite and a real gentleman; how I wish that everybody in the industry were like him. Believe me, if you spread love and kindness, then that is what you will receive back. As soon as lunch time was over, I ran to the session hall as my second presentation was about to start. Time: 2:30pm – 3:30pm Session 2: Master Data Services in Microsoft SQL Server 2008 R2 Business Intelligence is a subject which was widely talked about at TechEd. Everybody was interested in this subject, and I was no exception. I consider myself fortunate that I was presenting on the subject of Master Data Services at TechEd. When I had initially learned this subject, I had a bit of confusion about the usage of this tool. Later on, I decided that I would talk about how we developers and DBAs are not able to understand something as simple as this and, even worse, create confusion about the technology. During system design, it is very important to have reference material or master lookup tables. Well, I talked about that same subject and presented the session keeping it as the center of my talk. The session went very well and I received lots of interesting questions. I got many compliments for presenting this subject around a real-life scenario. I really thank Rushabh Mehta (CEO, Solid Quality Mentors India) for his supportive suggestions that helped me prepare the slide deck, as well as the subject. Pinal Presenting session at TechEd India 2010 The abstract of the session is as follows: SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal.
This session provides an in-depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables a consistent decision-making process by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, the MDS master data hub, which is a vital component, helps ensure the consistency of reporting across systems and delivers faster and more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Pinal Presenting session at TechEd India 2010 The day was still not over for me. I ran into several friends, but we were not able to keep our enthusiasm under control about all the rumors saying that SQL Server 2008 R2 was about to be launched in the next day's keynote. I then ran to my third and final technical event for the day – a panel discussion with some of the top technologists of India. Time: 5:00pm – 6:00pm Panel Discussion: Harness the power of Web – SEO and Technical Blogging As I had delivered two technical sessions by this time, I was a bit tired, but no less enthusiastic when it came to talking about blogging and technology. We discussed many different topics there. I told them that the most important aspect of any blog is its content. We discussed in depth the issues with plagiarism and how to avoid it. Another topic of discussion was how we technology bloggers can create awareness in the Community about what the right kind of blogging is and what acts are morally and technically wrong. A couple of questions were raised about what type of liberty a person can have in terms of writing blogs. Well, it was generally agreed that a blog is mainly a representation of our ideas and thoughts; it should not be governed by external entities. As long as one is writing what they really want to say, and not providing incorrect information or practicing plagiarism, a blogger should be allowed to express himself. This panel discussion was supposed to be over in an hour, but the interest of the participants was remarkable and so it was extended for 30 minutes more. Finally, we decided to bring the discussion to a close and agreed that we would continue the topic next year. TechEd India Panel Discussion on Web, Technology and SEO Surprisingly, the day was just beginning after all of this. By this time, I had met almost all the MVPs who had arrived at the event, as well as many Microsoft employees. There were lots of Community folks present, too. I decided that I would go to meet several friends from the Community who communicate with me on SQLAuthority.com. I also met Abhishek Baxi and had a good talk with him regarding Win Mobile and Twitter. He also took a very quick video of me wherein I spoke in my mother tongue, Gujarati. It was funny that I talked in Gujarati almost all day, but when I was talking in the interview I could not find the right Gujarati words to speak. I think we all think in English when we think about technology, so as to address universality. After meeting them, I headed towards the Speakers’ Dinner. Time: 8:00 PM – onwards Speakers Dinner The Speakers’ dinner was indeed a wonderful opportunity for all the speakers to get together and relax. We talked about so many different things, from Xbox to Hindi movies, and from SQL to samosas. I just could not express how much fun I had. After a long evening, when I returned to my room and met Shaivi, I just felt instantly relaxed.
Kids are really gifts from God. Today was a really long but exciting day. So many things happened in just one day: the Visual Studio launch, lunch with Somasegar, 2 technical sessions, 1 panel discussion, a community leaders meeting, the speakers dinner and, last but not least, playing with my child! A perfect day! Day 2 – April 13, 2010 Today started with a bang with the excellent keynote by Kamal Hathi, who launched SQL Server 2008 R2 in India and demonstrated the power of PowerPivot to all of us. 101 million rows in Excel brought lots of applause from the audience. Kamal Hathi Presenting Keynote at TechEd India 2010 The day was a bit of an easier one for me. I had no sessions today and no events planned. I had a few meetings planned for the second day of the event. I sat in the speakers’ lounge for half a day and met many people there. I attended nearly 9 different meetings today. The subjects of the meetings were very different. Here is a list of the topics of the Community-related meetings:
SQL PASS and its involvement in India and the subcontinent
How to start community blogging
Forums and developing an aptitude towards technology
Ahmedabad/Gandhinagar User Groups and their development
SharePoint and SQL
Business Meeting – a client meeting
Business Meeting – a potential performance tuning project
Business Meeting – Solid Quality Mentors (SolidQ)
And family friends
Pinal Dave at TechEd India The day passed by so quickly with all these meetings. In the evening, I headed to the Partners Expo with friends and checked out a few of the booths. I really wanted to talk about some of the products, but due to the freebies there was such a crowd that I finally decided to just take the contact details of the partners. I will now start sending them my queries and, hopefully, I will have my questions answered. Nupur and Shaivi also had one meeting to attend; it was with our family friend Vijay Raj. Vijay is also a person who loves technology, and loves it more than anybody. I see him growing and learning every day, but still remaining ‘human’. I believe that if someone else acquired as much knowledge as him, that person would become either a computer or a cyborg. Yet Vijay is still a kind gentleman and remains our close family friend. Shaivi was really happy to play with Uncle Vijay. Pinal Dave and Vijay Raj Renuka Prasad, a Microsoft MVP, impressed me with his passion and knowledge of SQL. He always gives me credit for his success, and I believe that he is very humble. He has way more certifications than me and has worked many more years with SQL than I have. He is an excellent photographer as well. Most of the photos in this blog post were taken by him. I told him that if he ever wants a part-time job, he could do photography very well. Pinal Dave and Renuka Prasad I also met L Srividya from Microsoft, whom I was looking forward to meeting. She is a bundle of knowledge, and everyone would surely learn a lot from her. I was able to get a few minutes with her and, well, I felt confident. She enlightened me with SQL Server BI concepts, domain management and SQL Server security and a few other interesting details. I also had a wonderful time talking about SharePoint with fellow Solid Quality Mentor Joy Rathnayake. He is very passionate about SharePoint, but when you talk .NET and SQL with him, he is still overwhelmingly knowledgeable. In fact, while talking to him, I figured out that the recent training he delivered was on SQL Server 2008 R2.
I told him jokingly that it hurts my ego that he is now more popular in SQL training and consulting than I am. I am sure all of you agree that working with good people is a gift from God. I am fortunate enough to work with the best of the best industry experts. It was a great pleasure to hang out with my Community friends – Ahswin Kini, HimaBindu Vejella, Vasudev G, Suprotim Agrawal, Dhananjay, Vikram Pendse, Mahesh Dhola, Mahesh Mitkari, Manu Zacharia, Shobhan, Hardik Shah, Ashish Mohta, Manan, Subodh Sohani and Sanjay Shetty (of course!). (Please let me know if I met you at the event and forgot to list your name here.) Time: 8:00 PM – onwards Community Leaders Dinner After lots of meetings, I headed towards the Community Leaders dinner meeting and met almost all the folks I had met in the morning. The discussion was almost the same, but the really good thing was that we were enjoying it. The food was really good. Nupur was invited to the event, but Shaivi could not come. When Nupur tried to enter the event, she was stopped, as Shaivi did not have a pass to enter the dinner. Nupur explained that Shaivi is only 8 months old, does not eat outside food and could not stay by herself at this age, but the doorkeeper did not agree and said that without the entry details Shaivi could not go in, though Nupur could. Nupur called me on the phone and asked me to help her out. By the time I was outside, the organizer of the event had reached the door and happily approved Shaivi to join the party. Once at the party, Shaivi had lots of fun meeting so many people. Shaivi Dave and Abhishek Kant Dean Guida (Infragistics President and CEO) and Pinal Dave (SQLAuthority.com) Day 3 – April 14, 2010 Though it was the last day, I was very excited, as I was about to present my very favorite session. Query Optimization and Performance Tuning is my domain expertise, and I make my living by consulting and training on the same. Today’s session was on the same subject and, as an additional twist, another subject, Spatial Database, was added. I was always intrigued by Spatial Database and I have enjoyed learning about it; however, I had never thought about Spatial Indexing before it was decided that I would do this session. I really thank Solid Quality Mentor Dr. Greg Low for his assistance in helping me prepare the slide deck and also review the content. Furthermore, today was really what I call my ‘learning day’. So far I had not attended any session at TechEd and I felt a bit down for that. Everybody spends their valuable time & money to learn something new and exciting at TechEd, and I had not attended a single session, even though it was already the last day of the event. I did have a plan for the day, though, and I attended two technical sessions before my own session on spatial databases. I attended 2 sessions of Vinod Kumar. Vinod is a natural storyteller and there was no doubt that his sessions would be jam-packed. People attend his sessions simply because Vinod is the speaker. He has never once disappointed an audience; he is truly a good speaker. He knows his stuff very well. I personally do not think that anyone in India can be compared to him for SQL. Time: 12:30pm-1:30pm SQL Server Query Optimization, Execution and Debugging Query Performance I really had a fun time attending this session. Vinod made this session very interactive. The entire audience really got into the presentation and started participating in the event.
Vinod presented a small problem with query tuning, one which any developer would have encountered, and solved it with the audience's help in such a fashion that each developer felt he or she had already resolved it. For one question, I was the only one who was ready to answer, and Vinod told me in a light tone that I was not allowed to answer it! The audience really found it very amusing. There was a huge crowd around Vinod after the session. Vinod – A master storyteller! Time: 3:45pm-4:45pm Data Recovery / consistency with CheckDB This session was much heavier than the earlier one, and I must say it is the most favorite session I have EVER attended in India. In this TechEd I only attended two sessions, but in my career, I have attended numerous technical sessions not only in India, but all over the world. This session took my breath away. One by one, Vinod took different databases and started to corrupt them in different ways. Each database had some unique way of getting corrupted. Once that was done, Vinod started to show DBCC CHECKDB and demonstrated how it can solve your problem. He finally fixed all the databases with this single tool. I do have a good knowledge of this subject, but let me honestly admit that I learned a lot from this session. I enjoyed and cheered during this session along with the other attendees. I had total satisfaction that, just like everyone, I took advantage of the event and learned something. I am now TECHnically EDucated. Pinal Dave and Vinod Kumar After two very interactive and informative SQL sessions from Vinod Kumar, the next turn was mine, presenting on Spatial Database and Indexing. I once again got nervous, but Vinod told me to stay natural and do my presentation. Well, once I got a huge stage with a total of four projectors and a large crowd, I felt better. Time: 5:00pm-6:00pm Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing Pinal Presenting session at TechEd India 2010 Pinal Presenting session at TechEd India 2010 I kicked off this session with Michael J Swart‘s beautiful spatial image. This session was the last one of the day but, to my surprise, I had more than 200 attendees. Slowly, the rain was starting outside and I was worried that the hall would not be full; despite this, there was not a single seat available within the first five minutes of the session. Thanks to all of you for attending my presentation. I demonstrated the map of the world (and India) and quickly explained what the Geography and Geometry data types in a spatial database are. This session included an interesting story of indexing and comparison, as well as how different traditional indexes are from spatial indexes. Pinal Presenting session at TechEd India 2010 Due to the heavy rain during this event, the power went off for about 22 minutes (just an accident – nobody’s fault). During these minutes, there was no audio, no video and no light. I continued to address the mass of 200+ people without any audio device or PowerPoint. I must thank the audience, because not a single person left the session. They all stayed in their places; some moved closer to listen to me properly. I noticed that the curiosity and eagerness to learn new things was at its peak even though it was the very last session of TechEd. Everybody wanted to get the maximum knowledge out of this whole event. I was touched by the support from the audience. They listened and participated in my session even without any kind of technology (no ppt, no mike, no AC, nothing).
During these 22 minutes, I completed my theory verbally. Pinal Presenting session at TechEd India 2010 After a while, we got the projector back online and we continued with some exciting demos. Many thanks to the Microsoft people who worked energetically in the background to get the backup power for the projector up. I had a very interesting demo wherein I overlaid Bangalore and Hyderabad on the India map and found the aerial distance between them. After finding the aerial distance, we browsed online and found that SQL Server estimates the aerial distance between these two cities very closely, as compared to the factual distance. There was huge applause from the crowd at the fact that SQL Server takes into account the curvature of the earth and finds precise distances based on those details. During the process of finding the distance, I demonstrated a few examples of the indexes, where I showed how one can use those indexes to find these distances and how they can improve the performance of similar queries. I also demonstrated a few examples wherein we were able to see for which data type the index is most useful. We finished the demos with a bit more internal stuff. Pinal Presenting session at TechEd India 2010 Despite all the issues, I was mostly satisfied with my presentation. I think it was the best session I have ever presented at any conference. There was no help from technology for a while, but I still got lots of appreciation at the end. When we ended the session, the applause from the audience was so loud that, for a moment, the rain was not audible. I was truly moved by the dedication of the technology enthusiasts. Pinal Dave After Presenting session at TechEd India 2010 The abstract of the session is as follows: Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of new spatial indexes for high performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications and much more. Time: 8:00 PM – onwards Dinner by Sponsors After the lively sessions during the day, there was another dinner party, courtesy of one of the sponsors of TechEd. All the MVPs and several Community leaders were present at the dinner. I would like to express my gratitude to Abhishek Kant for organizing this wonderful event for us. It was a blast and really relaxing in all respects. We all stayed there for a long time and talked about our sweet and unforgettable memories of the event. Pinal Dave and Bijoy Singhal It was really one wonderful event. After writing this much, I still have no words to express how much I enjoyed TechEd. However, it is true that I have shared with you only 1% of the total activities I did at the event. There were so many people I met who are not mentioned here, although I wanted to write their names here, too.
Anyway, I have learned so many things and up until now, I am not able to get over all the fun I had in this event. Pinal Dave at TechEd India 2010 The Next Days – April 15, 2010 – till today I am still not able to get my mind out of the whole experience I had at TechEd India 2010. It was like a whole Microsoft Family working together to celebrate a happy occasion. TechEd India – Truly An Unforgettable Experience! Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • WinInet Apps failing when Internet Explorer is set to Offline Mode

    - by Rick Strahl
    Ran into a nasty issue last week when all of a sudden many of my old applications that are using WinInet for HTTP access started failing. Specifically, the WinInet HttpSendRequest() call started failing with an error of 2, which when retrieving the error boils down to: WinInet Error 2: The system cannot find the file specified Now this error can pop up in many legitimate scenarios with WinInet such as when no Internet connection is available or the HTTP configuration (usually configured in Internet Explorer’s options) is misconfigured. The error typically means that the server in question cannot be found or more specifically an Internet connection can’t be established. In this case the problem started suddenly and was causing some of my own applications (old Visual FoxPro apps using my own wwHttp library) and all Adobe Air applications (which apparently uses WinInet for its basic HTTP stack) along with a few more oddball applications to fail instantly when trying to connect via HTTP. Most other applications – all of my installed browsers, email clients, various social network updaters all worked just fine. It seems it was only WinInet apps that were failing. Yet oddly Internet Explorer appeared to be working. So the problem seemed to be isolated to those ‘classic’ applications using WinInet. WinInet’s base configuration uses the Internet Explorer options dialog. To check this out I typically go to the Internet Explorer options and find the Connection tab, and check out the LAN Setup. Make sure there are no rogue proxy settings or configuration scripts that are invalid. Trying with Auto-configuration on and off also can often fix ‘real’ configuration errors. This time however this wasn’t a problem – nothing in the LAN configuration was set (all default). I also played with the Automatic detection of settings which also had no effect. I also tried to use Fiddler to see if that would tell me something. Fiddler has a few additional WinInet configuration options in its configuration. Running Fiddler and hitting an HTTP request using WinInet would never actually hit Fiddler – the failure would occur before WinInet ever fired up the HTTP connection to go through the Fiddler HTTP proxy. And the Culprit is: Internet Explorer’s Work Offline Option The culprit in this situation was Internet Explorer which at some point, unknown to me switched into Offline Mode and was then shut down: When this Offline mode is checked when IE is running *or* if IE gets shut down with this flag set, all applications using WinInet by default assume that it’s running in offline mode. Depending on your caching HTTP headers and whether the page was cached previously you may or may not get a response or an error. For an independent non-browser application this will be highly unpredictable and likely result in failures getting online – especially if the application forces requests to always reload by disabling HTTP caching (as I do on most of my dynamic HTTP clients). What makes this especially tricky is that even when IE is in offline mode in the browser, you can still browse around the Web *if* you have a connection. IE will try to load anything it has cached from the local cache, but as soon as you hit a URL that isn’t cached it will automatically try to access that URL and uncheck the Work Offline option. Conversely if you get knocked off the Internet and browse in IE 9, IE will automatically go into offline mode. 
I never explicitly set offline mode – it just automatically sets itself on and off depending on the connection. The problem is, if you’re not using IE all the time (as I do – rarely and just for testing, so usually a few commonly used URLs) and you left it in offline mode when you exited, offline mode stays set, which results in the above head scratcher. Ack. This isn’t new behavior in IE 9 BTW – this behavior has always been there, but I think what’s different is that IE now automatically switches between online and offline modes without notifying you at all, so it’s hard to tell when you are offline. Fixing the Issue in your Code If you have an application that is using WinInet, there’s a WinInet option called INTERNET_OPTION_IGNORE_OFFLINE. I just checked this out in my own applications and Internet Explorer 9 and it works, but apparently it’s been broken for some older releases (I can’t confirm how far back though) – lots of posts seem to suggest the flag doesn’t work. However, in IE 9 at least it does seem to work if you call InternetSetOption before you call HttpOpenRequest with the Http Session handle. In FoxPro code I use:
    DECLARE INTEGER InternetSetOption ;
       IN WININET.DLL ;
       INTEGER HINTERNET,;
       INTEGER dwFlags,;
       INTEGER @dwValue,;
       INTEGER cbSize

    lnOptionValue = 1   && BOOL TRUE pass by reference

    *** Set needed SSL flags
    lnResult=InternetSetOption(this.hHttpSession,;
       INTERNET_OPTION_IGNORE_OFFLINE ,;  && 77
       @lnOptionValue ,4)

    DECLARE INTEGER HttpOpenRequest ;
       IN WININET.DLL ;
       INTEGER hHTTPHandle,;
       STRING lpzReqMethod,;
       STRING lpzPage,;
       STRING lpzVersion,;
       STRING lpzReferer,;
       STRING lpzAcceptTypes,;
       INTEGER dwFlags,;
       INTEGER dwContextw

    hHTTPResult=HttpOpenRequest(THIS.hHttpsession,;
       lcVerb,;
       tcPage,;
       NULL,NULL,NULL,;
       INTERNET_FLAG_RELOAD + ;
       IIF(THIS.lsecurelink,INTERNET_FLAG_SECURE,0) + ;
       this.nHTTPServiceFlags,0)
… And this fixes the issue at least for IE 9… In my FoxPro wwHttp class I now call this by default to never get bitten by this again… This solves the problem permanently for my HTTP client. I never want to see offline operation in an HTTP client API – it’s just too unpredictable in handling errors, and the last thing you want is getting unpredictably stale data. Problem solved, but this behavior is – well, ugly. But then that’s to be expected from an API that’s based on Internet Explorer, eh? © Rick Strahl, West Wind Technologies, 2005-2011. Posted in HTTP, Windows
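For readers calling WinInet from .NET rather than FoxPro, here is a rough C# equivalent of the same workaround. It is only a sketch based on the FoxPro snippet above: the P/Invoke declaration mirrors the documented InternetSetOption signature, the constant 77 is the INTERNET_OPTION_IGNORE_OFFLINE value mentioned in the post, and hHttpSession stands in for whatever session handle your code obtained from InternetOpen.
    using System;
    using System.Runtime.InteropServices;

    static class WinInetOfflineFix
    {
        // INTERNET_OPTION_IGNORE_OFFLINE, value 77 as used in the FoxPro sample above.
        const int INTERNET_OPTION_IGNORE_OFFLINE = 77;

        [DllImport("wininet.dll", SetLastError = true)]
        static extern bool InternetSetOption(IntPtr hInternet, int dwOption, ref int lpBuffer, int dwBufferLength);

        // Call this with the session handle returned by InternetOpen, before HttpOpenRequest.
        public static bool IgnoreOfflineMode(IntPtr hHttpSession)
        {
            int optionValue = 1; // BOOL TRUE
            return InternetSetOption(hHttpSession, INTERNET_OPTION_IGNORE_OFFLINE, ref optionValue, sizeof(int));
        }
    }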

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller. To send data through SPI, we’ll just need to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13. The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before:
    public class ShiftRegister74HC595 {
        protected SPI Spi;

        public ShiftRegister74HC595(Cpu.Pin latchPin)
            : this(latchPin, SPI.SPI_module.SPI1) { }

        public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) {
            var spiConfig = new SPI.Configuration(
                SPI_mod: spiModule,
                ChipSelect_Port: latchPin,
                ChipSelect_ActiveState: false,
                ChipSelect_SetupTime: 0,
                ChipSelect_HoldTime: 0,
                Clock_IdleState: false,
                Clock_Edge: true,
                Clock_RateKHz: 1000
            );
            Spi = new SPI(spiConfig);
        }

        public void Write(byte buffer) {
            Spi.Write(new[] {buffer});
        }
    }
All we have to do here is configure SPI. The write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher. The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image:
    foreach (var bitmap in matrix.MatrixBitmap) {
        matrix.OnRow(row, bitmap, true);
        matrix.OnRow(row, bitmap, false);
        row++;
    }
Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on.
    public LedMatrix Initialize() {
        DisplayThread = new Thread(() => DoDisplay(this));
        DisplayThread.Start();
        return this;
    }
I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method.
For the method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a Lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters. Of note as well is the return value of Initialize, a common technique to add some fluency to the API, enabling the matrix to be instantiated and initialized in a single line:
    using (var matrix = new LedMS88SR74HC595().Initialize())
The “using” in the previous line is because we have implemented IDisposable so that the matrix kills the thread and clears the display when the user code is done with it:
    public void Dispose() {
        Clear();
        DisplayThread.Abort();
    }
Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model:
    matrix.Set(someimage);
    while (button.Read()) {
        Thread.Sleep(10);
    }
Here, the call into Set returns immediately and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh? Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now, sure, you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display. All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button:
    var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled);
    using (var matrix = new LedMS88SR74HC595().Initialize()) {
        // Oh, prototype is so sad!
        var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 };
        DisplayAndWait(sad, matrix, button);

        // Let's make it smile!
        var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 };
        DisplayAndWait(smile, matrix, button);
    }
And here is a video of the prototype running: The prototype in action. I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line. Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my readers guess where we’re going with all that?
The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip
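For reference, here is a minimal sketch of what the DoDisplay refresh loop spun up by Initialize might look like. This is only an illustration inferred from the row-scanning foreach fragment above, not the downloadable code: LedMatrix, MatrixBitmap and OnRow are the names used in this post, while the endless loop and the per-frame row reset are my assumptions.
    // Sketch only: a guess at the background refresh loop the Initialize() closure hands
    // to the thread; Dispose() aborts the thread, which is what ends this loop.
    private static void DoDisplay(LedMatrix matrix)
    {
        while (true)
        {
            var row = 0;
            // Scan the rows quickly so persistence of vision blends them into one stable image.
            foreach (var bitmap in matrix.MatrixBitmap)
            {
                matrix.OnRow(row, bitmap, true);   // light the current row
                matrix.OnRow(row, bitmap, false);  // switch it off before moving to the next
                row++;
            }
        }
    }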

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release of Visual Studio 2010 comes with a large number of improvements on many fronts. In this post I will try and point out some of the major improvements in Visual Studio 2010.
1) Visual Studio IDE improvements - The Visual Studio IDE has been rewritten in WPF. The look and feel of the studio has been improved for better readability. The Start Page has been redesigned and templated so that anyone can change it as they wish.
2) Multiple monitors - Support for multiple monitors was already there in Visual Studio, but in this edition it has been improved so much that we can now place the document, design and code windows outside the IDE on another monitor.
3) Zoom in the code editor – Building the editors in WPF has brought significant improvements. The one I like best is the zoom feature: we can now zoom in the code editor with Ctrl + mouse scroll. The zoom feature does not work on the design surface or on icon-based windows like Solution Explorer and the Toolbox.
4) Box selection - Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular block by holding down the Alt key and selecting with the mouse. Within the rectangular selection we can insert text, paste the same code on different lines, etc. This is helpful if you want to convert a number of variables from public to private, etc.
5) New improved search – One of the best productivity improvements in Visual Studio 2010 is its new search-as-you-type support. This has been done in the Navigate To window, which can be brought up by pressing Ctrl + ,. The Navigate To window also understands camel casing and will search by it when characters are entered in upper case. For example, we can search AOH for AddOrderHeader.
6) Call Hierarchy – This feature is only available in the Visual C# and Visual C++ editors. The Call Hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property or constructor. The call hierarchy also shows the implementations of an interface and the overrides of virtual or abstract methods. This window is very helpful in understanding the code flow and evaluating the effect of making changes. The best part is that it is available at design time, not only at runtime like a debugger.
7) Highlighting references – One of the very cool features in Visual Studio 2010 is that if you select a variable, all uses of that variable are highlighted alongside. This works for all the symbol results returned by Find All References. It also works for names of classes, object variables, properties and methods. We can use Ctrl + Shift + Down Arrow or Up Arrow to move through them.
8) Generate from usage - The Generate from usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code. This minimizes interruption to your workflow (see the short example after this list).
9) IntelliSense suggestion mode - IntelliSense now provides two alternatives for statement completion, completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined.
In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code. When you commit an entry in completion mode, the editor inserts the entry that is highlighted on the members list. When an IntelliSense window is open, you can press Ctrl+Alt+Spacebar to toggle between completion mode and suggestion mode.
10) Application Lifecycle Management – A client application for managing the application lifecycle (version control, work item tracking, build automation, team portal, etc.) is available for free (this is not available for the Express edition).
11) Start Page – The Start Page has been redesigned with WPF for new functionality and a new look. Tabbed areas are provided for content from different sources, including MSDN. Once you open a project, the Start Page closes automatically. The list of recent projects also lets you remove projects from the list. And above all, the Start Page is customizable enough to be changed per individual requirements.
12) Extension Manager – Visual Studio 2010 provides good ways to be extended. We can also use MEF to extend most of the features of Visual Studio. The new Extension Manager can now go to the Visual Studio Gallery and install extensions without even opening a browser.
13) Code snippets – Visual Studio 2010 now provides code snippets for HTML, JScript and ASP.NET as well.
14) Improved IntelliSense for JavaScript – IntelliSense for JavaScript has been improved vastly (around 2-5 times). IntelliSense now also shows XML documentation comments on the go.
15) Web Deployment – Web deployment has been vastly improved. We can package and publish a web application in one click. The three major supported deployment scenarios are web packages, one-click deployment and web configuration transformation.
16) SharePoint - Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint projects from within Visual Studio. Deployment of a site is as easy as hitting F5.
17) Azure – Visual Studio 2010 also comes with handy improvements for developing in the Windows Azure environment. Vikram
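As a small illustration of the Generate from usage workflow described in point 8, the C# snippet below uses a class and a method before they exist; the OrderProcessor and Process names are made up for this example, and the point is simply that the IDE can generate the missing stubs from the calling code.
    // Calling code written first; OrderProcessor does not exist yet.
    // In Visual Studio 2010, Generate from usage offers to create the stubs shown below.
    class Program
    {
        static void Main()
        {
            var processor = new OrderProcessor();   // generate class 'OrderProcessor'
            processor.Process("ORD-0001");          // generate method stub 'Process(string)'
        }
    }

    // Stubs the IDE would generate (included here so the sample compiles):
    class OrderProcessor
    {
        public void Process(string orderNumber)
        {
            // implementation to be filled in
        }
    }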

    Read the article

  • Upgrading Windows 8 boot to VHD to Windows 8.1&ndash;Step by step guide

    - by Liam Westley
    Originally posted on: http://geekswithblogs.net/twickers/archive/2013/10/19/upgrading-windows-8-boot-to-vhd-to-windows-8.1ndashstep-by.aspx
Boot to VHD – dual booting Windows 7 and Windows 8 became easy
When Windows 8 arrived, quite a few people decided that they would still dual boot their machines, and instead of mucking about with resizing disk partitions to free up space for Windows 8 they decided to use the boot from VHD feature to create a huge hard disc image into which Windows 8 could be installed.  Scott Hanselman wrote this installation guide, while I myself used the installation guide from Ed Bott of ZD net fame. Boot to VHD is a great solution: it achieves a dual boot, can be backed up easily and has virtually no effect on the original Windows 7 partition. As a developer who has dual booted Windows operating systems for years, hacking boot.ini files, I found boot to VHD a much easier solution.
Upgrade to Windows 8.1 – ah, you can’t do that on a virtual disk installation (boot to VHD)
Last week the final version of Windows 8.1 arrived, and I went into the Windows Store to upgrade.  Luckily I’m on a fast download service, and use an SSD, because once the upgrade was downloaded and prepared Windows informed me that This PC can’t run Windows 8.1, and provided the reason, You can’t install Windows on a virtual drive.  You can see an image of the message and the discussion that sparked my search for a solution in this Microsoft Technet forum post. I was determined not to have to resize partitions yet again and fiddle with VHD to disk utilities and back again, and in the end I did succeed in upgrading to a Windows 8.1 boot to VHD partition.  It takes quite a bit of effort though …
tldr; Simple steps of how you upgrade
Boot into Windows 7 – make a copy of your Windows 8 VHD, to become Windows 8.1
Enable Hyper-V in your Windows 8 (the original boot to VHD partition)
Create a new virtual machine, attaching the copy of your Windows 8 VHD
Start the virtual machine, upgrade it via the Windows Store to Windows 8.1
Shut down the virtual machine
Boot into Windows 7 – use the bcdedit tool to create a new Windows 8.1 boot to VHD option (pointing at the copy)
Boot into the new Windows 8.1 option
Reactivate Windows 8.1 (it will have become deactivated by running under Hyper-V)
Remove the original Windows 8 VHD, and in Windows 7 use bcdedit to remove it from the boot menu
Things you’ll need
A system that can run Hyper-V under Windows 8 (Intel i5, i7 class CPU)
Enough space to have your original Windows 8 boot to VHD and a copy at the same time
An ISO or DVD for Windows 8 to create a bootable Windows 8 partition
Step by step guide
Boot to your base o/s, the real one, Windows 7.
Make a copy of the Windows 8 VHD file that you use to boot Windows 8 (via boot from VHD) – I copied it from a folder on C: called VHD-Win8 to VHD-Win8.1 on my N: drive.
Reboot your system into Windows 8, and enable Hyper-V if not already present (this may require a reboot)
Use the Hyper-V manager, create a new Hyper-V machine, using half your system memory, and use the option to attach an existing VHD on the main IDE controller – this will be the new copy you made in Step 2.
Start the virtual machine, use Connect to view it, and you’ll probably discover it cannot boot as there is no boot record
If this is the case, go to Hyper-V manager, edit the Settings for the virtual machine to attach an ISO of a Windows 8 DVD to the second IDE controller.
Start the virtual machine, use Connect to view it, and it should now attempt a fresh installation of Windows 8.  You should select Advanced Options and choose Repair - this will make the VHD bootable
When the setup reboots your virtual machine, turn off the virtual machine, and remove the ISO of the Windows 8 DVD from the virtual machine settings.
Start the virtual machine, use Connect to view it.  You will see the devices being re-discovered (including your quad CPU becoming a single CPU).  Eventually you should see the Windows login screen.
You may notice that your desktop background (Win+D) will have turned black, as your Windows installation has become deactivated due to the hardware changes between your real PC and Hyper-V. Fortunately, becoming deactivated does not stop you using the Windows Store, where you can select the update to Windows 8.1.
You can now watch the progress joy of the Windows 8 update; downloading, preparing to update, checking compatibility, gathering info, preparing to restart, and finally, confirm restart - remember that you are restarting your virtual machine sitting on the copy of the VHD, not the Windows 8 boot to VHD you are currently using to run Hyper-V (confused yet?)
After the reboot you get the real upgrade messages; setting up x%, xx%, (quite slow)
After a while, Getting ready
Applying PC Settings x%, xx% (really slow)
Updating your system (fast)
Setting up a few more things x%, (quite slow)
Getting ready, again
Accept license terms
Express settings
Confirmed previous password
Next, I had to set up a Microsoft account – which is possibly now required, and not optional
Using the Microsoft account required a two-factor authorization, via text message, a 7 digit code for me
Finalising settings
Blank screen, HI .. We're setting up things for you (similar to the original Windows 8 install)
'You can get new apps from the Store', below which is ’Installing your apps’ - I had Windows Media Center, which counts as an app from the Store
‘Taking care of a few things’, below which is ‘Installing your apps’
‘Taking care of a few things’, below ‘Don't turn off your PC’
‘Getting your apps ready’, below ‘Don't turn off your PC’
‘Almost ready’, below ‘Don't turn off your PC’
… finally, we get the Windows 8.1 start menu, and a quick Win+D to check the desktop confirmed all the application icons I expected, pinned items on the taskbar, and one app moaning about a missing drive
At this point the upgrade is complete – you can shut down the virtual machine
Reboot from the original Windows 8 and return to Windows 7 to configure booting to the Windows 8.1 copy of the VHD
In an administrator command prompt do the following using the bcdedit tool (from an MSDN blog about configuring VHD to boot in Windows 7):
Type bcdedit to list the current boot options, so you can copy the GUID (complete with brackets/braces) for the original Windows 8 boot to VHD
Create a new menu option, a copy of the Windows 8 option; bcdedit /copy {originalguid} /d "Windows 8.1"
Point the new Windows 8.1 option to the copy of the VHD; bcdedit /set {newguid} device vhd=[D:]\Image.vhd
Point the new Windows 8.1 option to the copy of the VHD; bcdedit /set {newguid} osdevice vhd=[D:]\Image.vhd
Set autodetection of the HAL (may already be set); bcdedit /set {newguid} detecthal on
Reboot from Windows 7 and select the new option 'Windows 8.1' on the boot menu, and you’ll have some messages to look at as your hardware is redetected (as you are back from 1 CPU to 4 CPUs)
‘Getting devices ready’, blank, then xx%, with occasional blank screen,
for the graphics driver, (fast-ish) Getting Ready message (fast) You will have to suffer one final reboots, choose 'Windows 8.1' and you can now login to a lovely Windows 8.1 start screen running on non virtualized hardware via boot to VHD After checking everything is running fine, you can now choose to Activate Windows, which for me was a toll free phone call to the automated system where you type in lots of numbers to be given a whole bunch of new activation codes. Once you’re happy with your new Windows 8.1 boot to VHD, and no longer need the Windows 8 boot to VHD, feel free to delete the old one.  I do believe once you upgrade, you are no longer licensed to use it anyway. There, that was simple wasn’t it? Looking at the huge list of steps it took to perform this upgrade, you may wonder whether I think this is worth it.  Well, I think it is worth booting to VHD.  It makes backups a snap (go to Windows 7, copy the VHD, you backed up the o/s) and helps with disk management – want to move the o/s, you can move the VHD and repoint the boot menu to the new location. The downside is that Microsoft has complete neglected to support boot to VHD as an upgradable option.  Quite a poor decision in my opinion, and if you read twitter and the forums quite a few people agree with that view.  It’s a shame this got missed in the work on creating the upgrade packages for Windows 8.1.
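For convenience, here is a recap of the bcdedit sequence described above, as I understand it. Run these in an administrator command prompt, and treat {originalguid}, {newguid} and [D:]\Image.vhd as placeholders for your own GUIDs and VHD path.

    rem 1. List the current boot entries and note the GUID of the existing Windows 8 boot-to-VHD entry
    bcdedit

    rem 2. Create a "Windows 8.1" menu option as a copy of that entry (the command prints the new GUID)
    bcdedit /copy {originalguid} /d "Windows 8.1"

    rem 3. Point the new entry at the copy of the VHD
    bcdedit /set {newguid} device vhd=[D:]\Image.vhd
    bcdedit /set {newguid} osdevice vhd=[D:]\Image.vhd

    rem 4. Turn on HAL autodetection (may already be set)
    bcdedit /set {newguid} detecthal on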

    Read the article

  • usb mouse/keyboard doesn't work with 3.11.0-12-generic kernel

    - by x-yuri
    I can't use my usb keyboard/mouse after upgrade from raring to saucy. The keyboard works in grub menu and if I boot with the previous kernel version (3.8.0-31-generic). My new kernel version is 3.11.0-12-generic. I've got Mad Catz R.A.T.7 wired USB mouse, Canyon CNL-MBMSO02 wired usb mouse and Logitech diNovo Edge wireless keyboard, connected to computer through Logitech Unifying Receiver. Using PS/2 keyboard I've managed to get some information. dmesg says: [ 0.166273] ACPI: bus type USB registered [ 0.166273] usbcore: registered new interface driver usbfs [ 0.166273] usbcore: registered new interface driver hub [ 0.166273] usbcore: registered new device driver usb ... [ 3.534226] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 3.534228] ehci-pci: EHCI PCI platform driver [ 3.534291] ehci-pci 0000:00:1a.7: setting latency timer to 64 [ 3.534299] ehci-pci 0000:00:1a.7: EHCI Host Controller [ 3.534304] ehci-pci 0000:00:1a.7: new USB bus registered, assigned bus number 1 [ 3.534315] ehci-pci 0000:00:1a.7: debug port 1 [ 3.538218] ehci-pci 0000:00:1a.7: cache line size of 64 is not supported [ 3.538231] ehci-pci 0000:00:1a.7: irq 18, io mem 0xd3325400 [ 3.548017] ehci-pci 0000:00:1a.7: USB 2.0 started, EHCI 1.00 [ 3.548042] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 [ 3.548045] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.548048] usb usb1: Product: EHCI Host Controller [ 3.548050] usb usb1: Manufacturer: Linux 3.11.0-12-generic ehci_hcd [ 3.548053] usb usb1: SerialNumber: 0000:00:1a.7 [ 3.548155] hub 1-0:1.0: USB hub found [ 3.548159] hub 1-0:1.0: 6 ports detected [ 3.548311] ehci-pci 0000:00:1d.7: setting latency timer to 64 [ 3.548319] ehci-pci 0000:00:1d.7: EHCI Host Controller [ 3.548323] ehci-pci 0000:00:1d.7: new USB bus registered, assigned bus number 2 [ 3.548333] ehci-pci 0000:00:1d.7: debug port 1 [ 3.552228] ehci-pci 0000:00:1d.7: cache line size of 64 is not supported [ 3.552239] ehci-pci 0000:00:1d.7: irq 23, io mem 0xd3325000 [ 3.564014] ehci-pci 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 3.564044] usb usb2: New USB device found, idVendor=1d6b, idProduct=0002 [ 3.564047] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.564050] usb usb2: Product: EHCI Host Controller [ 3.564052] usb usb2: Manufacturer: Linux 3.11.0-12-generic ehci_hcd [ 3.564056] usb usb2: SerialNumber: 0000:00:1d.7 [ 3.564163] hub 2-0:1.0: USB hub found [ 3.564167] hub 2-0:1.0: 6 ports detected [ 3.564274] ehci-platform: EHCI generic platform driver [ 3.564280] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 3.564281] ohci-platform: OHCI generic platform driver [ 3.564287] uhci_hcd: USB Universal Host Controller Interface driver [ 3.564345] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [ 3.564347] uhci_hcd 0000:00:1a.0: UHCI Host Controller [ 3.564352] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3 [ 3.564378] uhci_hcd 0000:00:1a.0: irq 16, io base 0x0000f0c0 [ 3.564402] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.564404] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.564406] usb usb3: Product: UHCI Host Controller [ 3.564408] usb usb3: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.564410] usb usb3: SerialNumber: 0000:00:1a.0 [ 3.564478] hub 3-0:1.0: USB hub found [ 3.564482] hub 3-0:1.0: 2 ports detected [ 3.564589] uhci_hcd 0000:00:1a.1: setting latency timer to 64 [ 3.564592] uhci_hcd 0000:00:1a.1: UHCI Host Controller [ 3.564597] uhci_hcd 
0000:00:1a.1: new USB bus registered, assigned bus number 4 [ 3.564623] uhci_hcd 0000:00:1a.1: irq 21, io base 0x0000f0a0 [ 3.564647] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.564649] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.564651] usb usb4: Product: UHCI Host Controller [ 3.564653] usb usb4: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.564654] usb usb4: SerialNumber: 0000:00:1a.1 [ 3.564727] hub 4-0:1.0: USB hub found [ 3.564730] hub 4-0:1.0: 2 ports detected [ 3.564834] uhci_hcd 0000:00:1a.2: setting latency timer to 64 [ 3.564837] uhci_hcd 0000:00:1a.2: UHCI Host Controller [ 3.564843] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5 [ 3.564863] uhci_hcd 0000:00:1a.2: irq 18, io base 0x0000f080 [ 3.564885] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.564887] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.564889] usb usb5: Product: UHCI Host Controller [ 3.564891] usb usb5: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.564892] usb usb5: SerialNumber: 0000:00:1a.2 [ 3.564962] hub 5-0:1.0: USB hub found [ 3.564966] hub 5-0:1.0: 2 ports detected [ 3.565073] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 3.565076] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 3.565081] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6 [ 3.565101] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000f060 [ 3.565124] usb usb6: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.565127] usb usb6: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.565128] usb usb6: Product: UHCI Host Controller [ 3.565130] usb usb6: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.565132] usb usb6: SerialNumber: 0000:00:1d.0 [ 3.565195] hub 6-0:1.0: USB hub found [ 3.565198] hub 6-0:1.0: 2 ports detected [ 3.565303] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 3.565306] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 3.565310] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7 [ 3.565329] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000f040 [ 3.565352] usb usb7: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.565354] usb usb7: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.565356] usb usb7: Product: UHCI Host Controller [ 3.565358] usb usb7: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.565359] usb usb7: SerialNumber: 0000:00:1d.1 [ 3.565424] hub 7-0:1.0: USB hub found [ 3.565427] hub 7-0:1.0: 2 ports detected [ 3.565534] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 3.565537] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 3.565541] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8 [ 3.565560] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000f020 [ 3.565584] usb usb8: New USB device found, idVendor=1d6b, idProduct=0001 [ 3.565587] usb usb8: New USB device strings: Mfr=3, Product=2, SerialNumber=1 [ 3.565588] usb usb8: Product: UHCI Host Controller [ 3.565590] usb usb8: Manufacturer: Linux 3.11.0-12-generic uhci_hcd [ 3.565592] usb usb8: SerialNumber: 0000:00:1d.2 [ 3.565658] hub 8-0:1.0: USB hub found [ 3.565661] hub 8-0:1.0: 2 ports detected ... [ 4.120014] usb 2-3: new high-speed USB device number 2 using ehci-pci ... [ 4.468908] usb 2-3: New USB device found, idVendor=046d, idProduct=0825 [ 4.468912] usb 2-3: New USB device strings: Mfr=0, Product=0, SerialNumber=2 [ 4.468914] usb 2-3: SerialNumber: AF582E10 ... 
[ 5.284019] usb 5-2: new full-speed USB device number 2 using uhci_hcd [ 5.465903] usb 5-2: New USB device found, idVendor=046d, idProduct=0b04 [ 5.465908] usb 5-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 5.465911] usb 5-2: Product: Logitech BT Mini-Receiver [ 5.465914] usb 5-2: Manufacturer: Logitech [ 5.468948] hub 5-2:1.0: USB hub found [ 5.470898] hub 5-2:1.0: 3 ports detected [ 5.476096] Switched to clocksource tsc [ 5.712099] usb 7-2: new full-speed USB device number 2 using uhci_hcd [ 5.896366] usb 7-2: New USB device found, idVendor=046d, idProduct=c52b [ 5.896370] usb 7-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 5.896372] usb 7-2: Product: USB Receiver [ 5.896374] usb 7-2: Manufacturer: Logitech [ 6.140016] usb 8-1: new full-speed USB device number 2 using uhci_hcd [ 6.324597] usb 8-1: New USB device found, idVendor=0738, idProduct=1708 [ 6.324603] usb 8-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 6.324605] usb 8-1: Product: Mad Catz R.A.T.7 Mouse [ 6.324608] usb 8-1: Manufacturer: Mad Catz [ 6.564012] usb 8-2: new low-speed USB device number 3 using uhci_hcd [ 6.746602] usb 8-2: New USB device found, idVendor=1d57, idProduct=0010 [ 6.746608] usb 8-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 [ 6.746610] usb 8-2: Product: usb mouse with wheel [ 6.746613] usb 8-2: Manufacturer: HID-Compliant Mouse [ 7.337898] usb 5-2.2: new full-speed USB device number 3 using uhci_hcd [ 7.490902] usb 5-2.2: New USB device found, idVendor=046d, idProduct=c713 [ 7.490907] usb 5-2.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [ 7.490910] usb 5-2.2: Product: Logitech BT Mini-Receiver [ 7.490913] usb 5-2.2: Manufacturer: Logitech [ 7.490915] usb 5-2.2: SerialNumber: 001F203BD6A7 [ 7.569898] usb 5-2.3: new full-speed USB device number 4 using uhci_hcd [ 7.722901] usb 5-2.3: New USB device found, idVendor=046d, idProduct=c714 [ 7.722906] usb 5-2.3: New USB device strings: Mfr=1, Product=2, SerialNumber=3 [ 7.722909] usb 5-2.3: Product: Logitech BT Mini-Receiver [ 7.722911] usb 5-2.3: Manufacturer: Logitech [ 7.722913] usb 5-2.3: SerialNumber: 001F203BD6A7 lsusb (more output): Bus 002 Device 002: ID 046d:0825 Logitech, Inc. Webcam C270 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 008 Device 003: ID 1d57:0010 Xenta Bus 008 Device 002: ID 0738:1708 Mad Catz, Inc. Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 007 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 005 Device 004: ID 046d:c714 Logitech, Inc. diNovo Edge Keyboard Bus 005 Device 003: ID 046d:c713 Logitech, Inc. Bus 005 Device 002: ID 046d:0b04 Logitech, Inc. Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub More background. Before that I had a problem with logging in to GNOME. During which I upgraded all the packages at one point (apt-get upgrade) and it stopped booting at all (it didn't get to login screen). Then I fixed PATH issue and now I've got this usb-not-working issue. I tried reinstalling kernel, to no effect. Is there anything else I can do to fix or diagnose the problem?

    Read the article

  • Understanding 400 Bad Request Exception

    - by imran_ku07
        Introduction:   Why am I getting this exception? What is the cause of this error? Developers are always curious to know the root cause of an exception, even if they have already found a solution elsewhere. So what is the reason for this exception (400 Bad Request)? The answer is security. Security is an important feature for any application, and ASP.NET tries its best to give you as secure an application environment as possible. One important security feature is related to URLs, because there are various ways a hacker can try to access server resources, and it is therefore important to make your application as secure as possible. Fortunately, ASP.NET provides this security by throwing a Bad Request exception whenever it deems a request suspicious. In this article I will try to show when ASP.NET decides to throw this exception. You will also see some new ASP.NET 4 features which give developers some control over this situation.   Description:   http.sys Restrictions:   It is interesting to note that after deploying your application on a Windows server that runs IIS 6 or higher, the first receptionist for an HTTP request is the kernel-mode HTTP driver, http.sys. Therefore, to complete your request successfully you need to prove your validity to http.sys and pass its restrictions.   Every HTTP request URL must not contain any character in the ASCII range 0x00 to 0x1F, because they are not printable. These characters are invalid because they are invalid URL characters as defined in RFC 2396 of the IETF. A question may arise: how is it even possible to send an unprintable character? The answer is: when you send your request from your application in binary format.   Another restriction is on the size of the request. A request containing the protocol, server name, headers and query string information sent along with the request must not exceed 16KB, and no individual header should exceed 16KB.   Any individual path segment (the portion of the URL that does not include protocol, server name, and query string - for example, in http://a/b/c?d=e, b and c are individual path segments) must not contain more than 260 characters. http.sys also disallows URLs that have more than 255 path segments.   If any of the above rules is not followed, you will get the 400 Bad Request exception. The reason for these restrictions is that hack attacks against web servers often involve encoding the URL with different character representations.   You can change the default behavior enforced by http.sys using some registry switches present at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters    ASP.NET Restrictions:   After passing the restrictions enforced by the kernel-mode http.sys, the request is handed off to IIS and then to the ASP.NET engine, where it has to pass some further restrictions from ASP.NET in order to complete successfully.   ASP.NET only allows URL path lengths of up to 260 characters (the path only - for example, in http://a/b/c/d the path runs from a to d). This means that if you have a path containing 261 characters, you will get the Bad Request exception. This is due to the NTFS file-path limit.   Another restriction concerns which characters can be used in the URL path portion. You can use any characters except a set of invalid path characters. Here are some of these invalid characters in the path portion of a URL: <,>,*,%,&,:,\,?. 
To confirm this, just right-click in Solution Explorer, choose Add New Folder, and try to name the folder with any of the above characters - you will get the message: Files or folders cannot be empty strings nor can they contain only '.' or have any of the following characters.....  To check the above situation I created a Web Application and put Default.aspx inside an A%A folder (created from Windows Explorer), then navigated to http://localhost:1234/A%25A/Default.aspx; the response I got from the server was the Bad Request exception. The reason is that %25 is the % character, which is an invalid URL path character in ASP.NET. However, you can use these characters in the query string.   The reason for these restrictions is security; for example, with the help of % you can double-encode the URL path portion, and : is used to get a specific resource from the server.   New ASP.NET 4 Features:   It is worth discussing the new ASP.NET 4 features that put some control in the hands of the developer. Previously we were restricted to a 260-character path length and restricted from using certain characters, meaning those characters could not become part of a URL path segment.   You can configure maxRequestPathLength and maxQueryStringLength to allow longer or shorter paths and query strings. You can also customize the set of invalid characters using requestPathInvalidChars, under the httpRuntime element (see the sample configuration sketch below). This may be good news for someone who needs to use one of the characters that was invalid in previous versions. You can find further detail about the new ASP.NET URL features here.   Note that these new ASP.NET settings do not affect http.sys. This means that you have to pass the http.sys restrictions before ASP.NET ever comes into action. Note also that the http.sys restriction above applies to individual path segments, while maxRequestPathLength applies to the complete path (the portion of the URL that does not include protocol, server name, and query string). For example, if the URL is http://a/b/c/d?e=f, then maxRequestPathLength takes a/b/c/d into account, while http.sys takes a, b and c individually.   Summary:   Hopefully this helps you understand how some of these initial security features come into play, but I also recommend that you read (at least the first chapter, called Initial Phases of a Web Request, of) Professional ASP.NET 2.0 Security, Membership, and Role Management by Stefan Schackow. This is really a nice book.
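As a rough illustration of those settings (my own sketch, not from the original article - the values are arbitrary examples), the relevant attributes sit on the httpRuntime element in web.config; note that characters such as < and & must be XML-escaped inside the attribute value:

    <configuration>
      <system.web>
        <!-- Hypothetical example values: allow longer paths and query strings, keep the default invalid characters -->
        <httpRuntime maxRequestPathLength="500"
                     maxQueryStringLength="4096"
                     requestPathInvalidChars="&lt;,&gt;,*,%,&amp;,:,\,?" />
      </system.web>
    </configuration>

To allow one of these characters, you would simply remove it from the requestPathInvalidChars list - subject, as noted above, to http.sys having its own say first.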

    Read the article

  • Replacing jQuery.live() with jQuery.on()

    - by Rick Strahl
    jQuery 1.9 and 1.10 have introduced a host of changes, but for the most part these changes are transparent to existing application usage of jQuery. After spending some time last week with a few of my projects and going through them with a specific eye for jQuery failures I found that for the most part there wasn't a big issue. The vast majority of code continues to run just fine with either 1.9 or 1.10 (1.10 is in sync with 2.0, which removes support for legacy Internet Explorer pre-9.0 versions). However, one particular change in the new versions has caused me quite a bit of update trouble: the removal of the jQuery.live() function. This is my own fault I suppose - .live() has been deprecated for a while, but with 1.9 and later it was finally removed altogether from jQuery. In the past I had quite a bit of jQuery code that used .live() and it's one of the things that's holding back my upgrade process, although I'm slowly cleaning up my code and switching to the .on() function as the replacement. jQuery.live() jQuery.live() was introduced a long time ago to simplify handling events on matched elements that currently exist in the document and those that are added in the future and also match the selector. jQuery uses event bubbling, special event binding, plus some magic using meta data attached to a parent level element to check and see if the original target event element matches the selected elements (for more info see Elijah Manor's comment below). An Example Assume a list of items like the following in HTML, and further assume that the items in this list can be appended to at a later point. In this app there's a smallish initial list that loads to start, and as the user scrolls towards the end of the initial small list more items are loaded dynamically and added to the list.<div id="PostItemContainer" class="scrollbox"> <div class="postitem" data-id="4z6qhomm"> <div class="post-icon"></div> <div class="postitemheader"><a href="show/4z6qhomm" target="Content">1999 Buick Century For Sale!</a></div> <div class="postitemprice rightalign">$ 3,500 O.B.O.</div> <div class="smalltext leftalign">Jun. 07 @ 1:06am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> <div class="postitem" data-id="2jtvuu17"> <div class="postitemheader"><a href="show/2jtvuu17" target="Content">Toyota VAN 1987</a></div> <div class="postitemprice rightalign">$950</div> <div class="smalltext leftalign">Jun. 07 @ 12:29am</div> <div class="post-byline">- Vehicles - Automobiles</div> </div> … </div> With the jQuery.live() function you could easily select elements and hook up a click handler like this:$(".postitem").live("click", function() {...}); Simple and perfectly readable. The behavior of the .live handler generally was the same as the corresponding simple event handlers like .click(), except that you have to explicitly name the event instead of using one of the methods. Re-writing with jQuery.on() With .live() removed in 1.9 and later we have to re-write the .live() code above with an alternative. The jQuery documentation points you at the .on() or .delegate() functions to update your code. jQuery.on() is a more generic event handler function, and it's what jQuery uses internally to map the high level event functions like .click(), .change() etc. that jQuery exposes. Using jQuery.on() however is not a one-to-one replacement for the .live() function. 
While .on() can handle events directly and use the same syntax as .live() did, you'll find if you simply switch out .live() with .on() that events on not-yet-existing elements will not fire. IOW, the key feature of .live() is not there. You can use .on() to get the desired effect however, but you have to change the syntax to explicitly handle the event you're interested in on the container, and then provide a filter selector to specify which elements you actually want to handle the event for. Sounds more complicated than it is and it's easier to see with an example. For the list above, hooking .postitem clicks using jQuery.on() looks like this:$("#PostItemContainer").on("click", ".postitem", function() {...}); You specify a container that can handle the .click event and then provide a filter selector to find the child elements that trigger the actual event. So here #PostItemContainer contains many .postitems, whose click events I want to handle. Any container will do including document, but I tend to use the container closest to the elements I actually want to handle the events on to minimize the event bubbling that occurs to capture the event. With this code I get the same behavior as with .live() and now as new .postitem elements are added the click events are always available. Sweet. Here's the full event signature for the .on() function: .on( events [, selector ] [, data ], handler(eventObject) ) Note that the selector is optional - if you omit it you essentially create a simple event handler that handles the event directly on the selected object. The filter/child selector is required if you want live-like - uh, .live()-like - behavior to happen. While it's a bit more verbose than what .live() did, .on() provides the same functionality by being more explicit about what your parent container for trapping events is. .on() is good Practice even for ordinary static Element Lists As a side note, it's a good practice to use jQuery.on() or jQuery.delegate() for events in most cases anyway, using this 'container event trapping' syntax. That's because rather than requiring lots of event handlers on each of the child elements (.postitem in the sample above), there's just one event handler on the container, and only when clicked does jQuery drill down to find the matching filter element and try to match it to the originating element. In the early days of jQuery I used to manually build handlers that did this, drilling from the event object into the originalTarget to determine if it was a matching element. With later versions of jQuery the various event functions in jQuery essentially provide this functionality out of the box with functions like .on() and .delegate(). All of this is nothing new, but I thought I'd write this up because I have on a few occasions forgotten what exactly was needed to replace the many .live() function calls that litter my code - especially older code. This will be a nice reminder next time I have a memory blank on this topic. 
And maybe along the way I've helped one or two of you as well to clean up your .live() code… © Rick Strahl, West Wind Technologies, 2005-2013. Posted in jQuery
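A small companion sketch (mine, not from the post), reusing the #PostItemContainer / .postitem markup above: the .delegate() function the jQuery documentation also points at takes the same pieces as the delegated form of .on(), just in a different order, and .off() with the matching container and selector is the usual way to detach such a handler later.

    // Equivalent delegated handler via .delegate() - note the argument order (selector first, then event type)
    $("#PostItemContainer").delegate(".postitem", "click", function () {
        // inside the handler, "this" is the .postitem element that was clicked
        var id = $(this).data("id");
        console.log("clicked post item", id);
    });

    // Later, detach the delegated click handler that was bound with .on("click", ".postitem", ...)
    $("#PostItemContainer").off("click", ".postitem");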

    Read the article

  • Getting MySQL work with Entity Framework 4.0

    - by DigiMortal
    Does MySQL work with Entity Framework 4.0? The answer is: yes, it works! I just put up an experimental project to play with MySQL and Entity Framework 4.0, and in this posting I will show you how to get MySQL data into EF. I will also give some suggestions on how to deploy your applications to hosting and cloud environments. MySQL stuff As you may guess, you need MySQL running somewhere. I have MySQL installed on my development machine so I can also develop when I'm offline. The other thing you need is the MySQL Connector for the .NET Framework. Currently there is a development version of MySQL Connector/NET 6.3.5 available that supports Visual Studio 2010. Before you start, download MySQL and Connector/NET: MySQL Community Server Connector/NET 6.3.5 If you are not a big fan of phpMyAdmin then you can try out a free desktop client for MySQL - HeidiSQL. I am using it and I am really happy with this program. NB! If you have just set up MySQL, then also create a database with a couple of tables in it. To use all the features of Entity Framework 4.0 I suggest you use InnoDB or another engine that has support for foreign keys. Connecting MySQL to Entity Framework 4.0 Now create a simple console project using Visual Studio 2010 and go through the following steps. 1. Add a new ADO.NET Entity Data Model to your project. Give the model an informative name that you will be able to recognize later. Now you can choose how you want to create your model. Select "Generate from database" and click OK. 2. Set up the database connection Change the data connection and select MySQL Database as the data source. You may also need to set the provider - there is only one choice; select it if the data provider combo shows an empty value. Click OK and insert the connection information you are asked about. Don't forget to click the Test Connection button to see if your connection data is okay. If everything works then click OK. 3. Insert the context name Now you should see the following dialog. Insert your data model name for the application configuration file and click OK. Click the Next button. 4. Select tables for the model Now you can select the tables and views your classes are based on. I have a small database with events data. Uncheck the checkbox "Include foreign key columns in the model" - it is damn annoying to get them out of the model later. Also insert an informative and easy-to-remember name for your model. Click the Finish button. 5. Define your classes Now it's time to define your classes. Here you can see what Entity Framework generated for you. Relations were detected automatically - that's why we needed foreign keys. The names of the classes and their members are not nice yet. After some modifications my class model looks like the following diagram. Note that I removed the attendees navigation property from the person class. Now my classes look nice and they follow the conventions I use when naming classes and their members. NB! Don't forget to check the properties of the classes (Properties window) and modify their entity set names if the set names contain numbers (I changed the set name for Entity from Entity1 to Entities). 6. Let's test! Now let's write a simple test program to see if MySQL data runs through Entity Framework 4.0 as expected. My program looks for events I attended. 
using(var context = new MySqlEntities()) {     var myEvents = from e in context.Events                     from a in e.Attendees                     where a.Person.FirstName == "Gunnar" &&                             a.Person.LastName == "Peipman"                     select e;       Console.WriteLine("My events: ");       foreach(var e in myEvents)     {         Console.WriteLine(e.Title);     } }   Console.ReadKey(); And when I run it I get the result shown in the screenshot on the right. I checked against the database and these results are correct. On the first run the connector seems to work slowly, but this is only a first-run effect. Once the connector is loaded into memory by Entity Framework it works fast from that point on. Now let's see what we have to do to get our program working in hosting and cloud environments where the MySQL connector is not installed. Deploying the application to hosting and cloud environments If your hosting or cloud environment has no MySQL connector installed, you have to provide the MySQL connector assemblies with your project. Add the following assemblies to your project's bin folder and include them in your project (otherwise they are not packaged by WebDeploy and the Azure tools): MySQL.Data MySQL.Data.Entity MySQL.Web You can also add references to these assemblies and mark the references as Copy Local so the assemblies are copied to the binary folder of your application. If you have references to these assemblies then you don't have to include them in your project from the bin folder. Also add the following block to your application configuration file. <?xml version="1.0" encoding="utf-8"?> <configuration> ...   <system.data>     <DbProviderFactories>         <add              name="MySQL Data Provider"              invariant="MySql.Data.MySqlClient"              description=".Net Framework Data Provider for MySQL"              type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data,                   Version=6.2.0.0, Culture=neutral,                   PublicKeyToken=c5687fc88969c44d"          />     </DbProviderFactories>   </system.data> ... </configuration> Conclusion It was not hard to get the MySQL connector installed and MySQL connected to Entity Framework 4.0. To use the full power of Entity Framework we used the InnoDB engine because it supports foreign keys. It was also easy to query our model. To get our project online we needed a few easy modifications to our project and configuration files.
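To round the example out, here is a hedged sketch (mine, not from the original post) of writing data back through the same generated model. It assumes the entity class generated for the Events set is called Event with a Title property, as in the query above; any other required columns in your Events table would need to be set as well.

    using (var context = new MySqlEntities())
    {
        // Create a new entity and add it to the ObjectSet exposed by the generated EF4 context
        var newEvent = new Event { Title = "Entity Framework 4.0 and MySQL demo" };

        // NB! If the Events table has other NOT NULL columns, set those properties here too
        context.Events.AddObject(newEvent);

        // SaveChanges() pushes the INSERT through Connector/NET to MySQL
        context.SaveChanges();
    }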

    Read the article

  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
    With the betas for .NET 4.5, Visual Studio 11 and Windows 8 shipping, many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine. When .NET 4.5 is installed it effectively replaces .NET 4.0 on the machine. .NET 4.0 gets overwritten by a new version of .NET 4.5 which - according to Microsoft - is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's so much more than backwards compatibility that makes this awkward at best and confusing at worst. What does 'Replacement' mean? When you install .NET 4.5, your .NET 4.0 assemblies in \Windows\Microsoft.NET\Framework\v4.0.30319 are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies, for example). The following screenshot compares system.dll on my test machine running .NET 4.5 and on my production laptop running stock .NET 4.0:   Clearly they are different files with a difference in file sizes (interesting that the 4.5 version is actually smaller). That's not all. If you actually query the runtime version when .NET 4.5 is installed with Environment.Version you still get: 4.0.30319 If you open the properties of the System.dll assembly in .NET 4.5 you'll also see: Notice that the file version is also left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey to say the least. There's no easy or obvious way to tell whether you are running on 4.0 or 4.5 - to the application they appear to be the same runtime version. And that is what Microsoft intends here. .NET 4.5 is intended as an in-place upgrade. Compile to 4.5, run on 4.0 - not quite! You can compile an application for .NET 4.5 and run it on the 4.0 runtime - that is, until you hit a new feature that doesn't exist on 4.0. At which point the app bombs at runtime. Say you write some code that is mostly .NET 4.0, but only has a few of the new features of .NET 4.5 like async/await buried deep in the bowels of the application where it only fires occasionally. .NET will happily start your application and run all the 4.0 code just fine, until it hits that 4.5 code - and then crash unceremoniously at runtime. Oh joy! You can run .NET 4.0 applications on .NET 4.5 of course, and that should work without much fanfare. Different than .NET 3.0/3.5 Note that this in-place replacement is very different from the side-by-side installs of .NET 2.0 and 3.0/3.5, which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime, which wasn't changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 11, you can still target .NET 4.0 or you can target .NET 4.5. But you are in effect referencing the same set of assemblies for both, regardless of which version you use. 
What's different is the compiler used to compile and link your code, so compiling with .NET 4.0 gives you just the subset of the functionality that is available in .NET 4.0, but when you use the 4.5 compiler you get the full functionality of what's actually available in the assemblies and extra libraries. It doesn't look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications. Good news - Bad news Microsoft is apparently trying hard to experiment with every possible permutation of releasing new versions of the .NET framework. No two updates have been the same. Clearly updating to a full new version of .NET (i.e. the .NET 2.0, 4.0 and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky even by Microsoft's standards. Especially given that .NET 4.5 includes a fairly significant update with all the async functionality baked into the runtime. Most of the IO APIs have been updated to support task-based async operation, which significantly affects many existing APIs. To make things worse, .NET 4.5 will be the initial version of .NET that ships with Windows 8, so it will be with us for a long time to come unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn't happen). This is the same story we had when Vista launched with .NET 3.0, which was a minor version that was quickly replaced by 3.5, which was more long-lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions which ran on .NET 2.0. I can't count the number of support calls and questions I've fielded because people couldn't find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It's all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and end up thoroughly confused about which version is installed. It's hard for me to see any upside to an in-place update, and I haven't really seen a good explanation of why this approach was decided on. Sure, if the version stays the same, existing assembly bindings don't break, so applications can keep running through an update. I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won't be recompiling all code and thoroughly testing that code on .NET 4.5? A recompile requirement doesn't seem that serious in light of a major version upgrade.  Resources http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160 © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET
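Since the post notes that about the only giveaway is the build number baked into the file version, here is a rough sketch (my own, not from the post) of reading that number at runtime. The '17000 or higher means 4.5' cut-off is just the beta-era heuristic suggested above, not an official check.

    using System;
    using System.Diagnostics;

    class RuntimeVersionSniffer
    {
        static void Main()
        {
            // Reports 4.0.30319 on both .NET 4.0 and .NET 4.5, since 4.5 is an in-place replacement
            Console.WriteLine("Environment.Version: {0}", Environment.Version);

            // Inspect the file version of mscorlib (e.g. 4.0.30319.261 vs 4.0.30319.17379)
            string corlibPath = typeof(object).Assembly.Location;
            FileVersionInfo fvi = FileVersionInfo.GetVersionInfo(corlibPath);
            Console.WriteLine("mscorlib file version: {0}", fvi.FileVersion);

            // The build number the post talks about is the last part of the file version
            bool probablyNet45 = fvi.FilePrivatePart >= 17000;
            Console.WriteLine("Probably running on .NET 4.5: {0}", probablyNet45);
        }
    }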

    Read the article

  • CodePlex Daily Summary for Thursday, March 11, 2010

    CodePlex Daily Summary for Thursday, March 11, 2010New ProjectsASP.NET Wiki Control: This ASP.NET user control allows you to embed a very useful wiki directly into your already existing ASP.NET website taking advantage of the popula...BabyLog: Log baby daily activity.buddyHome: buddyHome is a project that can make your home smarter. as good as your buddy. Cloud Community: Cloud Community makes it easier for organizations to have a simple to use community platform. Our mission is to create an easy to use community pl...Community Connectors for Microsoft CRM 4.0: Community Connectors for Microsoft CRM 4.0 allows Microsoft CRM 4.0 customers and partners to monitor and analyze customers’ interaction from their...Console Highlighter: Hightlights Microsoft Windows Command prompt (cmd.exe) by outputting ANSI VT100 Control sequences to color the output. These sequences are not hand...Cornell Store: This is IN NO WAY officially affiliated or related to the Cornell University store. Instead, this is a project that I am doing for a class. Ther...DevUtilities: This project is for creating some utility tools, and they will be useful during the development.DotNetNuke® Skin Maple: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by DyNNamite.co.uk. The package includes 4 color variations and sev...HRNet: HRNetIIS Web Site Monitoring: A software for monitor a particular web site on IIS, even if its IP is sharing between different web site.Iowa Code Camp: The source code for the Iowa Code Camp website.Leonidas: Leonidas is a virtual tutorLunch 'n Learn: The Lunch 'n Learn web application is an open source ASP.NET MVC application that allows you to setup lunch 'n learn presentations for your team, c...MNT Cryptography: A very simple cryptography classMooiNooi MVC2LINQ2SQL Web Databinder: mvc2linq2sql is a databinder for ASP.NET MVC that make able developer to clean bind object from HTML FORMS to Linq entities. Even 1 to N relations ...MoqBot: MoqBot is an auto mocking library for Moq and Ninject.mtExperience1: hoiMvcPager: MvcPager is a free paging component for ASP.NET MVC web application, it exposes a series of extension methods for using in ASP.NET MVC applications...OCal: OCal is based on object calisthenics to identify code smellsPex Custom Arithmetic Solver: Pex Custom Arithmetic Solver contains a collection of meta-heuristic search algorithms. The goal is to improve Pex's code coverage for code involvi...SetControls: Расширеные контролы для ASP.NET приложений. 
Полная информация ближе к релизу...shadowrage1597: CTC 195 Game Design classSharePoint Team-Mailer: A SharePoint 2007 solution that defines a generic CustomList for sending e-mails to SharePoint Groups.Sql Share: SQL Share is a collaboration tool used within the science to allow database engineers to work tightly with domain scientists.TechCalendar: Tech Events Calendar ASP.NET project.ZLYScript: A very simple script language compiler.New ReleasesALGLIB: ALGLIB 2.4.0: New ALGLIB release contains: improved versions of several linear algebra algorithms: QR decomposition, matrix inversion, condition number estimatio...AmiBroker Plug-Ins with C#: AmiBroker Plug-Ins v0.0.2: Source codes and a binaryAppFabric Caching UI Admin Tool: AppFabric Caching Beta 2 UI Admin Tool: System Requirements:.NET 4.0 RC AppFabric Caching Beta2 Test On:Win 7 (64x)Autodocs - WCF REST Automatic API Documentation Generator: Autodocs.ServiceModel.Web: This archive contains the reference DLL, instructions and license.Compact Plugs & Compact Injection: Compact Injection and Compact Plugs 1.1 Beta: First release of Compact Plugs (CP). The solution includes a simple example project of CP, called "TestCompactPlugs1". Also some fixes where made ...Console Highlighter: Console Highlighter 0.9 (preview release): Preliminary release.Encrypted Notes: Encrypted Notes 1.3: This is the latest version of Encrypted Notes (1.3). It has an installer - it will create a directory 'CPascoe' in My Documents. The last one was ...Family Tree Analyzer: Version 1.0.2: Family Tree Analyzer Version 1.0.2 This early beta version implements loading a gedcom file and displaying some basic reports. These reports inclu...FRC1103 - FRC Dashboard viewer: 2010 Documentation v0.1: This is my current version of the control system documentation for 2010. It isn't complete, but it has the information required for a custom dashbo...jQuery.cssLess: jQuery.cssLess 0.5 (Even less release): NEW - support for nested special CSS classes (like :hover) MAIN RELEASE This release, code "Even less", is the one that will interpret cssLess wit...MooiNooi MVC2LINQ2SQL Web Databinder: MooiNooi MVC2LINQ2SQL DataBinder: I didn't try this... I just took it off from my project. Please, tell me any problem implementing in your own development and I'll be pleased to h...MvcPager: MvcPager 1.2 for ASP.NET MVC 1.0: MvcPager 1.2 for ASP.NET MVC 1.0Mytrip.Mvc: Mytrip 1.0 preview 1: Article Manager Blog Manager L2S Membership(.NET Framework 3.5) EF Membership(.NET Framework 4) User Manager File Manager Localization Captcha ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.117: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's NewThis version adds ...Pex Custom Arithmetic Solver: PexCustomArithmeticSolver: This is the alpha release containing the Alternating Variable Method and Evolution Strategies to try and solve constraints over floating point vari...Scrum Sprint Monitor: v1.0.0.44877: What is new in this release? 
Major performance increase in animations (up to 50 fps from 2 fps) by replacing DropShadow effect with png bitmaps; ...sELedit: sELedit v1.0b: + Added support for empty strings / wstrings + Fixed: critical bug in configuration files (list 53)sPWadmin: pwAdmin v0.9_nightly: + Fixed: XML editor can now open and save character templates + Added: PWI item name database + Added: Plugin SupportTechCalendar: Events Calendar v.1.0: Initial release.The Silverlight Hyper Video Player [http://slhvp.com]: Beta 2: Beta 2.0 Some fixes from Beta 1, and a couple small enhancements. Intensive testing continues, and I will continue to update the code at least ever...ThreadSafe.Caching: 2010.03.10.1: Updates to the scavanging behaviour since last release. Scavenging will now occur every 30 seconds by default and all objects in the cache will be ...VCC: Latest build, v2.1.30310.0: Automatic drop of latest buildVisual Studio DSite: Email Sender (C++): The same Email Sender program that I but made in visual c plus plus 2008 instead of visual basic 2008.Web Forms MVP: Web Forms MVP CTP7: The release can be considered stable, and is in use behind several high traffic, public websites. It has been marked as a CTP release as it is not ...White Tiger: 0.0.3.1: Now you can load or create files with whatever root element you want *check f or sets file permisionsMost Popular ProjectsMetaSharpWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesASP.NET Ajax LibraryMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsN2 CMSFasterflect - A Fast and Simple Reflection APIjQuery Library for SharePoint Web ServicesBlogEngine.NETFarseer Physics Enginepatterns & practices – Enterprise LibraryCaliburn: An Application Framework for WPF and Silverlight

    Read the article

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In this process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms.  However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms. INTRODUCTION A stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this could be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particularly helpful. The term "Oracle" may be found in nearly all documents, as with the term "Data." The term "Mining" is more unique, but could also refer to the Mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But if we include them, they may introduce too much noise. Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results using this technique are often quite valuable. As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management" as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining. CREATING A WHITE LIST We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map. 
All text has been converted to upper case and values in the TERM column are intended to be mapped to the token in the TOKEN column. TERM                                TOKEN DATA MINING                         DATAMINING ORACLE DATA MINING                  ORACLEDATAMINING 11G                                 ORACLE11G JAVA                                JAVA CRM                                 CRM CUSTOMER RELATIONSHIP MANAGEMENT    CRM ... Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules: CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE); CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),                                  query_string  VARCHAR2(400)); We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over  the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term, and the corresponding query string, which specifies synonyms for the token in the thesaurus. DECLARE   cursor c2 is     select token, term     from my_term_token_map; BEGIN   for r_c2 in c2 loop     CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);     EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values                        (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'     using r_c2.token;   end loop; END; We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:     'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)At this point, we create a CTXRULE index on the my_thesaurus_rules table: create index my_thesaurus_rules_idx on        my_thesaurus_rules(query_string)        indextype is ctxsys.ctxrule; In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.
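As a teaser for that next step, here is a minimal sketch of how a CTXRULE index is typically queried - my own guess at the shape of it, not the actual code from the follow-up post. The MATCHES operator takes the indexed rules column and a document, and returns the rows whose query_string matches, i.e., the white-list tokens found in that document.

    -- Which white-list tokens appear in a given piece of text?
    SELECT main_term
    FROM   my_thesaurus_rules
    WHERE  MATCHES(query_string, :document_text) > 0;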

    Read the article

  • Transparency and AlphaBlending

    - by TechTwaddle
    In this post we'll look at the AlphaBlend() API and how it can be used for semi-transparent blitting. AlphaBlend() takes a source device context and a destination device context (DC) and combines the bits in such a way that it gives a transparent effect. Follow the links for the MSDN documentation. So let's take an image like the one below and AlphaBlend() it onto our window. The code to do so is below (under the WM_PAINT message of WndProc): HBITMAP hBitmap=NULL, hBitmapOld=NULL; HDC hMemDC=NULL; BLENDFUNCTION bf; hdc = BeginPaint(hWnd, &ps); hMemDC = CreateCompatibleDC(hdc); hBitmap = LoadBitmap(g_hInst, MAKEINTRESOURCE(IDB_BITMAP1)); hBitmapOld = (HBITMAP)SelectObject(hMemDC, hBitmap); bf.BlendOp = AC_SRC_OVER; bf.BlendFlags = 0; bf.SourceConstantAlpha = 80; /* transparency value between 0-255 */ bf.AlphaFormat = 0;    AlphaBlend(hdc, 0, 25, 240, 100, hMemDC, 0, 0, 240, 100, bf); SelectObject(hMemDC, hBitmapOld); DeleteDC(hMemDC); DeleteObject(hBitmap); EndPaint(hWnd, &ps);   The code above creates a memory DC (hMemDC) using CreateCompatibleDC(), loads a bitmap onto the memory DC and AlphaBlends it onto the device DC (hdc) with a transparency value of 80. The result is: Pretty simple till now. Now let's try to do something a little more exciting. Let's get two images involved, each overlapping the other, giving a better demonstration of transparency. I am also going to add a few buttons so that the user can increase or decrease the transparency by clicking on the buttons. Since this was the first time I played around with the GDI APIs, I ran into something that everybody runs into sooner or later: flickering. When clicking the buttons the images would flicker a lot; I figured out why, and used something called double buffering to avoid it. We will look at both my first implementation and the second implementation just to give the concept a little more depth and perspective. A few pre-conditions before I dive into the code: - hBitmap and hBitmap2 are handles to the two images obtained using LoadBitmap(), these variables are global and are initialized under WM_CREATE - The two buttons in the application are labeled Opaque++ (make more opaque, less transparent) and Opaque-- (make less opaque, more transparent) - DrawPics(HWND hWnd, int step=0); is the function called to draw the images on the screen. This is called from under WM_PAINT and also when the buttons are clicked. When Opaque++ is clicked the 'step' value passed to DrawPics() is +20 and when Opaque-- is clicked the 'step' value is -20. 
The default value of 'step' is 0 Now lets take a look at my first implementation: //this funciton causes flicker, cos it draws directly to screen several times void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL;     BLENDFUNCTION bf;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     //create a memory DC     hMemDC = CreateCompatibleDC(hdc);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents of screen     if (step < 0)     {         RECT rect = {0, 0, 240, 200};         FillRect(hdc, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     BitBlt(hdc, 0, 25, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hdc, 0, 75, 240, 100, hMemDC, 0, 0, 240, 100, bf);     DeleteDC(hMemDC);     ReleaseDC(hWnd, hdc); }   In the code above, we first get the window DC using GetDC() and create a memory DC using CreateCompatibleDC(). Then we select hBitmap2 onto the memory DC and Blt it on the window DC (hdc). Next, we select the other image, hBitmap, onto memory DC and AlphaBlend() it over window DC. As I told you before, this implementation causes flickering because it draws directly on the screen (hdc) several times. The video below shows what happens when the buttons were clicked rapidly: Well, the video recording tool I use captures only 15 frames per second and so the flickering is not visible in the video. So you're gonna have to trust me on this, it flickers (; To solve this problem we make sure that the drawing to the screen happens only once and to do that we create an additional memory DC, hTempDC. We perform all our drawing on this memory DC and finally when it is ready we Blt hTempDC on hdc, and the images are displayed in one go. 
Here is the code for our new DrawPics() function: //no flicker void DrawPics(HWND hWnd, int step) {     HDC hdc=NULL, hMemDC=NULL, hTempDC=NULL;     BLENDFUNCTION bf;     HBITMAP hBitmapTemp=NULL, hBitmapOld=NULL;     static UINT32 transparency = 100;     //no point in drawing when transparency is 0 and user clicks Opaque--     if (transparency == 0 && step < 0)         return;     //no point in drawing when transparency is 240 (opaque) and user clicks Opaque++     if (transparency == 240 && step > 0)         return;         hdc = GetDC(hWnd);     if (!hdc)         return;     hMemDC = CreateCompatibleDC(hdc);     hTempDC = CreateCompatibleDC(hdc);     hBitmapTemp = CreateCompatibleBitmap(hdc, 240, 150);     hBitmapOld = (HBITMAP)SelectObject(hTempDC, hBitmapTemp);     if (!hMemDC)     {         ReleaseDC(hWnd, hdc);         return;     }     //while increasing transparency, clear the contents     if (step < 0)     {         RECT rect = {0, 0, 240, 150};         FillRect(hTempDC, &rect, (HBRUSH)GetStockObject(WHITE_BRUSH));     }     SelectObject(hMemDC, hBitmap2);     //Blt hBitmap2 directly to hTempDC     BitBlt(hTempDC, 0, 0, 240, 100, hMemDC, 0, 0, SRCCOPY);         SelectObject(hMemDC, hBitmap);     transparency += step;     if (transparency >= 240)         transparency = 240;     if (transparency <= 0)         transparency = 0;     bf.BlendOp = AC_SRC_OVER;     bf.BlendFlags = 0;     bf.SourceConstantAlpha = transparency;     bf.AlphaFormat = 0;            AlphaBlend(hTempDC, 0, 50, 240, 100, hMemDC, 0, 0, 240, 100, bf);     //now hTempDC is ready, blt it directly on hdc     BitBlt(hdc, 0, 25, 240, 150, hTempDC, 0, 0, SRCCOPY);     SelectObject(hTempDC, hBitmapOld);     DeleteObject(hBitmapTemp);     DeleteDC(hMemDC);     DeleteDC(hTempDC);     ReleaseDC(hWnd, hdc); }   This function is very similar to the first version, except for the use of hTempDC. Another point to note is the use of CreateCompatibleBitmap(). When a memory device context is created using CreateCompatibleDC(), the context is exactly one monochrome pixel high and one monochrome pixel wide. So in order for us to draw anything onto hTempDC, we first have to set a bitmap on it. We use CreateCompatibleBitmap() to create a bitmap of required dimension (240x150 above), and then select this bitmap onto hTempDC. Think of it as utilizing an extra canvas, drawing everything on the canvas and finally transferring the contents to the display in one scoop. And with this version the flickering is gone, video follows:   If you want the entire solutions source code then leave a message, I will share the code over SkyDrive.

    Read the article

  • WWDC and Tech Ed: A Tale of Two DevCons

    - by andrewbrust
    Next week marks the first full week of June.  Summer will feel in full swing and it will be a pretty big season for technology.  In seeming acknowledgement of that very fact, both Apple and Microsoft will be holding large developers conferences starting Monday.  Apple will hold its annual Worldwide Developers Conference (WWDC) in lovely San Francisco and Microsoft will hold its Tech Ed conference in muggy, oil-laden yet soulful New Orleans.  A brief survey of each show reveals much about the differences in each company’s offerings, strategy, and approach to customers and partners. In the interest of full disclosure, I must explain that I will be speaking at Microsoft’s Tech Ed show, and have done so, on and off, since 2003.  I have never been to an Apple conference and, as readers of this blog may know, I acquired my first ever Apple product 2 months ago when I bought an iPad on the day of that product’s launch.  I think I have keen insights into Microsoft’s conference.  My ability to comment on Apple’s event ranges somewhere between backseat driver and naive observer.  Just so you know. Although both shows cater to their respective company’s developers, there are a number of differences in the events’ purposes and content approaches.  First off, let’s consider each show as a news and PR vehicle.  WWDC will feature Steve Jobs’ keynote address and most likely will be where Apple officially reveals details of its 4th-generation iPhone. Jobs will likely also provide deep background information on the corresponding iPhone OS release.  These presumed announcements will make the show a magnet for the tech press and tech blogger elite.  Apple’s customers will be interested too, especially since the iPhone OS release will likely be made available to owners of existing iPhone, iPod Touch and iPad devices. Tech Ed, on the other hand, may not be especially newsworthy at all.  The keynote address will be given by Bob Muglia, who is President of the company’s Server and Tools Division, and he’ll likely be reviewing things more than previewing them. That’s because the company has, in the last 6-8 months, already released new versions of a majority of its products, including Windows, Office, SharePoint, SQL Server, Exchange, its Azure cloud platform, its .NET software development layer, its Silverlight Rich Internet Application (RIA) technology and its Visual Studio developer suite.  Redmond’s product pipeline has functioned more like a firehose of late, and the company has a ton of work to do to get developers up to speed on everything that’s new. I know I keep saying “developers,” but in Tech Ed’s case, that’s not really accurate.  In North America, Tech Ed caters to both developers and IT pros (i.e. technologists who work with physical IT infrastructure, as well as security and administration of the server software that runs on it).  This pairing has, since its inception, struck some as anomalous and others, including many exhibitors, as very smart. Certainly, it means Tech Ed ends up being a confab for virtually all professionals in Microsoft’s ecosystem.  And this year, Microsoft’s Business Intelligence (BI) conference will be co-located with Tech Ed, further enhancing that fusion effect. Clearly then, Microsoft’s show will focus on education, as its name assures us.  Apple’s will serve as both a press event and an opportunity to get its own App Store developer channel synced up with its newest technology advances.  
For example, we already know that iPhone OS 4.0 will provide for a limited multitasking capability; that will only work well if people know how to code to it in a capable way.  Apple also told us its iAd advertising platform will be part of the new OS, and Steve Jobs insists that’s to provide a revenue opportunity for developers.  This too, then, needs to be explicated and soaked up buy the faithful. A look at each show’s breakout session lineup provides some interesting takeaways.  WWDC will have very few Mac-specific sessions on offer, and virtually no sessions that at are IT- or “Enterprise-“ related.  It’s all about the phone, music players and tablets.  However, WWDC will have plenty of low-level, hardcore tech coverage of such things as Advanced Memory Analysis and Creating Secure Applications, as well as lots of rich media-related content like Core Animation and Game Design and Development.  Beyond Apple’s proprietary platform, WWDC will also feature an array of sessions on HTML 5 and other Web standards.  In all, WWDC offers over 100 technical sessions and hands-on labs. What about Tech Ed’s editorial content?  Like the target audience, it really runs the gamut.  The show has 21 tracks (versus WWDC’s 5) and more than 745 “learning opportunities” which include breakout sessions, demo stations, hands-on labs and BIrds of a Feather discussion sessions.  Topics range from Architecture talks like Patterns of Parallel Programming to cloud computing talks like Building High Capacity Compute Applications with Windows Azure to IT-focused topics like Virtualization of Microsoft SharePoint 2010 Farm Architecture.  I also count 19 sessions on Windows Phone 7.  Unfortunately, with regard to Web standards and HTML 5, only a few sessions are offered, all of them specific to Internet Explorer. All-in-all, Apple’s show looks more exciting and “sexier” than Tech Ed. Microsoft’s show seems a lot more enterprise-focused than WWDC. This is, of course, well in sync with each company’s approach and products.  Microsoft’s content is much wider ranging and bests WWDC in sheer volume of sessions and labs.  I suppose some might argue that less is more; others that Apple’s consumer-focused offerings simply don’t provide for the same depth of coverage to a business audience.  Microsoft has a serious focus on the cloud and  a paucity of coverage on client-side Web standards; Apple has virtually no cloud offering at all.  Again, this reflects each tech titan’s go-to-market strategy. My own take is that employees of each company should attend the other’s event.  The amount of mutual exclusivity in content may make sense in terms of corporate philosophy, but the reality is that each company could stand to diversify into the other’s territory, at least somewhat. My own talk at Tech Ed will focus on competitive analysis around Microsoft’s BI products.  Apple does not today figure into that analysis. Maybe one day it will.

    Read the article

  • The hidden cost of interrupting knowledge workers

    - by Piet
    The November issue of pragpub has an interesting article on interruptions. The article is written by Brian Tarbox, who also mentions the article on his blog. I like the subtitle: ‘Simple Strategies for Avoiding Dumping Your Mental Stack’. Brian talks about the effective cost of interrupting a ‘knowledge worker’, often with trivial questions or distractions. In the eyes of the interruptor, the interruption only costs the time the interrupted had to listen to the question and give an answer. However, depending on what the interrupted was doing at the time, getting fully immersed in their task again might take up to 15-20 minutes. Enough interruptions might even cause a knowledge worker to mentally call it a day. According to this article interruptions can consume about 28% of a knowledge worker’s time, translating in a $588 billion loss for US companies each year. Looking for a new developer to join your team? Ever thought about optimizing your team’s environment and the way they work instead? Making non knowledge workers aware You can’t. Well, I haven’t succeeded yet. And believe me: I’ve tried. When you’ve got a simple way to really increase your productivity (’give me 2 hours of uninterrupted time a day’) it wouldn’t be right not to tell your boss or team-leader about it. The problem is: only productive knowledge workers seem to understand this. People who don’t fall into this category just seem to think you’re joking, being arrogant or anti-social when you tell them the interruptions can really have an impact on your productivity. Also, knowledge workers often work in a very concentrated mental state which is described here as: It is the same mindfulness as ecstatic lovemaking, the merging of two into a fluidly harmonious one. The hallmark of flow is a feeling of spontaneous joy, even rapture, while performing a task. Yes, coding can be addictive and if you’re interrupting a programmer at the wrong moment, you’re effectively bringing down a junkie from his high in just a few seconds. This can result in seemingly arrogant, almost aggressive reactions. How to make people aware of the production-cost they’re inflicting: I’ve been often pondering that question myself. The article suggests that solutions based on that question never seem to work. To be honest: I’ve never even been able to find a half decent solution for this question. People who are not in this situations just don’t understand the issue, no matter how you try to explain it. Fun (?) thing I’ve noticed: Programmers or IT people in general who don’t get this are often the kind of people who just don’t get anything done. Interrupt handling (interruption management?) IRL Have non-urgent questions handled in a non-interruptive way It helps a bit to educate people into using non-interruptive ways to ask questions: “duh, I have no idea, but I’m a bit busy here now could you put it in an email so I don’t forget?”. Eventually, a considerable amount of people will skip interrupting you and just send an email right away. Some stubborn-headed people however will continue to just interrupt you, saying “you’re 10 meters from my desk, why can’t we just talk?”. Just remember to disable your email notifications, it can be hard to resist opening your email client when you know a new email just arrived. Use Do Not Disturb signals When working in a group of programmers, often the unofficial sign you can only be interrupted for something important is to put on headphones. 
And when the environment is quiet enough, often people aren’t even listening to music. Otherwise music can help to block the indirect distractions (someone else talking on the phone or tapping their feet). You might get a “they’re all just surfing and listening to music”-reaction from outsiders though. Peopleware talks about a team where the no-interruption sign was placing a shawl on the desk. If I remember correctly, I am unable to locate my copy of this really excellent must-read book. If you have all standardized on the same IM tool, maybe that tool has a ‘do not disturb’ setting. Also some phone-systems have a ‘DND’ (do not disturb) setting. Hide Brian offers a number of good suggestions, some obvious like: hide away somewhere they can’t find you. Not sure how long it’ll be till someone thinks you’re just taking a nap somewhere though. Also, this often isn’t possible or your boss might not understand this. And if you really get caught taking a nap, make sure to explain that your were powernapping. Counter-act interruptions Another suggestion he offers is when you’re being interrupted to just hold up your hand, blocking the interruption, and at least giving you time to finish your sentence or your block/line of code. The last suggestion works more as a way to make it obvious to the interruptor that they really are interrupting your work and to offload some of the cost on the interruptor. In practice, this can also helps you cool down a bit so you don’t start saying nasty things to the interruptor. Unfortunately I’ve sometimes been confronted with people who just ignore this signal and keep talking, as if they’re sure that whatever they’ve got to say is really worth listening to and without a doubt more important than anything you might be doing. This behaviour usually leaves me speechless (not good when someone just asked a question). I’ve noticed that these people are usually also the first to complain when being interrupted themselves. They’re generally not very liked as colleagues, so try not to imitate their behaviour. TDD as a way to minimize recovery time I don’t like Test Driven Development. Mainly for only one reason: It interrupts flow. At least, that’s what it does for me, but maybe I’m just not grown used to TDD yet. BUT a positive effect TDD has on me when I have to work in an interruptive environment and can’t really get into the ‘flow’ (also supposedly called ‘the zone’ by software developers, although I’ve never heard it 1st hand), TDD helps me to concentrate on the tasks at hand and helps me to get back at work after an interruption. I feel when using TDD, I can get by without the need for being totally ‘in’ the project and I can be reasonably productive without obtaining ‘flow’. Do you have a suggestion on how to make people aware of the concept of ‘flow’ and the cost of interruptions? (without looking like an arrogant ass or a weirdo)

    Read the article

  • HDMI video connection cuts top and bottom borders of screen

    - by Luis Alvarado
Ok, this is an extension of another problem I had with a VGA connection and an Nvidia GeForce GT 440 card. Here goes the explanation of this particular problem: I have a Soneview 32" TV. This TV has many connections, including VGA (first reason I bought it), HDMI (second reason, but I did not have an HDMI cable at that time) and DVI. I have had this TV for a little over a month now; actually I got it to celebrate the release of Ubuntu 11.10 and started using it exactly on that date (I know, too much of a fan there, but hey, I like geek stuff). I started using it with the VGA cable. After 2 weeks I bought an Nvidia GT440 card. The previous 9500GT was working correctly with no problems whatsoever. I installed the GT440 and the first problem that I encountered using this latest card is mentioned here: Nvidia GT 440 black screen problem when loading lightdm greeter. The solution to this problem was to disconnect and then reconnect the VGA cable. This would result in the screen showing me the lightdm screen for my login. If I did not disconnect and reconnect the cable I could be there forever thinking that there was no video signal. I got tired of looking for answers that did not work and for solutions that made me literally have to install Ubuntu again. I just went and bought an HDMI cable and swapped the VGA one for it. It worked and I did not have to disconnect/reconnect the cable, but now I have this problem at any resolution. My normal resolution is 1920x1080 (this TV is 1080HD), so over VGA I could use this resolution with no problem, but over HDMI I am getting the borders cut off. Here is a pic: As you can see from the pic, the Launcher icons show less than 50% of their width. Forget about the top and bottom parts; I can access them with the mouse but I cannot see them on the screen. It is like they are outside of the TV's view. Basically there are around 20 to 30 pixels gone from all sides. I searched around and ended up running xrandr --verbose to see what it could detect from the TV. 
I got this: cyrex@cyrex:~$ xrandr --verbose xrandr: Failed to get size of gamma for output default Screen 0: minimum 320 x 175, current 1920 x 1080, maximum 1920 x 1080 default connected 1920x1080+0+0 (0x164) normal (normal) 0mm x 0mm Identifier: 0x163 Timestamp: 465485 Subpixel: unknown Clones: CRTC: 0 CRTCs: 0 Transform: 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 filter: 1920x1080 (0x164) 103.7MHz *current h: width 1920 start 0 end 0 total 1920 skew 0 clock 54.0KHz v: height 1080 start 0 end 0 total 1080 clock 50.0Hz 1920x1080 (0x165) 105.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 55.1KHz v: height 1080 start 0 end 0 total 1080 clock 51.0Hz 1920x1080 (0x166) 107.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 56.2KHz v: height 1080 start 0 end 0 total 1080 clock 52.0Hz 1920x1080 (0x167) 109.9MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 57.2KHz v: height 1080 start 0 end 0 total 1080 clock 53.0Hz 1920x1080 (0x168) 112.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 58.3KHz v: height 1080 start 0 end 0 total 1080 clock 54.0Hz 1920x1080 (0x169) 114.0MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 59.4KHz v: height 1080 start 0 end 0 total 1080 clock 55.0Hz 1680x1050 (0x16a) 98.8MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 58.8KHz v: height 1050 start 0 end 0 total 1050 clock 56.0Hz 1680x1050 (0x16b) 100.5MHz h: width 1680 start 0 end 0 total 1680 skew 0 clock 59.9KHz v: height 1050 start 0 end 0 total 1050 clock 57.0Hz 1600x1024 (0x16c) 95.0MHz h: width 1600 start 0 end 0 total 1600 skew 0 clock 59.4KHz v: height 1024 start 0 end 0 total 1024 clock 58.0Hz 1440x900 (0x16d) 76.5MHz h: width 1440 start 0 end 0 total 1440 skew 0 clock 53.1KHz v: height 900 start 0 end 0 total 900 clock 59.0Hz 1360x768 (0x171) 65.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 48.4KHz v: height 768 start 0 end 0 total 768 clock 63.0Hz 1360x768 (0x172) 66.8MHz h: width 1360 start 0 end 0 total 1360 skew 0 clock 49.2KHz v: height 768 start 0 end 0 total 768 clock 64.0Hz 1280x1024 (0x173) 85.2MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.6KHz v: height 1024 start 0 end 0 total 1024 clock 65.0Hz 1280x960 (0x176) 83.6MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 65.3KHz v: height 960 start 0 end 0 total 960 clock 68.0Hz 1280x960 (0x177) 84.8MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 66.2KHz v: height 960 start 0 end 0 total 960 clock 69.0Hz 1280x720 (0x178) 64.5MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 50.4KHz v: height 720 start 0 end 0 total 720 clock 70.0Hz 1280x720 (0x179) 65.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.1KHz v: height 720 start 0 end 0 total 720 clock 71.0Hz 1280x720 (0x17a) 66.4MHz h: width 1280 start 0 end 0 total 1280 skew 0 clock 51.8KHz v: height 720 start 0 end 0 total 720 clock 72.0Hz 1152x864 (0x17b) 72.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.1KHz v: height 864 start 0 end 0 total 864 clock 73.0Hz 1152x864 (0x17c) 73.7MHz h: width 1152 start 0 end 0 total 1152 skew 0 clock 63.9KHz v: height 864 start 0 end 0 total 864 clock 74.0Hz ....Many Resolutions later... 
320x200 (0x1d1) 10.2MHz h: width 320 start 0 end 0 total 320 skew 0 clock 31.8KHz v: height 200 start 0 end 0 total 200 clock 159.0Hz 320x175 (0x1d2) 9.0MHz h: width 320 start 0 end 0 total 320 skew 0 clock 28.0KHz v: height 175 start 0 end 0 total 175 clock 160.0Hz 1920x1080 (0x1dd) 333.8MHz h: width 1920 start 0 end 0 total 1920 skew 0 clock 173.9KHz v: height 1080 start 0 end 0 total 1080 clock 161.0Hz If it helps, the Refresh Rate at 1920x1080 is 60. There is a flickering effect at this resolution using HDMI but not VGA which I imagine is related to the borders cut off issue am asking here. I have also done the following but this will only solve the problem on lower resolutions than 1920x1080 or on others TV (My father has a Sony TV where this problem is also solved): NVIDIA WAY Go to Nvidia-Settings and there will be an option that will have more features if a HDMI cable is connected. In the next pic the option is DFP-1 (CNDLCD) but this name changes depending on what device the PC is connected to: Uncheck Force Full GPU Scaling What this will do for resolutions LOWER than 1920x1080 (At least in my case) is solve the flickering problem and fix the borders cut by the monitor. Save to Xorg.conf file the changes made after changing to a resolution acceptable to your eyes. TV WAY If you TV has OSD Menu and this menu has options for scanning the screen resolution or auto adjusting to it, disable them. Specifically the option about SCAN. If you have an option for AV Mode disable it. Basically disable any option that needs to scan and scale the resolution. Test one by one. In the case of my father's TV this did it. In my case, the Nvidia solved it for lower resolutions. NOTE: In the case this is not solved in the next couple of weeks I will add this as the answer but take into consideration that the issue is still active with 1920x1080 resolutions.
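For completeness, overscan can sometimes also be compensated from the Ubuntu side with xrandr's transform matrix. This is only a sketch, not a confirmed fix for this card and driver: the output name ("default" in the xrandr output above) and the scale/offset values are guesses that need tuning, and the proprietary NVIDIA driver may ignore transforms entirely, in which case the nvidia-settings and TV OSD routes described above are the ones to use.

# Sketch only: shrink the picture by roughly 5% and shift it toward the
# top-left corner to pull the hidden edges back inside the visible area.
# Tune the 0.95 factors and the -24 offsets to your own overscan amount.
xrandr --output default --transform 0.95,0,-24,0,0.95,-24,0,0,1

# Undo the experiment by restoring the identity transform.
xrandr --output default --transform none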

    Read the article

  • Software Architecture: Quality Attributes

Quality is what all software engineers should strive for when building a new system or adding new functionality. Dictionary.com ambiguously defines quality as a grade of excellence. Unfortunately, quality must be defined within the context of a situation, in that each engineer must extract quality attributes from a project’s requirements. Because quality is defined by project requirements, the meaning of quality changes constantly based on the project.
Software architecture factors that indicate relevance and effectiveness
The relevance and effectiveness of an architecture can vary based on the context in which it was conceived and the quality attributes it is required to meet. Typically, when evaluating an architecture for a specific system regarding relevance and effectiveness, the following questions should be asked:
Does the architectural concept meet the needs of the system for which it was designed?
Out of the competing architectures for a system, which one is the most suitable?
Looking at the first question, a system that answers yes must meet all of its quality goals. This means that it consistently meets or exceeds the performance goals for the system and meets all the other required system attributes based on the system’s requirements. The suitability of a system is based on several factors; in order for a project to be suitable, the necessary resources must be available to complete the task. Standard project resources: money, trained staff, time.
Life cycle factors that affect the system and design
The development life cycle used on a project can drastically affect how a system’s architecture is created, as well as influence its design. When the software development life cycle (SDLC) is run as a waterfall, each phase must be completed before the next can begin, and changes to a system’s architecture are not allowed after that phase is completed. This can lead to major system issues when the architecture is not optimal because of missed quality attributes, which can occur, for example, when a project has poor requirements or makes misguided architectural decisions. Once the architecture phase is complete, the concepts established in this phase move on to the design phase, which is bound to use the concepts and guidelines defined in the previous phase regardless of any quality attributes that were missed. If any issues arise during this phase regarding the selected architectural concepts, they cannot be corrected during the current project. This directly affects the design of the system, because the proper qualities required for the project were not considered when the architectural concepts were approved. When this is identified, nothing can be done to fix the architectural issues, and the system design must use the existing architectural concepts regardless of the missing quality properties, because the architectural concepts for the project cannot be altered. The decisions made in the design phase then proceed down to the implementation phase, where the actual system is coded based on the approved architectural concepts established in the architecture phase, regardless of their architectural quality. 
Conversely, projects using a more iterative or agile methodology to implement a system have more flexibility to correct architectural decisions based on missing quality attributes. Because each phase of the SDLC is executed more than once, any issues identified in the architecture of a system can be corrected in the next architecture phase. The corresponding changes are then carried into the following design phase, so that when the project is completed the optimal architectural and design decisions are applied to the solution.
Architecture factors that indicate functional suitability
Systems that have functional shortcomings do not provide the proper functionality based on the project’s driving quality attributes. What this means in plain English is that the system does not live up to what is required of it by the stakeholders, as identified by the missing quality attributes and requirements. One way to prevent functional shortcomings is to test the project’s architecture, design, and implementation against the project’s driving quality attributes to ensure that none of the attributes were missed in any of the phases. Another way to ensure a system has functional suitability is to verify that all its requirements are fully articulated so that there is no chance of misconceptions or misinterpretations by any stakeholder. This helps prevent issues in interpreting the system requirements during the initial architectural concept phase, the design phase and the implementation phase.
Consider the applicability of other architectural models
When considering an architectural model for a project, it is also important to consider alternative architectural models to ensure that the selected model will meet the system’s required functionality and quality attributes. I recently talked about a project I was working on, and a coworker suggested a different architectural approach that I had never considered. The new model allows for the same functionality offered by the existing model, but makes for a higher quality project because it fulfills more quality attributes. It is always important to seek alternatives prior to committing to an architectural model.
Factors used to identify high-risk components
A high-risk component can be defined as a component that fulfills two or more quality attributes for a system. An example of this can be seen in a web application that utilizes a remote database. One high-risk component in this system is the TCP/IP component, because it allows HTTP connections to be handled by the web server and also allows the server to connect to a remote database server so that it can import data into the system. This component supports both the data assurance quality attribute and the accessibility quality attribute, because the system is available on the network. If for some reason the TCP/IP component were to fail, the web application would fail on both quality attributes, accessibility and data assurance, in that the web site would not be accessible and data could not be updated as needed.
Summary
As stated previously, quality is what all software engineers should strive for when building a new system or adding new functionality. The quality of a system can be directly determined by how closely its implementation matches its desired quality attributes. 
One way to ensure a higher quality system is to enforce that all project requirements are fully articulated so that no assumptions or misunderstandings can be made by any of the stakeholders. By doing this, a system has a better chance of becoming a high quality system based on its quality attributes.

    Read the article

  • Globally Handling Request Validation In ASP.NET MVC

    - by imran_ku07
   Introduction:           Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) are among the most dangerous and most famous security issues affecting web applications; OWASP regards XSS as the number one security issue on the Web. Both ASP.NET Web Forms and ASP.NET MVC pay a great deal of attention to making applications built with ASP.NET as secure as possible. By default they will throw the exception 'A potentially dangerous XXX value was detected from the client' when they see < followed by an exclamation mark (like <!), < followed by the letters a through z (like <s), or & followed by a pound sign (like &#123) as part of the query string, posted form or cookie collection. This is good for a lot of applications, but it is not always what you want. Many applications need to allow users to enter HTML tags, for example applications which use a Rich Text Editor. If you are using Web Forms, you can allow users to enter these tags by simply setting validateRequest="false" inside the <pages> element of your Web.config application configuration file. This globally disables request validation. But request handling in ASP.NET MVC is different from ASP.NET Web Forms, so to disable request validation globally in ASP.NET MVC you would have to put ValidateInputAttribute on every controller. This becomes painful if you have hundreds of controllers. Therefore in this article I will present a very simple way to handle request validation globally through web.config.   Description:           Before showing how to do this, it is worth seeing why validateRequest in the Page directive and in web.config does not work in ASP.NET MVC. Request handling in ASP.NET Web Forms and ASP.NET MVC is different. In Web Forms the page handler (an HttpHandler) checks the posted form, query string and cookie collection during the Page's ProcessRequest method, while in MVC request validation occurs when the ActionInvoker calls the action. Just look at the stack trace of both frameworks.   
ASP.NET MVC Stack Trace:     System.Web.HttpRequest.ValidateString(String s, String valueName, String collectionName) +8723114   System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, String collectionName) +111   System.Web.HttpRequest.get_Form() +129   System.Web.HttpRequestWrapper.get_Form() +11   System.Web.Mvc.ValueProviderDictionary.PopulateDictionary() +145   System.Web.Mvc.ValueProviderDictionary..ctor(ControllerContext controllerContext) +74   System.Web.Mvc.ControllerBase.get_ValueProvider() +31   System.Web.Mvc.ControllerActionInvoker.GetParameterValue(ControllerContext controllerContext, ParameterDescriptor parameterDescriptor) +53   System.Web.Mvc.ControllerActionInvoker.GetParameterValues(ControllerContext controllerContext, ActionDescriptor actionDescriptor) +109   System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +399   System.Web.Mvc.Controller.ExecuteCore() +126   System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +27   ASP.NET Web Form Stack Trace:    System.Web.HttpRequest.ValidateString(String s, String valueName, String collectionName) +3213202   System.Web.HttpRequest.ValidateNameValueCollection(NameValueCollection nvc, String collectionName) +108   System.Web.HttpRequest.get_QueryString() +119   System.Web.UI.Page.GetCollectionBasedOnMethod(Boolean dontReturnNull) +2022776   System.Web.UI.Page.DeterminePostBackMode() +60   System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +6953   System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +154   System.Web.UI.Page.ProcessRequest() +86                        Since the first responder of request in ASP.NET MVC is the controller action therefore it will check the posted values during calling the action. That's why web.config's requestValidate not work in ASP.NET MVC.            So let's see how to handle this globally in ASP.NET MVC. First of all you need to add an appSettings in web.config. <appSettings>    <add key="validateRequest" value="true"/>  </appSettings>              I am using the same key used in disable request validation in Web Form. Next just create a new ControllerFactory by derving the class from DefaultControllerFactory.     public class MyAppControllerFactory : DefaultControllerFactory    {        protected override IController GetControllerInstance(Type controllerType)        {            var controller = base.GetControllerInstance(controllerType);            string validateRequest=System.Configuration.ConfigurationManager.AppSettings["validateRequest"];            bool b;            if (validateRequest != null && bool.TryParse(validateRequest,out b))                ((ControllerBase)controller).ValidateRequest = bool.Parse(validateRequest);            return controller;        }    }                         Next just register your controller factory in global.asax.        protected void Application_Start()        {            //............................................................................................            ControllerBuilder.Current.SetControllerFactory(new MyAppControllerFactory());        }              This will prevent the above exception to occur in the context of ASP.NET MVC. 
But if you are using the default WebFormViewEngine then you also need to set validateRequest="false" inside the <pages> element of your web.config file. Now when you run your application you will see the effect of the validateRequest appSetting. Also note that a ValidateInputAttribute placed on an action or controller will always override this setting.    Summary:          Request validation is a great security feature in ASP.NET, but sometimes there is a need to disable it entirely. In this article I showed you how to disable it globally in ASP.NET MVC, and I also explained the difference between request validation in Web Forms and ASP.NET MVC. Hopefully you will enjoy this.
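As a quick sanity check of the setup above, a throwaway controller along these lines can be used; the names are hypothetical and only meant to exercise the factory. With validateRequest set to false in appSettings, posting markup such as <b>hi</b> to Create no longer triggers the 'potentially dangerous value' exception, while the attribute on Edit demonstrates the per-action override mentioned above.

using System.Web.Mvc;

public class CommentsController : Controller
{
    // Inherits the global setting applied by MyAppControllerFactory.
    [HttpPost]
    public ActionResult Create(string body)
    {
        // With request validation off, always encode user input on output.
        return Content(Server.HtmlEncode(body));
    }

    // A ValidateInput attribute on the action still wins over the factory default.
    [HttpPost]
    [ValidateInput(true)]
    public ActionResult Edit(string body)
    {
        return Content(Server.HtmlEncode(body));
    }
}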

    Read the article

  • Customize the Five Windows Folder Templates

    - by Mark Virtue
    Are you’re particular about the way Windows Explorer presents each folder’s contents? Here we show you how to take advantage of Explorer’s built-in templates, which cuts down the time it takes to do customizations. Note: The techniques in this article apply to Windows XP, Vista, and Windows 7. When opening a folder for the first time in Windows Explorer, we are presented with a standard default view of the files and folders in that folder. It may be that the items are presented are perfectly fine, but on the other hand, we may want to customize the view.  The aspects of it that we can customize are the following: The display type (list view, details, tiles, thumbnails, etc) Which columns are displayed, and in which order The widths of the visible columns The order in which the files and folders are sorted Any file groupings Thankfully, Windows offers us a shortcut.  A particular folder’s settings can be used as a “template” for other, similar folders.  In fact, we can store up to five separate sets of folder presentation configurations.  Once we save the settings for a particular template, that template can then be applied to other folders. Customize Your First Folder We’ll start by setting up the first of our templates – the default one.  Once we create this template and apply it, the vast majority of the folders in our file system will change to match it, so it’s important that we set it up very carefully.  The first step in creating and applying the template is to customize one folder with the settings that all the rest will have. Choose a folder that is typical of the folders that you wish to have this default template.  Select it in Windows Explorer.  To ensure that it is a suitable candidate, right-click the folder name and select Properties, then go to the Customize tab.  Ensure that this folder is marked as General Items.  If it is not, either choose a different folder or select General Items from the list. Click OK.  Now we’re ready to customize our first folder. Changing the way one single folder is presented is straightforward.  We start with the folder’s display type.  Click the Change your view button in the top-right corner of every Explorer window. Each time you click the button, the folder’s view cycles to the next view type.  Alternatively you can click the little down-arrow next to the button to see all the display types at once, and select the one you want. Click the view you want, or drag the slider next to the one you want. If you have chosen Details, then the next thing you may wish to change is which columns are displayed, and the order of these.  To choose which columns are displayed, simply right-click on any column heading.  A list of the columns currently being display appears. Simply uncheck a column if you don’t want it displayed, and check the columns that you want displayed.  If you want some information displayed about your files that is not listed here, then click the More… button for a full list of file attributes. There’s a lot of them! To change the order of the columns that are currently being displayed, simply click on a column heading and drag it to where you think it should be.  To change the width of a column, click the line that represents the right-hand edge of the column and drag it left or right. To sort by a column, click once on that column.  To reverse the sort-order, click that same column again. 
To change the groupings of the files in the folder, right-click in a blank area of the folder, select Group by, and select the appropriate column. Apply This Default Template to All Similar Folders Once you have the folder exactly the way you want it, we now use this folder as our default template for most of the folders in our file system.  To do this, ensure that you are still in the folder you just customized, and then, from the Organize menu in Explorer, click on Folder and search options. Then select the View tab and click the Apply to Folders button. After you’ve clicked OK, visit some of the other folders in your file system.  You should see that most have taken on these new settings. What we’ve just done, in effect, is we have customized the General Items template.  This is one of five templates that Windows Explorer uses to display folder contents.  The five templates are called (in Windows 7): General Items Documents Pictures Music Videos When a folder is opened, Windows Explorer examines the contents to see if it can automatically determine which folder template to use to display the folder contents.  If it is not obvious that the folder contents falls into any of the last four templates, then Windows Explorer chooses the General Items template.  That’s why most of the folders in your file system are shown using the General Items template. Changing the Other Four Templates If you want to adjust the other four templates, the process is very similar to what we’ve just done.  If you wanted to change the “Music” template, for example, the steps would be as follows: Select a folder that contains music items Apply the existing Music template to the folder (even if it doesn’t look like you want it to) Customize the folder to your personal preferences Apply the new template to all “Music” folders A fifth step would be:  When you open a folder that contains music items but is not automatically displayed using the Music template, you manually select the Music template for that folder. First, select a folder that contains music items.  It will probably be displayed using the existing Music template: Next, ensure that it is using the Music template.  If it’s not, then manually select the Music template. Next, customize the folder to suit your personal preferences (here we’ve added a couple of columns, and sorted by Artist). Now we can set this view to be our Music template.  Choose Organize, then the View tab, and click the Apply to Folders button. Note: The only folders that will inherit these settings are the ones that are currently (or will soon be) using the Music template. Now, if you have any folder that contains music items, and you want it to inherit all of these settings, then right-click the folder name, choose Properties, and select that this folder should use the Music template.  You can also cehck the box entitled Also apply this template to all subfolders if you want to save yourself even more time with all the sub-folders. Conclusion It’s neat to be able to set up templates for your folder views like this.  It’s a shame that Microsoft didn’t take the concept just a little further and allow you to create as many templates as you want. 

    Read the article

  • The Proper Use of the VM Role in Windows Azure

    - by BuckWoody
    At the Professional Developer’s Conference (PDC) in 2010 we announced an addition to the Computational Roles in Windows Azure, called the VM Role. This new feature allows a great deal of control over the applications you write, but some have confused it with our full infrastructure offering in Windows Hyper-V. There is a proper architecture pattern for both of them. Virtualization Virtualization is the process of taking all of the hardware of a physical computer and replicating it in software alone. This means that a single computer can “host” or run several “virtual” computers. These virtual computers can run anywhere - including at a vendor’s location. Some companies refer to this as Cloud Computing since the hardware is operated and maintained elsewhere. IaaS The more detailed definition of this type of computing is called Infrastructure as a Service (Iaas) since it removes the need for you to maintain hardware at your organization. The operating system, drivers, and all the other software required to run an application are still under your control and your responsibility to license, patch, and scale. Microsoft has an offering in this space called Hyper-V, that runs on the Windows operating system. Combined with a hardware hosting vendor and the System Center software to create and deploy Virtual Machines (a process referred to as provisioning), you can create a Cloud environment with full control over all aspects of the machine, including multiple operating systems if you like. Hosting machines and provisioning them at your own buildings is sometimes called a Private Cloud, and hosting them somewhere else is often called a Public Cloud. State-ful and Stateless Programming This paradigm does not create a new, scalable way of computing. It simply moves the hardware away. The reason is that when you limit the Cloud efforts to a Virtual Machine, you are in effect limiting the computing resources to what that single system can provide. This is because much of the software developed in this environment maintains “state” - and that requires a little explanation. “State-ful programming” means that all parts of the computing environment stay connected to each other throughout a compute cycle. The system expects the memory, CPU, storage and network to remain in the same state from the beginning of the process to the end. You can think of this as a telephone conversation - you expect that the other person picks up the phone, listens to you, and talks back all in a single unit of time. In “Stateless” computing the system is designed to allow the different parts of the code to run independently of each other. You can think of this like an e-mail exchange. You compose an e-mail from your system (it has the state when you’re doing that) and then you walk away for a bit to make some coffee. A few minutes later you click the “send” button (the network has the state) and you go to a meeting. The server receives the message and stores it on a mail program’s database (the mail server has the state now) and continues working on other mail. Finally, the other party logs on to their mail client and reads the mail (the other user has the state) and responds to it and so on. These events might be separated by milliseconds or even days, but the system continues to operate. The entire process doesn’t maintain the state, each component does. This is the exact concept behind coding for Windows Azure. 
The stateless programming model allows amazing rates of scale, since the message (think of the e-mail) can be broken apart by multiple programs and worked on in parallel (like when the e-mail goes to hundreds of users), and only the order of re-assembling the work is important to consider. For the exact same reason, if the system makes copies of those running programs as Windows Azure does, you have built-in redundancy and recovery. It’s just built into the design. The Difference Between Infrastructure Designs and Platform Designs When you simply take a physical server running software and virtualize it either privately or publicly, you haven’t done anything to allow the code to scale or have recovery. That all has to be handled by adding more code and more Virtual Machines that have a slight lag in maintaining the running state of the system. Add more machines and you get more lag, so the scale is limited. This is the primary limitation with IaaS. It’s also not as easy to deploy these VM’s, and more importantly, you’re often charged on a longer basis to remove them. your agility in IaaS is more limited. Windows Azure is a Platform - meaning that you get objects you can code against. The code you write runs on multiple nodes with multiple copies, and it all works because of the magic of Stateless programming. you don’t worry, or even care, about what is running underneath. It could be Windows (and it is in fact a type of Windows Server), Linux, or anything else - but that' isn’t what you want to manage, monitor, maintain or license. You don’t want to deploy an operating system - you want to deploy an application. You want your code to run, and you don’t care how it does that. Another benefit to PaaS is that you can ask for hundreds or thousands of new nodes of computing power - there’s no provisioning, it just happens. And you can stop using them quicker - and the base code for your application does not have to change to make this happen. Windows Azure Roles and Their Use If you need your code to have a user interface, in Visual Studio you add a Web Role to your project, and if the code needs to do work that doesn’t involve a user interface you can add a Worker Role. They are just containers that act a certain way. I’ll provide more detail on those later. Note: That’s a general description, so it’s not entirely accurate, but it’s accurate enough for this discussion. So now we’re back to that VM Role. Because of the name, some have mistakenly thought that you can take a Virtual Machine running, say Linux, and deploy it to Windows Azure using this Role. But you can’t. That’s not what it is designed for at all. If you do need that kind of deployment, you should look into Hyper-V and System Center to create the Private or Public Infrastructure as a Service. What the VM Role is actually designed to do is to allow you to have a great deal of control over the system where your code will run. Let’s take an example. You’ve heard about Windows Azure, and Platform programming. You’re convinced it’s the right way to code. But you have a lot of things you’ve written in another way at your company. Re-writing all of your code to take advantage of Windows Azure will take a long time. Or perhaps you have a certain version of Apache Web Server that you need for your code to work. In both cases, you think you can (or already have) code the the software to be “Stateless”, you just need more control over the place where the code runs. That’s the place where a VM Role makes sense. 
Recap: Virtualizing servers alone has limitations of scale, availability and recovery. Microsoft’s offering in this area is Hyper-V and System Center, not the VM Role. The VM Role is still used for running stateless code, just like the Web and Worker Roles, with the exception that it allows you more control over the environment where that code runs.
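To make the stateless pattern behind these roles concrete, here is a minimal Worker Role sketch. It assumes the 1.x-era Microsoft.WindowsAzure.StorageClient library, and the queue name and configuration setting name are placeholders. Each queue message is an independent unit of work, so any running instance of the role can pick it up, and a crashed instance simply leaves the message to reappear for another instance; no state lives in the role itself.

using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class ImageWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // "DataConnectionString" and "work-items" are placeholder names.
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("work-items");
        queue.CreateIfNotExist();

        while (true)
        {
            var message = queue.GetMessage();      // hidden from other instances while being processed
            if (message == null)
            {
                Thread.Sleep(1000);                // nothing to do; poll again
                continue;
            }

            ProcessWorkItem(message.AsString);     // all state travels inside the message itself
            queue.DeleteMessage(message);          // delete only once the work has completed
        }
    }

    private void ProcessWorkItem(string payload)
    {
        // Do the actual (stateless) work here.
    }
}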

    Read the article

  • User Experience Highlights in PeopleSoft and PeopleTools: Direct from Jeff Robbins

    - by mvaughan
    By Kathy Miedema, Oracle Applications User Experience  This is the fifth in a series of blog posts on the user experience (UX) highlights in various Oracle product families. The last posted interview was with Nadia Bendjedou, Senior Director, Product Strategy on upcoming Oracle E-Business Suite user experience highlights. You’ll see themes around productivity and efficiency, and get an early look at the latest mobile offerings coming through these product lines. Today’s post is on the user experience in PeopleSoft and PeopleTools. To learn more about what’s ahead, attend PeopleSoft or PeopleTools OpenWorld presentations.This interview is with Jeff Robbins, Senior Director, PeopleSoft Development. Jeff Robbins Q: How would you describe the vision you have for the user experience of PeopleSoft?A: Intuitive – Specifically, customers use PeopleSoft to help their employees do their day-to-day work, and the UI (user interface) has been helpful and assistive in that effort. If it’s not obvious what they need to do a task, then the UI isn’t working. So the application needs to make it simple for users to find information they need, complete a task, do all the things they are responsible for, and it really helps when the UI just makes sense. Productive – PeopleSoft is a tool used to support people to do their work, and a lot of users are measured by how much work they’re able to get done per hour, per day, etc. The UI needs to help them be as productive as possible, and can’t make them waste time or energy. The UI needs to reflect the type of work necessary for a task -- if it's data entry, the UI needs to assist the user to get information into the system. For analysts, the UI needs help users assess or analyze information in a particular way. Innovative – The concept of the UI being innovative is something we’ve been working on for years. It’s not just that we want to be seen as innovative, the fact is that companies are asking their employees to do more than they’ve ever asked before. More often companies want to roll out processes as employee or manager self-service, where an employee is responsible to review and maintain their own data. So we’ve had to reinvent, and ask,  “How can we modify the ways an employee interacts with our applications so that they can be more productive and efficient – even with tasks that are entirely unfamiliar?”  Our focus on innovation has forced us to design new ways for users to interact with the entire application.Q: How are the UX features you have delivered so far resonating with customers?  A: Resonating very well. We’re hearing tremendous responses from users, managers, decision-makers -- who are very happy with the improved user experience. Many of the individual features resonate well. Some have really hit home, others are better than they used to be but show us that there’s still room for improvement.A couple innovations really stand out; features that have a significant effect on how users interact with PeopleSoft.First, the deployment of PeopleSoft in a way that’s more like a consumer website with the PeopleSoft Home page and Dashboards.  This new approach is very web-centric, where users feel they’re coming to a website rather than logging into an enterprise application.  There’s lots of information from all around the organization collected in a way that feels very familiar to users. In order to do your job, you can come to this web site rather than having to learn how to log into an application and figure out a complicated menu. 
Companies can host these really rich web sites for employees that are home pages for accessing critical tasks and information. The UI elements of incorporating search into the whole navigation process is another hit. Rather than having to log in and choose a task from a menu, users come to the web site and begin a task by simply searching for data: themselves, another employee, a customer record, whatever.  The search results include the data along with a set of actions the user might take, completely eliminating the need to hunt through a complicated system menu. Search-centric navigation is really sitting well with customers who are trying to deploy an intuitive set of systems. Q: Are any UX highlights more popular than you expected them to be?  A: We introduced a feature called Pivot Grid in the last release, which is a combination of an interactive grid, like an Excel Pivot Table, along with a dynamic visual chart that automatically graphs the data. I wasn’t certain at first how extensively this would be used. It looked like an innovative tool, but it wasn’t clear how it would be incorporated in business process applications. The fact is that everyone who sees Pivot Grids is thrilled with that kind of interactivity.  It reflects the amount of analytical thinking customers are asking employees to do. Employees can’t just enter data any more. They must interact with it, analyze it, and make decisions. Pivot Grids fit into this way of working. Q: What can you tell us about PeopleSoft’s mobile offerings?A: A lot of customers are finding that mobile is the chief priority in their organization.  They tell us they want their employees to be able to access company information from their mobile devices.  Of course, not everyone has the same requirements, so we’re working to make sure we can help our customers accomplish what they’re trying to do.  We’ve already delivered a number of mobile features.  For instance, PeopleSoft home pages, dashboards and workcenters all work well on an iPad, straight out of the box.  We’ve delivered a number of key functions and tasks for mobile workers – those who are responsible for using a mobile device to manage inventory, for example.  Customers tell us they also need a holistic strategy, one that allows their employees to access nearly every task from a mobile device.  While we don’t expect users to do extensive data entry from their smartphone, it makes sense that they have access to company information and systems while away from their desk.  That’s where our strategy is going now.  We plan to unveil a number of new mobile offerings at OpenWorld.  Some will be available then, some shortly after. Q: What else are you working on now that you think is going to be exciting to customers at Oracle OpenWorld?A: Our next release -- the big thing is PeopleSoft 9.2, and we’ll be talking about the huge amount of work that’s gone into the next versions. A new toolset, 8.53, will be coming, and there’s a lot to talk about there, and the next generation of PeopleSoft 9.2.  We have a ton of new stuff coming.Q: What do you want PeopleSoft customers to know? A: We have been focusing on the user experience in PeopleSoft as a very high priority for the last 4 years, and it’s had interesting effects. One thing is that the application is better, more usable.  We’ve made visible improvements. Another aspect is that in customers’ minds, the PeopleSoft brand is being reinvigorated. Customers invested in PeopleSoft years ago, and then they weren’t sure where PeopleSoft was going.  
This investment in the UI and overall user experience keeps PeopleSoft current, innovative and fresh.  Customers  are able to take advantage of a lot of new features, even on the older applications, simply by upgrading their PeopleTools. The interest in that ability has been tremendous. Knowing they have a lot of these features available -- right now, that’s pretty huge. There’s been a tremendous amount of positive response, just on the fact that we’re focusing on the user experience. Editor’s note: For more on PeopleSoft and PeopleTools user experience highlights, visit the Usable Apps web site.To find out more about these enhancements at Openworld, be sure to check out these sessions: GEN8928     General Session: PeopleSoft Update and Product RoadmapCON9183     PeopleSoft PeopleTools Technology Roadmap CON8932     New Functional PeopleSoft PeopleTools Capabilities for the Line-of-Business UserCON9196     PeopleSoft PeopleTools Roadmap: Mobile ApplicationsCON9186     Case Study: Delivering a Groundbreaking User Interface with PeopleSoft PeopleTools

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files.

How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that’s off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depend on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

Let’s see how many VLFs we have in a brand new database.

USE master
GO
CREATE DATABASE VLF_Test ON
( NAME = VLF_Test,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
  SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 )
LOG ON
( NAME = VLF_Test_Log,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
  SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB );
GO
USE VLF_Test
GO
DBCC LOGINFO;

The results of this are, firstly, that a new database is created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let’s first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes, and you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25MB each.

Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

LOG ON
( NAME = VLF_Test_Log,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
  SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB );

With a bigger log file specified we get more VLFs. What if we make it bigger again?

LOG ON
( NAME = VLF_Test_Log,
  FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
  SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB );

This time we see even more VLFs are created within our log file. We now have our 5GB log file comprised of 16 VLFs of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:
If the file growth is lower than 64MB, 4 VLFs are created.
If the growth is between 64MB and 1GB, 8 VLFs are created.
If the growth is greater than 1GB, 16 VLFs are created.

Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB, so let’s add some data and see what happens.

USE vlf_test
GO
-- we need somewhere to put the data, so a table is in order
IF OBJECT_ID('A_Table') IS NOT NULL
    DROP TABLE A_Table
GO
CREATE TABLE A_Table
(
    Col_A int IDENTITY,
    Col_B CHAR(8000)
)
GO
-- Let's check the state of the log file
-- 4 VLFs found
EXECUTE ('DBCC LOGINFO');
GO
-- We can go ahead and insert some data and then check the state of the log file again
INSERT A_Table (col_b)
SELECT TOP 500 REPLICATE('a',2000)
FROM sys.columns AS sc, sys.columns AS sc2
GO
-- insert 500 rows and we get 22 VLFs
EXECUTE ('DBCC LOGINFO');
GO
-- Let's insert more rows
INSERT A_Table (col_b)
SELECT TOP 2000 REPLICATE('a',2000)
FROM sys.columns AS sc, sys.columns AS sc2
GO 10
-- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
EXECUTE ('DBCC LOGINFO');

Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it’s well worth keeping a check on this property.

How do we prevent excessive VLF creation? Creating the database with larger files and larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are (a consolidated sketch of these steps appears below):

1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
3. Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

* If you don’t know the file name of your log file, run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The re-size step can take quite a while.

This is already detailed far better than I can explain it by Kimberly Tripp in her blog post 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the list above.

Knowing when VLFs are being created
By complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live), Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away.
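For convenience, here is the remediation sequence above pulled into a single script. This is a minimal sketch only: the database name, log file logical name, backup path and target size are placeholders carried over from the steps above or invented for illustration, so substitute your own values and run it at a quiet time.

-- Minimal sketch of the remediation steps; all names, paths and sizes are placeholders
USE master;
GO
-- Step 1: back up the transaction log
BACKUP LOG YourDBName TO DISK = 'X:\Backups\YourDBName_log.trn'; -- hypothetical destination
GO
USE YourDBName;
GO
-- Step 2: shrink the log file to its smallest possible size
DBCC SHRINKFILE (FileNameOfTLogHere, TRUNCATEONLY);
GO
-- Step 3: grow the log back to a sensible size in one operation
ALTER DATABASE YourDBName
MODIFY FILE (NAME = FileNameOfTLogHere, SIZE = 4096MB); -- pick a size that suits your workload
GO
-- Confirm the new VLF count
DBCC LOGINFO;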
Hassle-free monitoring of VLFs
If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

Resources
MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

Disclosure
I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.
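As a footnote to the monitoring suggestion above: if you want to eyeball a VLF count by hand before setting up a custom metric, the sketch below is one way to do it. It is not the Stuart Ainsworth metric itself, just an illustration, and it assumes the DBCC LOGINFO column layout of the SQL Server 2008/2008 R2 builds used in the examples above (SQL Server 2012 and later add a RecoveryUnitId column, so the table definition would need adjusting there).

-- Minimal sketch: count the VLFs in the current database
-- Column layout assumed: SQL Server 2008 / 2008 R2
CREATE TABLE #loginfo
(
    FileId      int,
    FileSize    bigint,
    StartOffset bigint,
    FSeqNo      int,
    Status      int,
    Parity      int,
    CreateLSN   numeric(25, 0)
);

INSERT #loginfo
EXECUTE ('DBCC LOGINFO');

SELECT COUNT(*) AS VLFCount
FROM #loginfo;

DROP TABLE #loginfo;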

    Read the article

  • Week in Geek: IPv6 Capable Smartphones Compromise User Privacy Edition

    - by Asian Angel
This week we learned how to “clone a disk, resize static windows, and create system function shortcuts”, use 45 different services, sites, and apps to help read favorite sites, add MP3 support to Audacity (for saving in MP3 format), install a Wii game loader for easy backups and fast load times, create a Blue Screen of Death in any color, and more. Photo by legofenris.

Weekly News Links
Photo by The H Security.

IPv6: Smartphones compromise users’ privacy
Since version 4 of the iOS operating system, Apple’s iPhones, iPads and iPods have been capable of handling IPv6, and most Android devices have been capable since version 2.1. However, the operating systems transfer an ID that discloses information about their users.

Dumb phones can be attacked too
Much of the discussion of security threats to mobile phones revolves around smartphones, but researchers have found that less advanced “feature phones,” still used by the majority of people around the world, also are vulnerable to attack.

SCADA exploit – the dragon awakes
The recent publication of an exploit for KingView, a software package for visualising industrial process control systems, appears to be having an effect. Threatpost reports that both the Chinese vendor Wellintech and Chinese CERT (CN-CERT) have now reacted.

Sophos: Spam to get more malicious
Spam is becoming more malicious in nature as trickery tactics change in line with current user interests, according to a new report released Tuesday by Sophos.

Global spam traffic rebounds as Rustock wakes
Spam is on the rise after the Rustock botnet awoke from its Christmas slumber, according to Symantec.

Cracking WPA keys in the cloud
At the forthcoming Black Hat conference, blogger Thomas Roth plans to demonstrate how weak WPA PSKs can be cracked quickly and easily using Amazon’s Elastic Compute Cloud (EC2) service.

Microsoft Security Advisory: Vulnerability in Internet Explorer could allow remote code execution
Provides a link to more details about the vulnerability and shows a work-around/fix for the problem.

Adobe plans to make it easier to delete Flash cookies in web browsers
The new API, NPAPI:ClearSiteData, will allow Flash cookies – also known as Local Shared Objects (LSO) – to be deleted directly in the browser’s settings.

Firefox beta getting new database standard
The ninth beta version of Firefox is set to get support for a standard called IndexedDB that provides a database interface useful for offline data storage and other tasks needing information on a browser’s computer.

MetroPCS accused of blocking certain Net content
MetroPCS is violating the FCC’s recently approved Net neutrality rules by blocking certain Internet content, say several public interest groups.

Server and Tools chief Muglia to leave Microsoft in summer 2011
Microsoft veteran and Server & Tools Business (STB) President Bob Muglia is leaving Microsoft, according to an email that CEO Steve Ballmer sent to employees on January 10.

Report: DOJ nearing decision on Google-ITA
The U.S. Department of Justice is gearing up for a possible formal antitrust investigation into whether or not Google should be allowed to purchase travel software company ITA Software, according to a report.

South Korea says Google Street View broke law
Police in South Korea reportedly say Google broke the country’s law when its Street View service captured personal data from unsecure Wi-Fi networks.
The backlash over Google’s HTML5 video bet
Choosing strategies based on what you believe to be long-term benefits is generally a good idea when running a business, but if you manage to alienate the world in the process, the long term may become irrelevant.

Google answers critics on HTML5 Web video move
Google responded to critics of its decision to drop support for a popular HTML5 video codec by declaring that a royalty-supported standard for Web video will hold the Web hostage.

Random TinyHacker Links

A Special GiveAway: a Great Book & Great Security Software
The team from 7 Tutorials has a special giveaway running during the month of January: signed copies of their latest book, full 1-year licenses of BitDefender Internet Security 2011 and free 3-month trials for everyone willing to participate.

One Click Rooting For Android Phones
Here’s a nice tool that helps you root your Android phone effortlessly.

New Angry Birds Free version 1.0
Available in the App Store.

Google Code University
Learn programming at Google Code University.

Capture and Share Your Favorite Part Of a YouTube Video
SnipSnip.it lets you share only the part of the video that you like.

Super User Questions
More great questions and answers from this past week’s popular topics at Super User.
What are the Windows A: and B: drives used for?
Does OS X support linux-like features?
What is the easiest way to make a backup of an entire hard disk?
Will shifting from Wireless to Wired network result in better performance?
Is it legal to install Windows 7 Home Premium Retail inside VMware virtual machine?

How-To Geek Weekly Article Recap
Enjoy reading through our hottest articles from this past week.
The 50 Best Ways to Disable Built-in Windows Features You Don’t Want
The Best of CES (Consumer Electronics Show) in 2011
How to Upgrade Windows 7 Easily (And Understand Whether You Should)
The Worst of CES (Consumer Electronics Show) in 2011
The How-To Geek Guide to Audio Editing: Basic Noise Removal

One Year Ago on How-To Geek
More great articles from one year ago filled with helpful geeky goodness for you to enjoy.
Share Text & Images the Easy Way with JustPaste.it
Start Portable Firefox in Safe Mode
Firefox 3.6 Release Candidate Available, Here’s How to Fix Your Incompatible Extensions
Protect Your Computer from “Little Hands” with KidSafe
Lock Prying Eyes Out of Your Minimized Windows

Custom Crocheted Cylon-Cthulhu Hybrid
What happens when you let your Cylon Centurion figure and your crocheted Cthulhu spend too many lonely nights together? A Cylon-Cthulhu hybrid, of course! You can get your own from the Cthulhu Chick store over on Etsy. Note: This is not an ad… Ruth is a friend of ours, and this Cylon-Cthulhu hybrid makes the perfect guard for the new MVP trophy in our office.

The Geek Note
Whether it is a geeky indoor project or just getting outside, we hope that you and your families have a terrific fun-filled weekend! Remember to keep sending those great tips in to us at [email protected]. Photo by qwrrty.

    Read the article

  • Database continuous integration step by step

    - by David Atkinson
This post will describe how to set up basic database continuous integration using TeamCity to initiate the build process, SQL Source Control to put your database under source control, and the SQL Compare command line to keep a test database up to date. In my example I will be using Subversion as my source control repository. If you wish to follow my steps verbatim, please make sure you have TortoiseSVN, SQL Compare and SQL Source Control installed.

Downloading and Installing TeamCity
TeamCity (http://www.jetbrains.com/teamcity/index.html) is free for up to three agents, so it's a great no-risk tool you can use to experiment with.

1. Download the latest version from the JetBrains website. For some reason the TeamCity executable didn't download properly for me, stalling frustratingly at 99%, so I tried again with the zip file download option (see screenshot below), which worked flawlessly.
2. Run the installer using the defaults. This results in a set-up with the server component and agent installed on the same machine, which is ideal for getting started with ease.
3. Check that the build agent is pointing to the server correctly. This has caught me out a few times before. This setting is in C:\TeamCity\buildAgent\conf\buildAgent.properties and for my installation is serverUrl=http\://localhost\:80 . If you need to change this value, for example if you've had to install the Server console to a different port number, the TeamCity Build Agent Service will need to be restarted for the change to take effect.
4. Open the TeamCity admin console on http://localhost , and specify your own designated username and password at first startup.

Putting your database in source control using SQL Source Control
5. Assuming you've got SQL Source Control installed, select a development database in the SQL Server Management Studio Object Explorer and select Link Database to Source Control.
6. For the Link step you can either create your own empty folder in source control, or you can select Just Evaluating, which creates a local Subversion repository for you behind the scenes.
7. Once linked, note that your database turns green in the Object Explorer. Visit the Commit tab to do an initial commit of your database objects by typing in an appropriate comment and clicking Commit.
8. There is a hidden feature in SQL Source Control that opens up TortoiseSVN (provided it is installed) pointing to the linked repository. Keep Shift depressed and right click on the text to the right of 'Linked to'; in the example below, it's the red Evaluation Repository text. Select Open TortoiseSVN Repo Browser. This screen should give you an idea of how SQL Source Control manages the object files behind the scenes.

Back in the TeamCity admin console, we'll now create a new project to monitor the above repository location and to trigger a 'build' each time the repository changes.
9. In TeamCity Administration, select Create Project and give it a name, such as "My first database CI", and click Create.
10. Click on Create Build Configuration, and name it something like "Integration build".
11. Click VCS settings and then Create And Attach new VCS root. This is where you will tell TeamCity about the repository it should monitor.
12. In my case, since I'm using the Just Evaluating option in SQL Source Control, I should select Subversion.
13. In the URL field paste your repository location.
In my case this is file:///C:/Users/David.Atkinson/AppData/Local/Red Gate/SQL Source Control 3/EvaluationRepositories/WidgetDevelopment/WidgetDevelopment
14. Click on Test Connection to ensure that you can communicate with your source control system. Click Save.
15. Click Add Build Step, and Runner Type: Command Line. Should you be familiar with the other runner types, such as NAnt, MSBuild or Powershell, you can opt for these, but for the sake of keeping it simple I will pick the simplest option.
16. If you have installed SQL Compare in the default location, set the Command Executable field to: C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe
17. Flip back to SSMS briefly and add a new database to your server. This will be the database used for continuous integration testing.
18. Set the command parameters according to your server and the name of the database you have created. In my case I created database RedGateCI on server .\sql2008r2, so the parameters are:
/scripts1:. /server2:.\sql2008r2 /db2:RedGateCI /sync /verbose
Note that if you pick a server instance that isn't on your local machine, you'll need the TCP/IP protocol enabled in SQL Server Configuration Manager, otherwise the SQL Compare command line will not be able to connect.
19. Save and select Build Triggering / Add New Trigger / VCS Trigger. This is where you tell TeamCity when it should initiate a build. Click Save.
20. Now return to SQL Server Management Studio and make a schema change (e.g. add a new object) to your linked development database. A blue indicator will appear in the Object Explorer. Commit this change, typing in an appropriate check-in comment. All being well, within 60 seconds (a TeamCity default that can be changed) a build will be triggered.
21. Click on Projects in TeamCity to get back to the overview screen. The build log will show you the console output, which is useful for troubleshooting any issues.

That's it! You now have continuous integration on your database. In future posts I'll cover how you can generate and test the database creation script, the database upgrade script, and run database unit tests as part of your continuous integration script. If you have any trouble getting this up and running please let me know, either by commenting on this post, or email me directly using the email address below.

Technorati Tags: SQL Server
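To make steps 17 and 20 above concrete, here is a minimal T-SQL sketch that is not part of the original post: the CI database name (RedGateCI) comes from step 18, while the development database name (WidgetDevelopment) and the Widget table are hypothetical placeholders invented for illustration.

-- Step 17: create the empty database that the CI build will synchronise (name from step 18)
CREATE DATABASE RedGateCI;
GO
-- Step 20: make a schema change in the linked development database to trigger a build.
-- The database name and table here are hypothetical placeholders; use your own linked database.
USE WidgetDevelopment;
GO
CREATE TABLE dbo.Widget
(
    WidgetID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    WidgetName nvarchar(50) NOT NULL
);
GO

Committing the new table from the SQL Source Control Commit tab should then trigger the TeamCity build within the default 60-second polling interval, as described in step 20.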

    Read the article

< Previous Page | 225 226 227 228 229 230 231 232 233 234 235 236  | Next Page >