Search Results

Search found 1162 results on 47 pages for 'discover'.


  • Corsair Hackers Reboot

    It wasn't easy for me to attend, but it was absolutely worth going. The Linux User Group of Mauritius (LUGM) organised another get-together for open source enthusiasts here on the island. It was strangely named "Corsair Hackers Reboot", but it stands for a positive cause: "Corsair Hackers Reboot Event: A collaborative activity involving LUGM, UoM Computer Club, Fortune Way Shopping Mall and several geeks from around the island, striving to put FOSS into homes & offices. The public is invited to discover and explore Free Software & Open Source." And it was a good opportunity for me and the kids to visit the east coast of Mauritius, too.

    Perfect timing
    It couldn't have been better... Why? Well, for two important reasons (in terms of IT):
    End of support for Microsoft Windows XP - 08.04.2014
    Release of Ubuntu 14.04 Long Term Support - 17.04.2014
    Quite funnily, those two IT dates weren't the initial reasons, and only during the weeks of preparation did we put them together. That made it all the more fitting to promote the use of Linux and open source software in general to a broader audience.

    Getting there...
    Thanks to the new motorway M3 and all the additional road work completed recently, it was very simple to get across the island in a quick and relaxed manner. Compared to my trips in the early days of living in Mauritius (riding a scooter), it was very smooth, and within less than an hour we hit Centrale de Flacq. Well, being in the city doesn't necessarily mean that one has arrived at the destination, but thanks to modern technology I had a quick look at Google Maps, and we finally managed to get a parking space behind the huge bus terminal in Flacq. From there it was just a short walk to Fortune Way. The children were trying to count the number of buses... lots and lots of buses - really impressive, actually.

    What was presented?
    There were different areas set up. Right at the entrance, one's attention was directly drawn towards the elevated hackers' stage. Similar to rock stars performing their gig, there was a bunch of computers, laptops and networking equipment to provide the right working conditions for the coding/programming challenge(s) on the one hand and for the pen-testing or system hacking competition on the other. Personally, I was very impressed that Nitin took care of the pen-testing competition. He only started with Linux in general, and Kali Linux specifically, about a year back. Seeing his personal development from absolute newbie to a decent Linux system administrator within such a short period of time is really impressive. His passion for open source software has become his living.

    Next, seen clockwise, was the Kids' Corner with face painting as the main attraction. Additionally, there were numerous paper printouts to colour, plus a decent workstation with the educational suite GCompris. Of course, my little ones were into that. They have known GCompris for a while, as they are allowed to use it on an IGEL thin client terminal here at home. To simplify my life, I set up GCompris as a full-screen guest session on the server, so they can pass the login screen without any further obstacles. And because it's a thin client hooked up to an XDMCP remote session, I don't have to worry about the hardware on their desk either.

    The next section was the main attraction of the event: BYOD - Bring Your Own Device. Compared to the usual context of BYOD, the corsairs had a completely different intention.
    Here, you could bring your own laptop, and a team of knowledgeable experts - read: geeks and so on - offered to fully convert your system to any Linux distribution of your choice. Even though I came later, I was told that the USB pen drives had been in permanent use: from being prepared via the dd command, to launching LiveCD sessions, to finally installing a fresh Linux system on bare metal. Interestingly, I did a similar job a couple of months ago, upgrading an existing Windows XP system to Xubuntu 13.10. So far, the owner is very happy and enjoys her system almost every evening to shop online, check mail, and read the latest news from the Anime world. Back at the hackers' event, Ish told me that they managed approximately 20 conversions during the day. Furthermore, Ajay and others gladly assisted some visitors with some tricky issues, and by the end of the day you could call it a success. While I was around, an elderly visitor got a full-fledged system conversion to a Linux system running completely in French.

    A little more towards the centre, it was Yasir's turn to demonstrate his Arduino hardware, which he had hooked up to an experimental electrical circuit board connected to an LCD matrix display. That's the real spirit of hacking, and he made some minor adjustments on the fly while demoing the system. Also very interesting: there was a thermal sensor around. Personally, I think that platforms like the Arduino as well as the Raspberry Pi have great potential at a very affordable price to bring a better understanding of electronics as well as computer programming to a broader audience. It would be great to see more of those experiments during future activities.

    And last but not least, there were a small number of vendors. Amongst them was Emtel - once again the sponsor of the general internet connectivity - and another hardware supplier from the Riche Terre shopping mall. They had a good collection of Android-related gimmicks, like an autonomous web cam that can convert any TV with an HDMI connector into an online video chat system, given WiFi. It's actually kind of awesome to have a Skype or Google Hangout video session on the big screen rather than on the laptop.

    Some pictures of the event:
    LUGM: Great conversations on Linux, open source and free software during the Corsair Hackers Reboot.
    LUGM: Educational workstation running the GCompris suite attracted the youngest attendees of the day. Of course, face painting had to be done prior to hacking...
    LUGM: Nadim demoing some Linux specifics to interested visitors. Everyone was pretty busy during the whole day.
    LUGM: The hacking competition, here pen-testing a wireless connection and access point between multiple machines.
    LUGM: Well-prepared workstations to 'upgrade' visitors' machines to any Linux operating system.

    Final thoughts
    Gratefully, during the preparations for the event I was invited to leave some comments or suggestions, and the LUGM team did a great job. The outdoor banner was an eye-catcher, the various flyers and posters for the event were clearly written, and as far as I understood from the quick chats I had with Ish, Nadim, Nitin, Ajay, and of course others, all were very happy about the event execution. Great job, LUGM! And I'm already looking forward to the next Corsair Hackers Reboot event... Crossing fingers: very soon and hopefully this year again :)

    Update: In the media
    The event was announced in the local media, too.
L'Express: Salon informatique: Hacking Challenge à Flacq

    Read the article

  • Following my passion

    - by Maria Sandu
    What makes you go the extra mile? What makes you move forward and be ambitious? My name is Alin Gheorghe and I am currently working as a Contracts Administrator in the Shared Service Centre in Bucharest, Romania. I graduated from the Political Science Faculty of the National School of Political and Administrative Studies here in Bucharest and I am currently undergoing a Master's programme on Security and Diplomacy at the same university. Although I have been working a full-time job here at Oracle since January 2011 and also going to school after work, I am going to tell you how I spend my spare time and about my passion.

    I always thought that if one doesn't have something he would consider a passion, it's always just a matter of time until he discovers one. Looking back, I can tell you that I discovered mine when I was 14 years old: I remember watching a football game when suddenly I became fascinated by the "man in black" that all the football players obeyed during the match. That year I attended and passed a referee course within my local referee committee, and about 6 months later I was delegated to my first official game at a youth tournament. Almost 10 years have passed since then, and I can tell you that I very much love and appreciate this activity, which I have spent each and every weekend doing, 9 months every year, accumulating more than 600 official games so far.
    And even if not having a real free weekend or holiday might sound very consuming, I can say that having something I am passionate about helps me keep myself balanced and happy while giving me an option to channel any stress or anxiety I may feel. I think it's important to have something of your own besides work that you spend time and effort on. Whether it's painting, writing or a sport, having a passion can only have a positive effect on your life. And as with every extra thing, it's not always easy to follow your passion, but is it worth it? Speaking from my own experience I am sure it is, and here are some tips and tricks I constantly use not to give up on my passion:

    No matter how much time you spend at work and how much credit you get for that, it will always be the passion-related achievements that comfort you more and boost your self-esteem, and nothing compares to that feeling. I always try to keep this in mind so that each time I think about giving up I get even more ambitious to move forward.

    Everybody can just do what they are paid to do or what they are requested to do at work, but not everybody can go that extra mile when it comes to following their passion and putting in extra work for that. By exercising this constantly, you get used to applying the same attitude to work-related tasks.

    It takes accurate planning, anticipation and forecasting to combine your work with your passion. Therefore having a full schedule and keeping up with it will only help develop and exercise such skills, and will also prove to you that you are up to such a challenge.

    I always keep in mind as a final goal that if you get very good at your passion you can actually start earning from it. And I think that is the ultimate level, when you can say that you make a living by doing exactly what you are passionate about.

    In conclusion, by taking the easy way not only do you miss out on something nice, but life's priceless rewards are usually earned by those things that you actually believe in and know how to stand up for over time.

    Read the article

  • Why do we (really) program to interfaces?

    - by Kyle Burns
    One of the earliest lessons I was taught in enterprise development was "always program against an interface". This was back in the VB6 days, and I quickly learned that no code would be allowed to move to the QA server unless my business objects and data access objects were each defined as an interface with a matching implementation class. Why? "It's more reusable" was one answer. "It doesn't tie you to a specific implementation" was a slightly more knowing answer. And let's not forget the discussion-ending "it's a standard". The problem with these responses was that the senior people didn't really understand the reason we were doing the things we were doing, and because of that we were entirely unable to realize the intent behind the practice - we simply used interfaces and had a bunch of extra code to maintain to show for it.

    It wasn't until a few years later that I finally heard the term "Inversion of Control". Simply put, Inversion of Control takes the creation of objects that used to be within the control (and therefore the responsibility) of your component and moves it to some outside force. For example, consider the following code, which follows the old "always program against an interface" rule in the manner of many corporate development shops:

    ICatalog catalog = new Catalog();
    Category[] categories = catalog.GetCategories();

    In this example, I met the requirement of the rule by declaring the variable as ICatalog, but I didn't achieve "it doesn't tie you to a specific implementation" because I explicitly created an instance of the concrete Catalog object. If I want to test the functionality of the code I just wrote, I have to have an environment in which Catalog can be created along with any of the resources upon which it depends (e.g. configuration files, database connections, etc.) in order to test my functionality. That's a lot of setup work, and one of the things that I think ultimately discourages real buy-in of unit testing in many development shops.

    So how do I test my code without needing Catalog to work? A very primitive approach I've seen is to change the line that instantiates catalog to read:

    ICatalog catalog = new FakeCatalog();

    Once the test is run and passes, the code is switched back to the real thing. This obviously poses a huge risk of introducing test code into production, and in my opinion is worse than just keeping the dependency and its associated setup work. Another popular approach is to make use of factory methods, which use an object whose "job" is to know how to obtain a valid instance of the object. Using this approach, the code may look something like this:

    ICatalog catalog = CatalogFactory.GetCatalog();

    The code inside the factory is responsible for deciding "what kind" of catalog is needed. This is a far better approach than the previous one, but it does make projects grow considerably, because now, in addition to the interface, the real implementation, and the fake implementation(s) for testing, you have added a minimum of one factory (or at least a factory method) for each of your interfaces. Once again, developers say "that's too complicated and has me writing a bunch of useless code" and quietly slip back into just creating a new Catalog and chalking any test failures up to "it will probably work on the server". This is where software intended specifically to facilitate Inversion of Control comes into play.
    There are many libraries that take on the Inversion of Control responsibilities in .NET, and most of them have their own pros and cons. From this point forward I'll discuss concepts from the standpoint of the Unity framework produced by Microsoft's Patterns and Practices team. I'm primarily focusing on this library because questions about it inspired this posting.

    At Unity's core, and that of most any IoC framework, is a catalog or registry of components. This registry can be configured either through code or using the application's configuration file, and in the simplest terms it says "interface X maps to concrete implementation Y". It can get much more complicated, but I want to keep things at the "what does it do" level instead of "how does it do it". The object that exposes most of the Unity functionality is the UnityContainer. This object exposes methods to configure the catalog as well as the Resolve<T> method, which is used to obtain an instance of the type represented by T. When using the Resolve<T> method, Unity does not necessarily just "new up" the requested object; it can also track dependencies of that object and ensure that the entire dependency chain is satisfied.

    There are three basic ways that I have seen Unity used within projects: classes directly using the Unity container, classes requiring injection of dependencies, and classes making use of the Service Locator pattern.

    The first usage of Unity is when classes are aware of the Unity container and directly call its Resolve method whenever they need the services advertised by an interface. The upside of this approach is that IoC is utilized, but the downside is that every class has to be aware that Unity is being used and is tied directly to that implementation.

    Many developers don't like the idea of as close a tie to a specific IoC implementation as is represented by using Unity within all of your classes, and for the most part I agree that this isn't a good idea. As an alternative, classes can be designed for Dependency Injection. Dependency Injection is where a force outside the class itself manipulates the object to provide implementations of the interfaces that the class needs to interact with the outside world. This is typically done either through constructor injection, where the object has a constructor that accepts an instance of each interface it requires, or through property setters accepting the service providers. When using dependency injection, I lean toward the use of constructor injection because I view the constructor as being a much better way to "discover" what is required for the instance to be ready for use. During resolution, Unity looks for an injection constructor and will attempt to resolve instances of each interface required by the constructor, throwing an exception if unable to meet the advertised needs of the class. The upside of this approach is that the needs of the class are very clearly advertised and the class is unaware of which IoC container (if any) is being used. The downside is that you're required to maintain the objects passed to the constructor as instance variables throughout the life of your object, and that objects which coordinate with many external services require a lot of additional constructor arguments (this gets ugly and may indicate a need for refactoring).
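    To make the registry and constructor-injection mechanics described above concrete, here is a minimal C# sketch against Unity's container API (UnityContainer, RegisterType and Resolve). The CatalogBrowser class is hypothetical, standing in for any class that advertises its needs through its constructor; Catalog and Category are the types from the earlier snippets.

    using Microsoft.Practices.Unity;

    public interface ICatalog
    {
        Category[] GetCategories();
    }

    // Hypothetical consumer designed for constructor injection: its
    // needs are advertised by the constructor signature alone.
    public class CatalogBrowser
    {
        private readonly ICatalog _catalog;

        public CatalogBrowser(ICatalog catalog)
        {
            _catalog = catalog;
        }

        public int CountCategories()
        {
            return _catalog.GetCategories().Length;
        }
    }

    public static class Bootstrapper
    {
        public static CatalogBrowser Compose()
        {
            // The registry: "interface X maps to concrete implementation Y".
            var container = new UnityContainer();
            container.RegisterType<ICatalog, Catalog>();

            // Unity finds CatalogBrowser's injection constructor and
            // resolves an ICatalog to satisfy it.
            return container.Resolve<CatalogBrowser>();
        }
    }

    Swapping Catalog for FakeCatalog in a test now touches a single registration line instead of every consumer.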
    The final way that I've seen and used Unity is to make use of the Service Locator pattern, of which the Patterns and Practices team has also provided a Unity-compatible implementation (a minimal sketch of this style follows at the end of this post). When using the ServiceLocator, your class calls the ServiceLocator in places where it would have called Resolve on the Unity container. Like using Unity directly, this ties you directly to the ServiceLocator implementation and makes your code aware that dependency injection is taking place, but it does have the upside of giving you the freedom to swap out the underlying IoC container if necessary. I'm not hugely concerned with hiding IoC entirely from the class (I view this as a "nice to have"), so the single biggest problem that I see with the ServiceLocator approach is that it provides no way to proactively advertise needs in the way that constructor injection does, allowing more opportunity for difficult-to-track runtime errors.

    This blog entry has not been intended in any way to be a definitive work on IoC, but rather as something to spur thought about why we program to interfaces and some ways to reach the intended value of the practice instead of having it just complicate your code. I hope that it helps somebody begin or continue a journey away from being a "Cargo Cult Programmer".
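    As referenced above, here is a minimal sketch of the locator style, assuming the Common Service Locator library and its Unity adapter (ServiceLocator.SetLocatorProvider and UnityServiceLocator); the exact entry-point name in the implementation the author used may differ.

    using Microsoft.Practices.ServiceLocation;
    using Microsoft.Practices.Unity;

    public static class LocatorBootstrapper
    {
        public static void Configure()
        {
            // Same registry as before, but published through the locator
            // so classes don't reference the Unity container directly.
            var container = new UnityContainer();
            container.RegisterType<ICatalog, Catalog>();
            ServiceLocator.SetLocatorProvider(
                () => new UnityServiceLocator(container));
        }
    }

    public class CatalogConsumer
    {
        public Category[] LoadCategories()
        {
            // The dependency is pulled on demand; note that nothing in
            // this class's constructor advertises that ICatalog is needed.
            ICatalog catalog = ServiceLocator.Current.GetInstance<ICatalog>();
            return catalog.GetCategories();
        }
    }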

    Read the article

  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the third in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional "public" data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your socially enabled Big Data analytics and CX management.

    Let's assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in social media, you are identifying relevant posts and following up with direct engagement where warranted (1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step?

    Augmenting Social Data with other Public Data for More Advanced Analytics
    When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data against Key Performance Indicators (KPIs) to achieve and optimize business value - and, in some cases, to predict future performance so you can make appropriate course corrections and change the outcome to your advantage while you still can. The data to acquire, process and analyze for this is very nuanced:
    It can vary across structured, semi-structured, and unstructured data
    It can span content, profile, and communities-of-profiles data
    It is increasingly public, curated and user generated
    The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use "quality" curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data, enriched with insights, to be exported into the enterprise's business applications. The following diagram shows the requirements for this next-generation data and insights service (DaaS):

    Some quick points on these requirements:
    Public data, which in this context is about common business entities, such as:
    Customers, Suppliers, Partners, Competitors (all organizations)
    Contacts, Consumers, Employees (all people)
    Products, Brands
    This data can be broadly categorized incrementally as:
    Base utility data (address, industry classification)
    Public master reference data (trade style, hierarchy)
    Social/web data (news, feeds, graph)
    Transactional data generated by enterprise processes, workflows, etc.
    This data has traits of high volume, variety, velocity, etc., and the technology needed to efficiently integrate it for your needs includes:
    Change management of public reference data across all categories
    Applied Big Data to extract static as well as real-time insights
    Knowledge diagnostics and data mining
    As you consider how to deploy this solution, many of our customers will be using an online "cloud" service that provides quality data and insights uniformly to all their necessary applications.
    In addition, they are requesting a service that is:
    Agile and easy to use: applications integrated with the service can obtain data on demand, quickly and simply
    Cost-effective: pre-integrated into applications, so customers don't have to do the integration themselves
    High in data quality: a single point of access to reference data for data quality and linkages to transactional, curated and social data
    Supportive of data governance: data becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place

    Data-as-a-Service (DaaS)
    Just as the cloud has transformed, and now offers a better path for, how an enterprise manages its IT - from infrastructure, to platform, to software (IaaS, PaaS, and SaaS) - the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer cloud-based data services and gain initial traction. On one side of the DaaS continuum, we see an "appliance" type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market "exchange" approach, where ISVs and data publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider's philosophy on how fast the rate of commoditization of certain data types will occur.

    How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility and master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do address verification treatments for accounts, contacts, etc. If you design and revise this service in such a way that it is also easily available to social analytics needs, you could extend it to launch geo-location-based social use cases (marketing, sales, etc.).

    Our fundamental belief is that value-added data, achieved through enrichment with specialized algorithms as well as applying business "know-how" to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combination capabilities to extract and integrate the right data from the right sources with the right factoring at the right time, for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage.

    Let's look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands' social insights.
    The reference data allows for easy sorting, export and integration into a set of CRM use cases for analytics, sales and marketing. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to build broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with.

    DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions or perspectives. In the meantime, we will continue to share insights as we can.

    Photo: Erik Araujo, stock.xchng

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premises physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server.

    MAP Toolkit 8.0 Beta is available for download here.

    Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:
    Perform an inventory
    View the Windows Azure VM Readiness results and report
    Collect performance data to determine VM sizing
    View the Windows Azure Capacity results and report

    Perform an inventory:
    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below:
    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.
    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue.

    Description of discovery methods:
    Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
    Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
    System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
    Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses aren't being used within the range.
    Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
    Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.
    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account, which is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect to and inventory computers in your environment, you have to configure your machines for inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.
    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.
    6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.
    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish, the inventory process will start, as shown below:

    Windows Azure Readiness results and report
    After the inventory has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the operating system and SQL Server assessment and provides a recommendation on how to resolve it if a component is not supported.

    Collect performance data
    Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

    Windows Azure Capacity results and report
    After the performance metrics are collected, the Azure VM Capacity tile will display the number of virtual machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested virtual machine sizes.

    MAP Toolkit 8.0 Beta is available for download here.

    Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful references:
    Windows Azure Homepage
    How-to guides for Windows Azure Virtual Machines
    Provisioning a SQL Server Virtual Machine on Windows Azure
    Windows Azure Pricing

    Peter Saddow
    Senior Program Manager - MAP Toolkit Team

    Read the article

  • Benefits of Behavior Driven Development

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2013/07/26/benefits-of-behavior-driven-development.aspx

    Continuing my previous article on BDD, I wanted to point out some benefits of BDD, and since BDD is an extension of Test Driven Development (TDD), you get those as well. I'll add another article on some possible downsides of this approach.

    There are many articles about the benefits of TDD, and they apply to BDD. I've pointed out some here and copied some of the main points from each article, but there are many more, including the book The Art of Unit Testing by Roy Osherove.

    http://geekswithblogs.net/leesblog/archive/2008/04/30/the-benefits-of-test-driven-development.aspx (Lee Brandt):
    Stability
    Accountability
    Design Ability
    Separated Concerns
    Progress Indicator

    http://tddftw.com/benefits-of-tdd/:
    Help maintainers understand the intention behind the code
    Bring validation and proper data handling concerns to the forefront
    Writing the tests first is fun
    Better APIs come from writing testable code
    TDD will make you a better developer

    http://www.slideshare.net/dhelper/benefit-from-unit-testing-in-the-real-world (from Typemock). Take a look at the slides, especially the extra time required for TDD (slide 10) and the next one on the bugs avoided using TDD (slide 11):
    Less bugs (slide 11)
    About testing and development (slide 13)
    Increase confidence in code (slide 14)
    Fearlessly change your code (slide 14)
    Document requirements (slide 14); also see http://visualstudiomagazine.com/articles/2013/06/01/roc-rocks.aspx
    Discover usability issues early (slide 14)

    All these points and articles are great, and there are many more. The following are my additions to the benefits of BDD from using it in real projects for my company:
    July 2013 on MSDN - Behavior-Driven Design with SpecFlow
    Compile and Execute Requirements in Microsoft .NET ~ Video from TechEd 2012 (Scott Allen did a very informative TDD and MVC module, but to me he is doing BDD)

    Communication
    I was working through a complicated task where the decision tree kept growing. After writing out the Given, When, Then of the scenario (see the SpecFlow sketch at the end of this article), I was able to tell QA what I had worked through for their initial test cases. They were able to add more from there. It is also useful to use this language with other developers, managers, or clients to help make informed decisions on whether it meets the requirements or whether it can be simplified to save time (money).

    Thinking through solutions before starting to code
    This was the biggest benefit to me. I like to jump into coding to figure out the problem. Many times I don't understand my path well enough and have to do some parts over. A past supervisor told me several times during reviews that I need to get better at seeing "the forest for the trees". When I sit down and write out the behavior that I need to implement, I force myself to think things out further and catch scenarios before they get to QA. A co-worker who is new to BDD, which we've been using in our new project for the last 6 months, said "It really clarifies things". It took him a while to understand it all, but now he's seeing the value of this approach (yes, there are some downsides, but that is a different issue).

    Developers' confidence
    This is huge for me. With tests in place, my confidence grows that I won't break code that I'm not directly changing. In the past, I've worked on projects without tests and we would frequently find regression bugs (or worse, the users would find them). That isn't fun.
    We don't catch all problems with the tests, but when QA catches one, I can write a test to make sure it doesn't happen again. It's also good for releasing code and telling your manager that it's good to go. As time goes on and the code gets older, how confident are you that checking in code won't break something somewhere else?

    Merging code - pre-release confidence
    If you're merging code a lot, it's nice to have the tests to help ensure you didn't merge incorrectly.

    Interrupted work
    I had a task that I started and planned out, then was interrupted for a month because of different priorities. When I started it up again and un-shelved my changes, I had the BDD specs and they helped me remember what I had figured out and what was left to do. It would have been much more difficult without the specs and tests.

    Testing and verifying complicated scenarios
    Sometimes in the UI there are scenarios that get tricky because there are a lot of steps involved (click here to open the dialog, enter the information, make sure it's valid, when I click cancel it should do {x}, when I click ok it should close and do {y}, then do this, etc.). With BDD I can avoid some of the mouse clicking, define the scenarios, and have them re-run quickly, without using a mouse. UI testing is still needed, but this helps a bunch. The same can be true for tricky server logic.

    Documentation of assumptions and specifications
    The BDD spec tests (Jasmine or SpecFlow or another tool) also work as documentation and show what the original developer was trying to accomplish. It's not a separate Word document, so developers will keep it up to date instead of letting it become obsolete. What happens if you leave the project (consulting, new job, etc.) with no specs or, at the least, good comments in the code? Sometimes I think of a new scenario, so I add a failing spec and continue in the same stream of thought (rather than forgetting it because it was on a piece of paper or in a notepad). Then later I can come back, handle it, and have it documented.

    Jasmine tests and JavaScript -> help deal with the non-typed system
    I like JavaScript, but I also dislike working with JavaScript. I miss C# telling me at build time if a property doesn't actually exist. I like the idea of TypeScript and hope to use it more in the future. I also use KnockoutJs, which has observables that need to be called with a trailing (), since the observable is a function. It's hard to remember when to use () or not, and the Jasmine specs/tests help ensure the correct usage.

    This should give you an idea of the benefits that I see in using the BDD approach. I'm sure there are more. It takes a lot of practice, investment and experimentation to figure out how to approach this and to get comfortable with it. I agree with Scott Allen in the video I linked above: "Remember that TDD can take some practice. So if you're not doing test-driven design right now? You can start and practice and get better. And you'll reach a point where you'll never want to get back."
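    As promised above, here is a minimal sketch of what the Given, When, Then structure looks like as SpecFlow step bindings in C#. The Cart and Item classes are hypothetical stand-ins for whatever is under test, and the assertion assumes an MSTest-style Assert class.

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using TechTalk.SpecFlow;

    // Hypothetical classes under test.
    public class Item { }

    public class Cart
    {
        private readonly List<Item> _items = new List<Item>();
        public int Count { get { return _items.Count; } }
        public void Add(Item item) { _items.Add(item); }
        public void RemoveAt(int index) { _items.RemoveAt(index); }
    }

    // Bindings for a scenario written in a .feature file as:
    //   Given a cart with 2 items
    //   When the user removes an item
    //   Then the cart shows 1 item
    [Binding]
    public class CartSteps
    {
        private Cart _cart;

        [Given(@"a cart with (\d+) items")]
        public void GivenACartWithItems(int count)
        {
            _cart = new Cart();
            for (var i = 0; i < count; i++)
            {
                _cart.Add(new Item());
            }
        }

        [When(@"the user removes an item")]
        public void WhenTheUserRemovesAnItem()
        {
            _cart.RemoveAt(0);
        }

        [Then(@"the cart shows (\d+) item")]
        public void ThenTheCartShowsItems(int expected)
        {
            Assert.AreEqual(expected, _cart.Count);
        }
    }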

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I'd like to share with the SQL community the feedback for my PASS Summit 2010 workshop. For those who were not there, my workshop was "BI From A-Z", and its main objective was to introduce people to the BI world not only from a technical point of view but also with a strong insistence on a methodological and "engineered" approach.

    The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me every day over these last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines "just" by putting people together and giving them some rules to follow. (Of course this is an oversimplification, but I think you get what I mean.) The key point of engineering is that, after having defined the project blueprint, you can give a huge number of people the rules to follow, the correct tools to implement the rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? That is a very big question, so my scope for now is limited to BI.

    So that's the main point of my workshop: an entry-level approach to BI (the level was 200) that allows attendees to learn the basics, to understand which tools they should use for which purpose and, above all, a set of rules and tools that make a BI solution scalable in terms of the people working on it, while still maintaining very good quality. All of this was done not by focusing only on practice but by explaining the theory behind it, to show how it can help *a lot* in building a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even when technologies change. This is a very demanding challenge nowadays with the coming of Denali and its column-aligned storage and the shiny new DAX language.

    As you may understand, I was looking forward to the feedback, since you may have noticed that there's a lot of "architectural" stuff in IT but really nothing on "engineering". So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more "engineering" is something only I feel or something more broadly shared.

    I'm very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (of nearly 200 sessions)! Here are the detailed evaluations (each answer value followed by its number of responses):

    How would you rate the usefulness of the information presented in your day-to-day environment? 4.75
    3: 1 | 4: 12 | 5: 42

    How would you rate the Speaker's presentation skills? 4.80
    3: 1 | 4: 9 | 5: 45

    How would you rate the Speaker's knowledge of the subject? 4.95
    4: 3 | 5: 52

    How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75
    3: 2 | 4: 10 | 5: 43

    How would you rate the amount of time allocated to cover the topic/session? 4.44
    3: 7 | 4: 17 | 5: 31

    How would you rate the quality of the presentation materials? 4.62
    4: 21 | 5: 34

    The comments were all very positive.
    Many of them asked for more time on the subject (or to shorten the very last topics). I'll treasure these comments and will review the content accordingly. We'll organize a two-day class on this topic, where more examples will be shown and some arguments will be explained more deeply.

    I'd just like to answer a comment that asks how much of what I showed is "universally applicable". I can tell you that all of our BI projects follow these rules, and they've been applied to different markets (Insurance, Fashion, GDO) with different people and different teams, and they allowed us to be "adaptive" toward the customer. The better the rules are defined, and the more tools there are that support their implementation, the easier it is to add new people to the project and to add or change solution features. Think of a car: how come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements.

    Last but not least: thanks a lot to everyone for coming!

    Read the article

  • Are you reporting Visual Studio 2012 issues to Microsoft correctly?

    - by Tarun Arora
    Issues you may run into while using Visual Studio need to be reported to the Microsoft product team via the Microsoft Connect site. The Microsoft team then tries to reproduce the issue using the details provided by you. If the information you provide isn't sufficient to reproduce the issue, the team tries to contact you for specifics; this not only increases the cycle time to resolution, but the lack of communication also results in issues not being resolved. So, when I report an issue, one part of me tells me to include as much detail about the issue as I can - combining screenshots, repro steps, system information, Visual Studio version information, ... - while the other half tells me this is too time consuming: leave it for now and come back to fill in all these details later. Reporting a bug but not including the supporting information is an invitation to excuses like …

    Microsoft has absolutely changed this experience for VS 2012. The Microsoft Visual Studio Feedback tool is designed to simplify the process of providing feedback and reporting issues to Microsoft that you may encounter while using Microsoft Visual Studio 2012. Note - the Microsoft Visual Studio 2012 Feedback client currently only works for VS 2012 and not for any other version of Visual Studio.

    Setting up the Microsoft Visual Studio 2012 Feedback client
    Open Visual Studio and, from the Tools menu, select Extensions and Updates. In the Extensions and Updates window, click Online in the left pane and search using the text 'feedback', then download and install the Microsoft Visual Studio 2012 Feedback Tool by following the instructions in the wizard. Note - restarting Visual Studio after the install is a must!

    How to report a bug for Visual Studio 2012
    Click on the Help menu and choose Report a Bug. You should see a Microsoft Visual Studio 2012 Feedback Tool icon come up in the system tray icon area. You'll need to accept the privacy statement. You have the option of reporting the feedback as private or public. Microsoft works with several partners, MVPs and vendors who get access to early bits of Microsoft products for evaluation; this is where it becomes essential to report the feedback privately. I would choose the Public option otherwise. After all, if it's out there in public, others can discover and add to it easily. You now have the option to report a new issue or add to an existing issue. Should you choose to add to an existing issue, you should have the feedback ID of the issue available; this can be obtained from the Microsoft Connect site. For now I am going to focus on reporting a new feedback item privately.

    Filling out the feedback details
    You will notice that VsInfo.xml and DxDiagOutput.txt are automatically attached as you enter this screen (more on that later).

    Feedback type
    Choose the feedback type from Performance, Hang, Crash or Other. Note - the Record button will only be enabled once you have chosen the feedback type; bug-repro recording is not available for Windows Server 2008.

    Effective title and description
    Enter a title that helps us differentiate the bug when it appears in a list, so that we can group it with any related bugs, assign it to a developer more effectively, and resolve it more quickly. Example: imagine that you are submitting a bug because you tried to install Service Pack 1 and got a message that Visual Studio is not installed even though it is.
    Helpful: Installed Visual Studio version not detected during Service Pack 1 setup.
    Not helpful: Service Pack 1 problem.
    Tip: write the problem description first, and then distil it to create a title. Example description:
    Helpful: When I run Service Pack 1 Setup, I get the message "No Visual Studio version is detected" even though I have Visual Studio 2010 Ultimate and Visual C++ 2010 Express installed on my machine. Even though I uninstalled both editions, and then reinstalled first Ultimate and then Express, I still get the message.

    Record: becoming a first-class citizen
    Often a repro recording is invaluable for describing and deciphering the issue. Please use this feature to send actionable feedback. The record-repro feature works differently depending on the feedback type you selected. Details for each recording option follow. You can start recording simply by selecting a feedback type and clicking on the "Record" button.

    When "Performance" is the bug type: when the Microsoft Visual Studio trace recorder starts, perform the actions that show the performance problem you want to report and then click on the "Stop Recording" button as soon as you experience the performance problem. Because the tool optimizes trace collection, you can run it for as long as it takes to show the problem, up to two hours. Note that you need to stop recording as soon as the performance issue occurs, because the tool captures only the last couple of minutes of your actions to optimize the trace collection. After you stop the recording, the tool takes up to two minutes to assemble the data and attach an ETLTrace.zip file to your bug report. The data includes information about Windows events and the Visual Studio code path. Note that running the Microsoft Visual Studio trace recorder requires elevated user privileges.

    When "Crash" is the bug type: when the dialog box appears, select the running Visual Studio instance for which you want to show the steps that cause a crash. When the crash occurs, click on the "Stop Record" button. After you do this, two files are attached to your bug report - an AutomaticCrashDump.zip file that contains information about the crash and a ReproSteps.zip file that shows the repro steps. Repro steps are captured by the Windows Problem Steps Recorder. Note that you can pause the recording and resume later, or, for a specific step, add additional comments.

    When "Hang" is the bug type: the process for recording the steps that cause a hang resembles the one for crashes. The difference is that you can even collect a dump file after VS hangs; start the VSFT either from the system tray or by starting a new instance of VS, select "Hang" as the feedback type and click on the "Record" button. You will be prompted for which VS instance to collect a dump, so select the VS instance that hung. VSFT collects a dump file regarding the hang, called MiniDump.zip, and attaches it to your bug report.

    When "Other" is the bug type: when the Problem Steps Recorder starts, perform the actions that show the issue you want to report and then choose the "Stop" button. You can pause the recording and resume later, or, for a specific step, add additional comments. Once you're done, ReproSteps.zip is added to your bug report.

    Pre-attached files
    It is essential for Microsoft to know what version of the product you are currently using and what the current configuration of your system is. Note - the total size of all attachments in a bug report cannot exceed 2 GB, and every uncompressed attachment must be smaller than 512 MB.
    We recommend that you assemble all of your attachments, compress them together into a .zip file, and then attach the .zip file.

    Taking a screenshot
    Associate a screenshot by clicking the Take Screenshot button, then choose either the entire desktop, a specific monitor (useful if you are working in a multi-monitor configuration) or the specific window in question.

    And finally... click Submit. If you need further help, more details can be found here. You can view your feedback online by using the following URL: https://connect.microsoft.com/VisualStudio/SearchResults.aspx?SearchQuery=<feedbackId>

    Happy bug logging!

    Read the article

  • Configure PERL DBI and DBD in Linux

    - by Balualways
    I am new to Perl and I work on a Linux OEL 5x server. I am trying to configure the Perl DB modules for Oracle connectivity (the DBD and DBI modules). Can anyone help me out with the installation procedure? I tried CPAN, but it didn't really work out. Any help would be appreciated. I am not quite sure whether I need to initialize any variables other than $LD_LIBRARY_PATH and $ORACLE_HOME.

    These are my observations:

    ISSUE: I am getting the following error while using the DBI module to connect to Oracle:

    install_driver(Oracle) failed: Can't locate loadable object for module DBD::Oracle in @INC (@INC contains: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi /usr/lib/perl5/5.8.8 .) at (eval 3) line 3
    Compilation failed in require at (eval 3) line 3.
    Perhaps a module that DBD::Oracle requires hasn't been fully installed at connectdb.pl line 57

    I installed the DBD for Oracle from /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD/DBD-Oracle-1.50. Could you please take a look at the steps and correct me if I am wrong?

    Observations:

    $ echo $LD_LIBRARY_PATH
    /opt/CA/UnicenterAutoSysJM/autosys/lib:/opt/CA/SharedComponents/Csam/SockAdapter/lib:/opt/CA/SharedComponents/ETPKI/lib:/opt/CA/CAlib

    $ echo $ORACLE_HOME
    /usr/local/oracle/ORA

    This is how I tried to install the DBD module:
    1. Download the file DBD 1.50 for Oracle
    2. Copy it to /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/DBD
    3. Untar it and run Makefile.PL

    Output:
    Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/
    Configuring DBD::Oracle for perl 5.008008 on linux (x86_64-linux-thread-multi)
    Remember to actually *READ* the README file! Especially if you have any problems.
    Installing on a linux, Ver#2.6
    Using Oracle in /opt/oracle/product/10.2
    DEFINE _SQLPLUS_RELEASE = "1002000400" (CHAR)
    Oracle version 10.2.0.4 (10.2)
    Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
    Found /opt/oracle/product/10.2/rdbms/demo/demo_rdbms64.mk
    Found /opt/oracle/product/10.2/rdbms/lib/ins_rdbms.mk
    Using /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
    Your LD_LIBRARY_PATH env var is set to '/usr/local/oracle/ORA/lib:/usr/dt/lib:/usr/openwin/lib:/usr/local/oracle/ORA/ows/cartx/wodbc/1.0/util/lib:/usr/local/oracle/ORA/lib:/usr/local/sybase/OCS-12_0/lib:/usr/local/sybase/lib:/home/oracle/jdbc/jdbcoci73/lib:./'
    WARNING: Your LD_LIBRARY_PATH env var doesn't include '/opt/oracle/product/10.2/lib' but probably needs to.
    Reading /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk
    Reading /usr/local/oracle/ORA/rdbms/lib/env_rdbms.mk
    Attempting to discover Oracle OCI build rules
    sh: make: command not found
    by executing: [make -f /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk build ECHODO=echo ECHO=echo GENCLNTSH='echo genclntsh' CC=true OPTIMIZE= CCFLAGS= EXE=DBD_ORA_EXE OBJS=DBD_ORA_OBJ.o]
    WARNING: Oracle build rule discovery failed (32512)
    Add path to make command into your PATH environment variable.
    Oracle oci build prolog: [sh: make: command not found]
    Oracle oci build command: []
    WARNING: Unable to interpret Oracle build commands from /opt/oracle/product/10.2/rdbms/demo/demo_rdbms.mk.
    (Will continue by using fallback approach.)
    Please report this to [email protected]. See README for what to include.
Found header files in /opt/oracle/product/10.2/rdbms/public. client_version=10.2 DEFINE= -Wall -Wno-comment -DUTF8_SUPPORT -DORA_OCI_VERSION=\"10.2.0.4\" -DORA_OCI_102 Checking for functioning wait.ph System: perl5.008008 linux ca-build9.us.oracle.com 2.6.20-1.3002.fc6xen #1 smp thu apr 30 18:08:39 pdt 2009 x86_64 x86_64 x86_64 gnulinux Compiler: gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_REENTRANT -D_GNU_SOURCE -fno-strict-aliasing -pipe -Wdeclaration-after-statement -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm Linker: not found Sysliblist: -ldl -lm -lpthread -lnsl -lirc Oracle makefiles would have used these definitions but we override them: CC: cc CFLAGS: $(GFLAG) $(OPTIMIZE) $(CDEBUG) $(CCFLAGS) $(PFLAGS)\ $(SHARED_CFLAG) $(USRFLAGS) [$(GFLAG) -O3 $(CDEBUG) -m32 $(TRIGRAPHS_CCFLAGS) -fPIC -I/usr/local/oracle/ORA/rdbms/demo -I/usr/local/oracle/ORA/rdbms/public -I/usr/local/oracle/ORA/plsql/public -I/usr/local/oracle/ORA/network/public -DLINUX -D_GNU_SOURCE -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -DSLTS_ENABLE -DSLMXMX_ENABLE -D_REENTRANT -DNS_THREADS -fno-strict-aliasing $(LPFLAGS) $(USRFLAGS)] build: $(CC) $(ORALIBPATH) -o $(EXE) $(OBJS) $(OCISHAREDLIBS) [ cc -L$(LIBHOME) -L/usr/local/oracle/ORA/rdbms/lib/ -o $(EXE) $(OBJS) -lclntsh $(EXPDLIBS) $(EXOSLIBS) -ldl -lm -lpthread -lnsl -lirc -ldl -lm $(USRLIBS) -lpthread] LDFLAGS: $(LDFLAGS32) [-m32 -o $@ -L/usr/local/oracle/ORA/rdbms//lib32/ -L/usr/local/oracle/ORA/lib32/ -L/usr/local/oracle/ORA/lib32/stubs/] Linking with /usr/local/oracle/ORA/rdbms/lib/defopt.o -lclntsh -ldl -lm -lpthread -lnsl -lirc -ldl -lm -lpthread [from $(DEF_OPT) $(OCISHAREDLIBS)] Checking if your kit is complete... Looks good LD_RUN_PATH=/usr/local/oracle/ORA/lib Using DBD::Oracle 1.50. Using DBD::Oracle 1.50. Using DBI 1.52 (for perl 5.008008 on x86_64-linux-thread-multi) installed in /usr/lib64/perl5/vendor_perl/5.8.8/x86_64-linux-thread-multi/auto/DBI/ Writing Makefile for DBD::Oracle Writing MYMETA.yml and MYMETA.json *** If you have problems... read all the log printed above, and the README and README.help.txt files. (Of course, you have read README by now anyway, haven't you?)

    Read the article

  • Ajax Control Toolkit July 2011 Release and the New HTML Editor Extender

    - by Stephen Walther
    I’m happy to announce the July 2011 release of the Ajax Control Toolkit which includes important bug fixes and a completely new HTML Editor Extender control. You can download the July 2011 Release by visiting the Ajax Control Toolkit CodePlex site at: http://AjaxControlToolkit.CodePlex.com Using the New HTML Editor Extender Control You can use the new HTML Editor Extender to extend any standard ASP.NET TextBox control so that it supports rich formatting such as bold, italics, bulleted lists, numbered lists, typefaces and different foreground and background colors. The following code illustrates how you can extend a standard ASP.NET TextBox control with the HtmlEditorExtender: <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Simple.aspx.cs" Inherits="WebApplication1.Simple" %> <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Simple</title> </head> <body> <form id="form1" runat="server"> <asp:ToolkitScriptManager runat="Server" /> <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="60" Rows="8" runat="server" /> <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server" /> </form> </body> </html> This page has the following three controls: ToolkitScriptManager – The ToolkitScriptManager renders all of the scripts required by the Ajax Control Toolkit. TextBox – The TextBox control is a standard ASP.NET TextBox which is set to display multiple lines (a TextArea instead of an Input element). HtmlEditorExtender – The HtmlEditorExtender is set to extend the TextBox control. You can use the standard TextBox Text property to read the rich text entered into the TextBox control on the server. Lightweight and HTML5 The HTML Editor Extender works on all modern browsers including the most recent versions of Mozilla Firefox (Firefox 5), Google Chrome (Chrome 12), and Apple Safari (Safari 5). Furthermore, the HTML Editor Extender is compatible with Microsoft Internet Explorer 6 and newer. The HTML Editor Extender is very lightweight. It takes advantage of the HTML5 ContentEditable attribute so it does not require an iframe or complex browser workarounds. If you select View Source in your browser while using the HTML Editor Extender, we hope that you will be pleasantly surprised by how little markup and script is generated by the HTML Editor Extender. Customizable Toolbar Buttons Depending on the web application that you are building, you will want to display different toolbar buttons with the HTML Editor Extender. One of the design goals of the HTML Editor Extender was to make it very easy for you to customize the toolbar buttons. Imagine, for example, that you want to use the HTML Editor Extender when accepting comments on blog posts. In that case, you might want to restrict the type of formatting that a user can display. You might want to enable a user to format text as bold or italic but you do not want the user to make any other formatting changes. 
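Before moving on to toolbar customization: since the extender round-trips its output through the TextBox, reading the rich text on the server is just an ordinary property read. Here is a minimal code-behind sketch; the btnSave button and the SaveComment helper are assumptions added for illustration, not part of the original sample:

    // Simple.aspx.cs (code-behind for the page above); btnSave is a
    // hypothetical Button placed next to the extended TextBox.
    using System;

    namespace WebApplication1
    {
        public partial class Simple : System.Web.UI.Page
        {
            protected void btnSave_Click(object sender, EventArgs e)
            {
                // The HtmlEditorExtender posts its formatted markup back
                // through the extended TextBox, so the rich HTML arrives
                // via the standard Text property.
                string html = txtComments.Text;

                SaveComment(html); // hypothetical persistence helper
            }

            private void SaveComment(string html)
            {
                // placeholder: store the submitted comment wherever appropriate
            }
        }
    }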
The following page illustrates how you can customize the HTML Editor Extender toolbar:

    <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="CustomToolbar.aspx.cs" Inherits="WebApplication1.CustomToolbar" %>
    <%@ Register TagPrefix="asp" Namespace="AjaxControlToolkit" Assembly="AjaxControlToolkit" %>
    <html>
    <head runat="server">
        <title>Custom Toolbar</title>
    </head>
    <body>
        <form id="form1" runat="server">
            <asp:ToolkitScriptManager Runat="server" />
            <asp:TextBox ID="txtComments" TextMode="MultiLine" Columns="50" Rows="10" Text="Hello <b>world!</b>" Runat="server" />
            <asp:HtmlEditorExtender TargetControlID="txtComments" runat="server">
                <Toolbar>
                    <asp:Bold />
                    <asp:Italic />
                </Toolbar>
            </asp:HtmlEditorExtender>
        </form>
    </body>
    </html>

Notice that the HTML Editor Extender in the page above has a Toolbar subtag. You can list the toolbar buttons which you want to appear within the subtag. In the case above, only the Bold and Italic buttons are displayed. Here is a complete list of the toolbar buttons currently supported by the HTML Editor Extender: Undo, Redo, Bold, Italic, Underline, StrikeThrough, Subscript, Superscript, JustifyLeft, JustifyCenter, JustifyRight, JustifyFull, InsertOrderedList, InsertUnorderedList, CreateLink, UnLink, RemoveFormat, SelectAll, UnSelect, Delete, Cut, Copy, Paste, BackgroundColorSelector, ForeColorSelector, FontNameSelector, FontSizeSelector, Indent, Outdent, InsertHorizontalRule, HorizontalSeparator. Of course, the HTML Editor Extender was designed to be extensible. You can create your own buttons and add them to the control.

Compatible with the AntiXSS Library: when using the HTML Editor Extender on a public facing website, we strongly recommend that you use the HTML Editor Extender with the AntiXSS Library. If you allow users to submit arbitrary HTML, and you don't take any action to strip out malicious markup, then you are opening your website to Cross-Site Scripting Attacks (XSS attacks). The HTML Editor Extender uses the Provider Model to support different Sanitizer Providers. The July 2011 release of the Ajax Control Toolkit ships with a single Sanitizer Provider which uses the AntiXSS library (see http://AntiXss.CodePlex.com). A Sanitizer Provider is responsible for sanitizing HTML markup by removing any malicious elements, attributes, and attribute values. For example, the AntiXss Sanitizer Provider will take the following block of HTML:

    <b><a href="javascript:doEvil()">Visit Grandma</a></b> <script>doEvil()</script>

And return the following sanitized block of HTML:

    <b><a href="">Visit Grandma</a></b>

Notice that the JavaScript href and the <script> tag are both stripped out. Be aware that there are a depressingly large number of ways to sneak evil markup into your HTML. You definitely want a sanitizer as a safety net. Before you can use the AntiXSS Sanitizer Provider, you must add three assemblies to your web application: AntiXSSLibrary.dll, HtmlSanitizationLibrary.dll, and SanitizerProviders.dll. All three assemblies are included with the CodePlex download of the Ajax Control Toolkit in the SanitizerProviders folder. 
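To see that sanitization step in isolation, here is a small sketch that feeds the malicious fragment above straight through the AntiXSS library. Treat the exact namespace and the Sanitizer.GetSafeHtmlFragment method name as assumptions to verify against the AntiXSS version you download; the expected output is the sanitized fragment quoted above:

    using System;
    using Microsoft.Security.Application; // shipped with the AntiXSS download

    class SanitizerDemo
    {
        static void Main()
        {
            // The malicious fragment from the example above.
            string evil = "<b><a href=\"javascript:doEvil()\">Visit Grandma</a></b> <script>doEvil()</script>";

            // Strips the javascript: href and the <script> block,
            // leaving only the harmless markup.
            string safe = Sanitizer.GetSafeHtmlFragment(evil);

            Console.WriteLine(safe); // expected: <b><a href="">Visit Grandma</a></b>
        }
    }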
Here’s how you modify your web.config file to use the AntiXSS Sanitizer Provider: <configuration> <configSections> <sectionGroup name="system.web"> <section name="sanitizer" requirePermission="false" type="AjaxControlToolkit.Sanitizer.ProviderSanitizerSection, AjaxControlToolkit"/> </sectionGroup> </configSections> <system.web> <compilation targetFramework="4.0" debug="true"/> <sanitizer defaultProvider="AntiXssSanitizerProvider"> <providers> <add name="AntiXssSanitizerProvider" type="AjaxControlToolkit.Sanitizer.AntiXssSanitizerProvider"></add> </providers> </sanitizer> </system.web> </configuration> You can detect whether the HTML Editor Extender is using the AntiXSS Sanitizer Provider by checking the HtmlEditorExtender SanitizerProvider property like this: if (MyHtmlEditorExtender.SanitizerProvider == null) { throw new Exception("Please enable the AntiXss Sanitizer!"); } When the SanitizerProvider property has the value null, you know that a Sanitizer Provider has not been configured in the web.config file. Because the AntiXSS library requires Full Trust, you cannot use the AntiXSS Sanitizer Provider with most shared website hosting providers. Because most shared hosting providers only support Medium Trust and not Full Trust, we do not recommend using the HTML Editor Extender with a public website hosted with a shared hosting provider. Why a New HTML Editor Control? The Ajax Control Toolkit now includes two HTML Editor controls. Why did we introduce a new HTML Editor control when there was already an existing HTML Editor? We think you will like the new HTML Editor much more than the previous one. We had several goals with the new HTML Editor Extender: Lightweight – We wanted to leverage HTML5 to create a lightweight HTML Editor. The new HTML Editor generates much less markup and script than the previous HTML Editor. Secure – We wanted to make it easy to integrate the AntiXSS library with the HTML Editor. If you are creating a public facing website, we strongly recommend that you use the AntiXSS Provider. Customizable – We wanted to make it easy for users to customize the toolbar buttons displayed by the HTML Editor. Compatibility – We wanted to ensure that the HTML Editor will work with the latest versions of the most popular browsers (including Internet Explorer 6 and higher). The old HTML Editor control is still included in the Ajax Control Toolkit and continues to live in the AjaxControlToolkit.HTMLEditor namespace. We have not modified the control and you can continue to use the control in the same way as you have used it in the past. However, we hope that you will consider migrating to the new HTML Editor Extender for the reasons listed above. Summary We’ve introduced a new Ajax Control Toolkit control with this release. I want to thank the developers and testers on the Superexpert team for the huge amount of work which they put into this control. It was a non-trivial task to build an entirely new control which has the complexity of the HTML Editor in less than 6 weeks. Please let us know what you think! We want to hear your feedback. If you discover issues with the new HTML Editor Extender control, or you have questions about the control, or you have ideas for how it can be improved, then please post them to this blog. Tomorrow starts a new sprint

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
In this post I'll illustrate the following points in detail: What is static code analysis? When to use it? Supported platforms; supported Visual Studio versions; how to use it (run Code Analysis manually, run Code Analysis automatically, run Code Analysis while checking source code in to TFS version control (TFVC), run Code Analysis as part of Team Build); understand the Code Analysis results and learn how to fix them; create your custom rule set; Q & A; references.

What is static code analysis? The Static Code Analysis feature of Visual Studio performs static analysis on code to help developers identify potential design, globalization, interoperability, performance, security and other categories of potential problems, according to Microsoft rules that mainly target best practices in writing code. A large set of those rules is included with Visual Studio, grouped into categories targeting specific coding issues such as security, design, interoperability and globalization. Static here means analyzing the source code without executing it; this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage.

When to use? Running the code analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. In addition, code analysis can find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.

Supported platforms: .NET Framework, native (C and C++), database applications.

Supported Visual Studio versions: all versions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check Feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

How to use? Code Analysis can be run manually at any time from within the Visual Studio IDE, or set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

Run Code Analysis manually: to run code analysis manually on a project, on the Analyze menu click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

Run Code Analysis automatically: to run code analysis each time you build a project, select Enable Code Analysis on Build on the project's property page.

Run Code Analysis while checking source code in to TFS version control (TFVC): Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development, through check-in policies: rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in. 
(This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of the Project Administrators or Project Collection Administrators group. For more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx

In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or uncheck the policy options based on the configuration you need, as illustrated below:

Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.

Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.

Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in. Check the Code analysis rule set reference on MSDN.

What is a rule set? A rule set is a group of code analysis rules, like the example below, where Microsoft.Design is the rule set name and "Do not declare static members on generic types" is the code analysis rule. Once you have configured the analysis rules, the policy is enabled for all the team members in this project; whenever a team member checks any source code in to TFVC, the policy section will highlight the Code Analysis policy as below. TFS is a very extensible platform, so you can simply implement your own custom Code Analysis check-in policy; check this link for more details: http://msdn.microsoft.com/en-us/library/dd492668.aspx. You also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx

Run Code Analysis as part of Team Build: with Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the build definition file by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report.

Understand the Code Analysis results & learn how to fix them: now that we have gone through the Code Analysis configuration and the different ways of running it, we will go through the Code Analysis results, how to understand them and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project file properties. Let's dig into what each result item contains:

1 Check ID: the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.       
2 Title: the title of the warning message.       3 Description: a description of the problem or suggested fix. 4 File Name: the file name and the line number of the code that violates the code analysis rule set. 5 Category: the code analysis category for this error. 6 Warning/Error: depends on how you configure it in the rule set; the default is the Warning level. 7 Action: Copy copies the warning information to the clipboard. Create Work Item: if you're connected to Team Foundation Server you can create a work item (most probably a Task or a Bug) and assign it to a developer to fix a certain code analysis warning. Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code, or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning; this makes the suppression more discoverable (a minimal sketch of such an attribute appears just before the references below). In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project; this can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs still targets the method that generated the warning; it does not suppress the warning globally.       Visual Studio makes it very easy to fix a code analysis warning: if you are not sure how to fix it, all you have to do is click the Check Id hyperlink and you'll be directed to MSDN (online or a local copy, depending on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

Create a Custom Code Analysis Rule Set: the Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

Q & A
Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well.
Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013. 
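As promised above, here is a minimal sketch of what the in-source suppression generated by the Suppress Message action looks like. The method, rule occurrence and justification text are illustrative assumptions rather than output from a real analysis run; CA1062 ("Validate arguments of public methods") is a standard rule in the Microsoft.Design category:

    using System.Diagnostics.CodeAnalysis;

    public static class StringHelpers
    {
        // Produced by "Suppress Message -> In Source"; Category and CheckId
        // identify the rule, and Justification records why it is ignored.
        [SuppressMessage("Microsoft.Design",
                         "CA1062:ValidateArgumentsOfPublicMethods",
                         Justification = "Inputs are validated by the single caller.")]
        public static int CountWords(string text)
        {
            return text.Split(' ').Length;
        }
    }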
References A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext What is New in Code Analysis for Visual Studio 2013 http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx Analyze the code quality of Windows Store apps using Visual Studio static code analysis http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx Originally posted at "Hosam Kamel| Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Issue 15: Oracle PartnerNetwork Exchange @ Oracle OpenWorld

    - by rituchhibber
ORACLE FOCUS: Oracle PartnerNetwork Exchange @ Oracle OpenWorld
Sylvie Michou, Senior Director, Partner Marketing & Communications and Strategic Programs

RESOURCES -- Oracle OpenWorld 2012 | Oracle PartnerNetwork Exchange @ OpenWorld | Oracle PartnerNetwork Exchange @ OpenWorld Registration | Oracle PartnerNetwork Exchange Specialization Test Fest | Oracle OpenWorld Schedule Builder | Oracle OpenWorld Promotional Toolkit for Partners | Oracle Partner Events | Oracle Partner Webcasts | Oracle EMEA Partner News

If you are attending our forthcoming Oracle OpenWorld 2012 conference in San Francisco from 30 September to 4 October, you will discover a new dedicated programme of keynotes and sessions tailored especially for you, our valued partners. Oracle PartnerNetwork Exchange @ OpenWorld has been created to enhance the opportunities for you to learn from and network with Oracle executives and experts. The programme also provides more informal opportunities than ever throughout the week to meet up with the people who are most important to your business: customers, prospects, colleagues and the Oracle EMEA Alliances & Channels management team. Oracle remains fully focused on building the industry's most admired partner ecosystem, which today spans over 25,000 partners. This new OPN Exchange programme offers an exciting change of pace for partners throughout the conference. Now it will be possible to enjoy a fully-integrated, partner-dedicated session schedule throughout the week, as well as key social events such as the Sunday night Welcome Reception, networking lunches from Monday to Thursday at the Howard Street Tent, and a fantastic closing event on the last Thursday afternoon. In addition to the regular Oracle OpenWorld conference schedule, if you have registered for the Oracle PartnerNetwork Exchange @ OpenWorld programme, you will be invited to attend a much anticipated global partner keynote presentation, plus more than 40 conference sessions aimed squarely at what's most important to you, as partners. Prominent topics for discussion will include: Oracle technologies and roadmaps and how they fit with partners' business plans; business development; regional distinctions in business practices; and much more. Each session will provide plenty of food for thought ahead of the numerous networking opportunities throughout the week, encouraging the knowledge exchange with Oracle executives, customers, prospects, and colleagues that will make this conference of even greater value for you. At Oracle we always work closely with our partners to deliver solution offerings that improve business value, simplify the IT experience and drive innovation and efficiencies for joint customers. The most important element of our new OPN Exchange is content that helps you get more from technology investments, more from your peer-to-peer connections, and more from your interactions with customers. To this end we've created some partner-specific tools which can be used by OPN members ahead of the conference itself. Crucially, a comprehensive Content Catalog already lists and organises details of every OPN Exchange session, speaker, exhibitor, demonstration and related materials. This Content Catalog can be used by all our partners to identify interesting content that you can add to your own personalised Oracle OpenWorld Schedule Builder, allowing more effective planning and pre-enrolment for vital sessions. 
There are numerous highlights that you will definitely want to include in those personal schedules. On Sunday morning, 30 September we will start the week with partner dedicated OPN Exchange sessions, following our Global Partner Keynote at 13:00 with Judson Althoff, SVP, Worldwide Alliances & Channels and Embedded Sales and senior executives, giving insight into Oracle's partner vision, strategy, and resources—all designed to help build and strengthen market opportunities for you. This will be followed by a number of OPN Exchange general sessions, the Oracle OpenWorld Opening Keynote with Larry Ellison, CEO, Oracle and concluded with the OPN Exchange AfterDark Welcome Reception, starting at 19:30 at the Metreon. From Monday 1 to Thursday 4 October, you can attend the OPN Exchange sessions that are most relevant to your business today and over the coming year. Oracle's top product and sales leaders will be on hand to discuss Oracle's strategic direction in 40+ targeted and in-depth sessions focussing on critical success factors to develop your business. Oracle's dedication to innovation, specialization, enablement and engineering provides Oracle partners with a huge opportunity to create new services and solutions, differentiate themselves and deliver extreme value to joint customers across the globe. Oracle will even be helping over 1000 partners to earn OPN Specialization certification during the Oracle OpenWorld OPN Exchange Test Fest, which will be providing all the study materials and exams required to drive Specialization for free at the conference. You simply need to check the list of current certification tracks available, and make sure you pre-register to reserve a seat in one of the ten sessions being offered free to OPN Exchange registered attendees. And finally, let's not forget those all-important networking opportunities, which can so often provide partners with valuable long-term alliances as well as exciting new business leads. The Oracle PartnerNetwork Lounge, located at Moscone South, exhibition hall, room 100 is the place where partners can meet formally or informally with colleagues, customers, prospects, and other industry professionals. OPN Specialized partners with OPN Exchange passes can also visit the OPN Video Blogging room to record and share ideas, and at the OPN Information Station you will find consultants available to answer your questions. "For the first time ever we will have a full partner conference within OpenWorld. OPN Exchange @ OpenWorld will kick-off on the first Sunday and run the entire week. We'll have over 40 sessions throughout that time and partners will hear from our top development executives, with special sessions dedicated to partnering throughout. It's going to be a phenomenal event, and we look forward to seeing our partners there." Judson Althoff, SVP, Oracle Worldwide Alliances & Channels and Embedded Sales So if you haven't done so already, please register for Oracle PartnerNetwork Exchange @ OpenWorld today or add OPN Exchange to your existing registration for just $100 through My Account. 
And if you have any further questions regarding partner activities at Oracle OpenWorld, please don't hesitate to contact the Oracle PartnerNetwork team at [email protected], who will be on hand to share the very latest information about:

Oracle's SPARC Superclusters: the latest Engineered Systems from Oracle, delivering radically improved performance, faster deployment and greatly reduced operational costs for mixed database and enterprise application consolidation
Oracle's SPARC T4 servers: with the newly developed T4 processor and Oracle Solaris providing up to five times the single-threaded performance and better overall system throughput for expanded application versatility
Oracle Database Appliance: a new way to take advantage of the world's most popular database, Oracle Database 11g, in a single, easy-to-deploy and manage system. It's a complete package engineered to deliver simple, reliable and affordable database services to small and medium size businesses and departmental systems. All hardware and software components are supported together and offer customers unique pay-as-you-grow software licensing to quickly scale from two to 24 processor cores without incurring the costs and downtime usually associated with hardware upgrades
Oracle Exalogic: the world's only integrated cloud machine, featuring server hardware and middleware software engineered together for maximum performance with minimum set-up and operational cost
Oracle Exadata Database Machine: the only database machine that provides extreme performance for both data warehousing and online transaction processing (OLTP) applications, making it the ideal platform for consolidating onto grids or private clouds. It is a complete package of servers, storage, networking and software that is massively scalable, secure and redundant
Oracle Sun ZFS Storage Appliances: providing enterprise-class NAS performance, price-performance, manageability and TCO by combining third-generation software with high-performance controllers, flash-based caches and disks
Oracle Pillar Axiom Quality-of-Service: confidently consolidate storage for multiple applications into a single datacentre storage solution
Oracle Solaris 11: delivering secure enterprise cloud deployments with the ability to run hundreds of virtual applications with no overhead, co-engineered with other Oracle software products to provide the highest levels of security, manageability and performance
Oracle Enterprise Manager 12c: Oracle's integrated enterprise IT management product, providing the industry's only complete, integrated and business-driven enterprise cloud management solution
Oracle VM 3.0: the latest release of Oracle's server virtualisation and management solution, helping to move datacentres beyond server consolidation to improve application deployment and management.

Register today and ensure your place at the Extreme Performance Tour! Extreme Performance Tour events are free to attend, but places are limited. To make sure that you don't miss out, please visit Oracle's Extreme Performance Tour website, select the city that you'd be interested in attending an event in, and then click on the 'Register Now' button for that city to secure your interest. Each individual city page also contains more in-depth information about your local event, including logistics, agenda and maybe even a preview of VIP guest speakers.

-- Oracle OpenWorld 2010

Whether you attended Oracle OpenWorld 2009 or not, don't forget to save the date now for Oracle OpenWorld 2010. 
The event will be held a little earlier next year, from 19th-23rd September, so please don't miss out. With thousands of sessions and hundreds of exhibits and demos already lined up, there's no better place to learn how to optimise your existing systems, get an inside line on upcoming technology breakthroughs, and meet with your partner peers, Oracle strategists and even the developers responsible for the products and services that help you get better results for your end customers. Register Now for Oracle OpenWorld 2010! Perhaps you are interested in learning more about Oracle OpenWorld 2010, but don't wish to register at this time? Great! Please just enter your contact information here and we will contact you at a later date. How to Exhibit at Oracle OpenWorld 2010 | Sponsorship Opportunities at Oracle OpenWorld 2010 | Advertising Opportunities at Oracle OpenWorld 2010

    Read the article

  • CodePlex Daily Summary for Friday, March 05, 2010

    CodePlex Daily Summary for Friday, March 05, 2010New Projects.svn Folders Cleanup Tool: dotSVN Cleanup is a tool that allows you to remove the .svn folders . Just click, browse, say abracadabra ...and the magic is done. Have fun with...Accord: The Accord framework creates an easy we to integrate any Dependency Injection framework into your project, while abstracting the details of your im...Asp.net MVC Lab: Try asp.net mvc outASP.NET Themes management with Webforms: The provided source is an example for how to use themes in ASP.NET Webforms. this source is the "up to date" support for the article I wroteB&W Port Scanner: B&W Port Scanner (formerly Net Inspector) is a fast TCP Port scan utility. The main idea is support of customizable operations to be performed f...BizTalk SWAT - Simple Web Activity Tracker: This is a web based version of BizTalk HAT. The concept is designed to be able to share and enable sharing of orchestration info easily. Some of th...C# Linear Hash Table: A C# dictionary-like implementation of a linear hash table. It is more memory efficiant than the .NET dictionary, and also almost as fast. NOTE: On...DBF Import Export Wizard: DBF Import Export Wizard is a tool for anyone needing to import DBF files into SQL Server or to export SQL Server tables to a DBF file. This proje...Domain as XML - Driven Development: Visual Studio Code Samples: Domain as XML - Driven Development: Visual Studio Code SamplesEasyDownload: This application allows to manage downloads handling an stack of files and several useful configurationsEos2: .FlightTickets: This application allows to buy flight ticketsFotofly PhotoViewer: A Silverlight control that uses the Fotofly metadata library to show the people in a photo (using Windows Live Photo Gallery People metadata) and a...Fujiy source code: Source code examplesGameSet: This application allows to play games with distributed users.Injectivity (Dependency Injection): Injectivity is a dependency injection framework (written in C#) with a strong focus on the ease of configuration and performance. Having been writt...Inventory: Keep track of inputs, materias and salesLoanTin.Com Source Code: LoanTin.Com - a Social Networking Website as same as Tumblr.com, based on source code of Loantiner Project, allow anyone can share anything to anyo...mysln: my solutions.NumTextBox: TextBox控件重写 之NumTextBox,主要实现的功能是,只允许输入数字,或String,Numeric,Currency,Decimal,Float,Double,Short,Int,Long 修改自:http://www.codeproject.com/KB/edit/num...Quick Performance Monitor: This small utility helps to monitor performance counters without using the full blown perfmon tool from Windows. It supports a number of command li...Runo: Runo ResearchSales: This application allows to manage a hardware storeScrewWiki Form Auth Provider: Enables your ASP.NET site to use Forms Authentication to integrate with your ScrewWiki. User management is performed on a parent site, and cookie i...SDS: Scientific DataSet library and tools: The SDS library makes it easy for .Net developers to read, write and share scalars, vectors, matrices and multidimensional grids which are very com...ShapeSweeper: Minesweeper-like game for the Zune HD. 
Each hidden object has three properties to discover--location, color, and shape--and all three must be corre...SilverlightExcel: an Excel file viewer in Silverlight 4: SilverlightExcel is a Silverlight application allowing you to open and view Excel files and also create graphs.sPWadmin: pwAdmin is an Web Interface based on JSP that uses the PW-Java API to control an PW-Server.Video Player control in Silverlight: A control for playing video in Silverlight 4 with chapters on timeline control. This player will be easily skinnable and customizable. More Featur...XNA Light Pre Pass Renderer: A demo/sample that shows how to write a light pre pass renderer in XNA.Zimms: Collaboration Site for friends, a code depot, and scratch padNew Releases.svn Folders Cleanup Tool: dotSVN Cleanup Tool: dotSVN Cleanup Tool executableAccord: Alpha: Initial build of the Accord framework.AcPrac: AcPrac Ver 0.1: The first version of AcPrac. It is not fully functional, but rather a version to get the bugs out. Please report all bugs.ASP.NET: ASP.NET Browser Definition Files: This download contains: ASP.NET 4 Browser Definition Files -- You can use the new ASP.NET 4 browser definition files with earlier versions of ASP....B&W Port Scanner: Black`n`White Port Scanner 1.0: B&W Port Scanner 1.0 Final Release Date: 03.03.2010 Black`n`WhiteBizTalk SWAT - Simple Web Activity Tracker: BizTalk SWAT: This is a web based version of BizTalk HAT. The concept is designed to be able to share and enable sharing of orchestration info easily. It uses th...BTP Tools: CSB+CUV+HCSB dict files 2010-03-04: 5. is now missing a space between the Strong’s number and the Count: >CSB Translation: 圣所 7, 至圣所G39+G394 it should be: CSB Translation: 圣所 7, 至圣所G...C# Linear Hash Table: Linear Hash Table: First working version of the Linear Hash Table.Cassiopeia: WinTools 1.0 beta: First ReleaseComposure: Caliburn-44007-trunk-vs2010.net40: This is a very simple conversion of the Caliburn trunk (rev 44007) for use in Visual Studio 2010 RC1 built against .NET40. Because the conversion w...Cover Creator: CoverCreator 1.3.0: English and Polish version. Functionality to add image to the front page. Load / save covers.DBF Import Export Wizard: DBF Import Export Wizard Source Code: Version 0.1.0.3DBF Import Export Wizard: DBF_Import_Export_Wizard Setup 0.1.0.3: Zip file contains Setup.exeESB Toolkit Extensions: Tellago BizTalk ESB 2.0 Toolkit Extensions v0.2: Windows Installer file that installs Library on a BizTalk ESB 2.0 system. This Install automatically configures the esb.config to use the new compo...Fotofly PhotoViewer: Fotofly Photoview v0.1: The first public release. Based on a Silverlight application I have been using for over a year at www.tassography.com. This version uses Fotofly v0...HPC with GPUs applied to CG: Cuda Soft Bodies simulation: Cuda src for soft bodiesHPC with GPUs applied to CG: Full Soft Bodies src: full src code for soft bodies simulationInjectivity (Dependency Injection): 2.8.166.2135: Release 2.8.166.2135 of the Injectivity dependency injection framework.Line Counter: 1.5 (Code Outline Preview): This version contains preview of the code outline feature, you can now view C# code outline within Line Counter. 
Note that the code outline now onl...Micajah Mindtouch Deki Wiki Copier: MicajahWikiCopier: You should use the following line arguments: WikiCopier.exe "http://oldwikiwithdata.wik.is/@api/deki" "login" "password" "http://newwiki.somename.l...ncontrols: Alpha 0.4.0.1: Added some example on the Console Project.NumTextBox: NumTextBox初始版本: TextBox控件重写 之NumTextBox,主要实现的功能是,只允许输入数字,或String,Numeric,Currency,Decimal,Float,Double,Short,Int,Long 此为初始版本PSCodeplex: PS CodePlex 0.2: PS CodePlex 0.2 has some breaking changes to the parameters. A few of the parameters are renamed and a few are made as switch parameters. Add-Rele...Quick Performance Monitor: QPerfMon First release - Version 1.0.0: The first release of the utility.RapidWebDev - .NET Enterprise Software Development Infrastructure: ProductManagement Quick Sample 0.2: This is a sample product management application to demonstrate how to develop enterprise software in RapidWebDev. The glossary of the system are ro...ScrewWiki Form Auth Provider: ScrewWiki Forms Authentication: Initial ReleaseSee.Sharper: See.Sharper.Docs-1.10.3.4: HTML documentation (including Doxygen project)See.Sharper: See.Sharper-1.10.3.4: Solution (Source files, debug and release binaries)Solar.Generic: Solar.Generic 0.8.0.0 Beta (Revised, Renamed): Solar.Generic 0.8.0.0 (Revised & Renamed) Renamed project from Solar.Commons to Solar.Generic. Project solution file is now in format of Visual ...Solar.Security: Solar.Security 1.1.0.0: Performed several major refactorings of code base. Stripped In-Memory implementation of IConfiguration interface of transactional behavior due to...sPWadmin: pwAdmin v0.7: -Star System Simulator: Star System Simulator 2.3: Changes in this release: Fixed several localisation issues. Features in this release: Model star systems in 3D. Euler-Cromer method. Improved...SysI: sysi, stable and ready: This time for sure.TheWhiteAmbit: TheWhiteAmbit - Demo: Two little demos demonstrating: - fast realtime raytracing - generating bent normals for shading (CUDA capable GPU needed = nVidia GeForce >8x00)VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 22 Beta: Build 22 (beta) New: Visual Studio 2010 RC support (VsTortoise for Visual Studio 2010 RC screenshots) New: VsTortoise integrates in to Solution E...WinMergeFS: WinMergeFS 0.1.42128alpha: WinMergeFS provides AuFS functionality for windows. With WinMergeFS users can mount multiple directories into a virtual drive. Plugin based root se...WSDLGenerator: WSDLGenerator 0.0.0.2: - Bugs fixed - Code refactored - Added support for custom typesXNA Light Pre Pass Renderer: LightPrePassRendererXNA: Zipped source code for the light pre pass renderer made with XNA.Most Popular ProjectsMetaSharpRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)LiveUpload to FacebookASP.NETMicrosoft SQL Server Community & SamplesMost Active ProjectsUmbraco CMSRawrBlogEngine.NETSDS: Scientific DataSet library and toolsMapWindow GISpatterns & practices – Enterprise LibraryjQuery Library for SharePoint Web ServicesIonics Isapi Rewrite FilterMDT Web FrontEndDiffPlex - a .NET Diff Generator

    Read the article

  • Silverlight for Windows Embedded tutorial (step 6)

    - by Valter Minute
In this tutorial step we will develop a very simple clock application that may be used as a screensaver on our devices and will allow us to discover a new feature of Silverlight for Windows Embedded (transforms) and how to use an "old" feature of Windows CE (timers) inside a Silverlight for Windows Embedded application. Let's start with some XAML, as usual:

    <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="640" Height="480" FontSize="18" x:Name="Clock">
      <Canvas x:Name="LayoutRoot" Background="#FF000000">
        <Grid Height="24" Width="150" Canvas.Left="320" Canvas.Top="234" x:Name="SecondsHand" Background="#FFFF0000">
          <TextBlock Text="Seconds" TextWrapping="Wrap" Width="50" HorizontalAlignment="Right" VerticalAlignment="Center" x:Name="SecondsText" Foreground="#FFFFFFFF" TextAlignment="Right" Margin="2,2,2,2"/>
        </Grid>
        <Grid Height="24" x:Name="MinutesHand" Width="100" Background="#FF00FF00" Canvas.Left="320" Canvas.Top="234">
          <TextBlock HorizontalAlignment="Right" x:Name="MinutesText" VerticalAlignment="Center" Width="50" Text="Minutes" TextWrapping="Wrap" Foreground="#FFFFFFFF" TextAlignment="Right" Margin="2,2,2,2"/>
        </Grid>
        <Grid Height="24" x:Name="HoursHand" Width="50" Background="#FF0000FF" Canvas.Left="320" Canvas.Top="234">
          <TextBlock HorizontalAlignment="Right" x:Name="HoursText" VerticalAlignment="Center" Width="50" Text="Hours" TextWrapping="Wrap" Foreground="#FFFFFFFF" TextAlignment="Right" Margin="2,2,2,2"/>
        </Grid>
      </Canvas>
    </UserControl>

This XAML file defines three grid panels, one for each hand of our clock (we are implementing an analog clock using one of the most advanced technologies of the digital world… how cool is that?). Inside each hand we put a TextBlock that will be used to display the current hour, minute and second inside the dial (you can't do that on plain old analog clocks, but it looks nice). As usual we use XAML2CPP to generate the boring part of our code. We declare a class named "Clock" that derives from the TClock template that XAML2CPP has declared for us.

    class Clock : public TClock<Clock>
    {
    ...
    };

Our WinMain function is more or less the same we used in all the previous samples. It initializes the XAML runtime, creates an instance of our class, initializes it and shows it as a dialog:

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
    {
        if (!XamlRuntimeInitialize())
            return -1;

        HRESULT retcode;

        IXRApplicationPtr app;
        if (FAILED(retcode=GetXRApplicationInstance(&app)))
            return -1;

        Clock clock;

        if (FAILED(clock.Init(hInstance,app)))
            return -1;

        UINT exitcode;

        if (FAILED(clock.GetVisualHost()->StartDialog(&exitcode)))
            return -1;

        return exitcode;
    }

Silverlight for Windows Embedded provides a lot of features to implement our UI, but it does not provide timers. How can we update our clock if we don't have a timer feature? We just use plain old Windows timers, as we do in "regular" Windows CE applications! To use a timer in WinCE we should declare an id for it:

    #define IDT_CLOCKUPDATE 0x12341234

We also need an HWND that will be used to receive WM_TIMER messages. Our Silverlight for Windows Embedded page is "hosted" inside a GWES window, and we can retrieve its handle using the GetContainerHWND function of our VisualHost object. 
Let's see how this is implemented inside our Clock class' Init method:

    HRESULT Init(HINSTANCE hInstance,IXRApplication* app)
    {
        HRESULT retcode;

        if (FAILED(retcode=TClock<Clock>::Init(hInstance,app)))
            return retcode;

        // create the timer used to update the clock
        HWND clockhwnd;

        if (FAILED(GetVisualHost()->GetContainerHWND(&clockhwnd)))
            return -1;

        timer=SetTimer(clockhwnd,IDT_CLOCKUPDATE,1000,NULL);
        return 0;
    }

We use SetTimer to create a new timer, and GWES will send a WM_TIMER to our window every second, giving us a chance to update our clock. That sounds great… but how could we handle the WM_TIMER message if we didn't implement a window procedure for our window? We have to move a step back and look at how a visual host is created. This code is generated by XAML2CPP and is inside xaml2cppbase.h:

    virtual HRESULT CreateHost(HINSTANCE hInstance,IXRApplication* app)
    {
        HRESULT retcode;
        XRWindowCreateParams wp;

        ZeroMemory(&wp, sizeof(XRWindowCreateParams));
        InitWindowParms(&wp);

        XRXamlSource xamlsrc;

        SetXAMLSource(hInstance,&xamlsrc);
        if (FAILED(retcode=app->CreateHostFromXaml(&xamlsrc, &wp, &vhost)))
            return retcode;

        if (FAILED(retcode=vhost->GetRootElement(&root)))
            return retcode;
        return S_OK;
    }

As you can see, the CreateHostFromXaml function of IXRApplication accepts a structure named XRWindowCreateParams that controls how the "plain old" GWES window is created by the runtime. This structure is initialized inside the InitWindowParms method:

    // Initializes Windows parameters, can be overridden in the user class to change its appearance
    virtual void InitWindowParms(XRWindowCreateParams* wp)
    {
        wp->Style = WS_OVERLAPPED;
        wp->pTitle = windowtitle;
        wp->Left = 0;
        wp->Top = 0;
    }

This method sets up the window style, title and position. But the XRWindowCreateParams structure also contains other fields and, since the function is declared as virtual, we can initialize them inside our own version of InitWindowParms:

    // add hook procedure to the standard windows creation parms
    virtual void InitWindowParms(XRWindowCreateParams* wp)
    {
        TClock<Clock>::InitWindowParms(wp);

        wp->pHookProc=StaticHostHookProc;
        wp->pvUserParam=this;
    }

This method calls the base class implementation (useful to avoid having to re-write some code; did I tell you that I'm quite lazy?) and then initializes the pHookProc and pvUserParam members of the XRWindowCreateParams structure. Those members allow us to install a "hook" procedure that will be called each time the GWES window "hosting" our Silverlight for Windows Embedded UI receives a message. We can declare a hook procedure inside our Clock class:

    // static hook procedure
    static BOOL CALLBACK StaticHostHookProc(VOID* pv,HWND hwnd,UINT Msg,WPARAM wParam,LPARAM lParam,LRESULT* pRetVal)
    {
    ...
    }

You should notice two things here. First, the function is declared as static. This is required because a non-static function has a "hidden" parameter, that is, the this pointer of our object. Having an extra parameter is not allowed for the type defined for the pHookProc member of the XRWindowCreateParams struct, so we must implement our hook procedure as static. But in a static procedure we will not have a this pointer. How can we access the data members of our class? Here's the second thing to notice. We also initialized the pvUserParam member of the XRWindowCreateParams struct. We set it to our this pointer. This value will be passed as the first parameter of the hook procedure. 
In this way we can retrieve our this pointer and use it to call a non-static version of our hook procedure:

    // static hook procedure
    static BOOL CALLBACK StaticHostHookProc(VOID* pv,HWND hwnd,UINT Msg,WPARAM wParam,LPARAM lParam,LRESULT* pRetVal)
    {
        return ((Clock*)pv)->HostHookProc(hwnd,Msg,wParam,lParam,pRetVal);
    }

Inside our non-static hook procedure we have access to our this pointer and we are able to update our clock:

    // hook procedure (handles timers)
    BOOL HostHookProc(HWND hwnd,UINT Msg,WPARAM wParam,LPARAM lParam,LRESULT* pRetVal)
    {
        switch (Msg)
        {
        case WM_TIMER:
            if (wParam==IDT_CLOCKUPDATE)
                UpdateClock();
            *pRetVal=0;
            return TRUE;
        }
        return FALSE;
    }

The UpdateClock member function will update the text inside our TextBlocks and rotate the hands to reflect the current time:

    // updates the hands' positions and labels
    HRESULT UpdateClock()
    {
        SYSTEMTIME time;
        HRESULT retcode;

        GetLocalTime(&time);

        // updates the text fields
        TCHAR timebuffer[32];

        _itow(time.wSecond,timebuffer,10);
        SecondsText->SetText(timebuffer);

        _itow(time.wMinute,timebuffer,10);
        MinutesText->SetText(timebuffer);

        _itow(time.wHour,timebuffer,10);
        HoursText->SetText(timebuffer);

        if (FAILED(retcode=RotateHand(((float)time.wSecond)*6-90,SecondsHand)))
            return retcode;

        if (FAILED(retcode=RotateHand(((float)time.wMinute)*6-90,MinutesHand)))
            return retcode;

        if (FAILED(retcode=RotateHand(((float)(time.wHour%12))*30-90,HoursHand)))
            return retcode;

        return S_OK;
    }

The function retrieves the current time, converts hours, minutes and seconds to strings and displays those strings inside the three TextBlocks that we put inside our clock hands. Then it rotates the hands to position them at the right angle (angles are in degrees, and we have to subtract 90 degrees because 0 degrees means horizontal in Silverlight for Windows Embedded, while a clock's 0 is usually at the top of the dial). The code of the RotateHand function uses transforms to rotate our clock hands on the screen:

    // rotates a hand
    HRESULT RotateHand(float angle,IXRFrameworkElement* Hand)
    {
        HRESULT retcode;
        IXRRotateTransformPtr rotatetransform;
        IXRApplicationPtr app;

        if (FAILED(retcode=GetXRApplicationInstance(&app)))
            return retcode;

        if (FAILED(retcode=app->CreateObject(IID_IXRRotateTransform,&rotatetransform)))
            return retcode;

        if (FAILED(retcode=rotatetransform->SetAngle(angle)))
            return retcode;

        if (FAILED(retcode=rotatetransform->SetCenterX(0.0)))
            return retcode;

        float height;

        if (FAILED(retcode=Hand->GetActualHeight(&height)))
            return retcode;

        if (FAILED(retcode=rotatetransform->SetCenterY(height/2)))
            return retcode;

        if (FAILED(retcode=Hand->SetRenderTransform(rotatetransform)))
            return retcode;

        return S_OK;
    }

It creates an IXRRotateTransform object and sets its rotation angle and origin (the default origin is at the top-left corner of our Grid panel; we move it to the vertical center so that the hand rotates around a single point, in a more "clock-like" way). Then we can apply the transform to our UI object using SetRenderTransform. Every UI element (derived from IXRFrameworkElement) can be rotated! And, using different subclasses of IXRTransform, elements can also be moved, scaled, skewed and distorted in many ways. You can also concatenate multiple transforms and apply them at once using an IXRTransformGroup object. The XAML engine uses vector graphics, so objects will not look "pixelated" when they are rotated or scaled. 
As usual you can download the code here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/Clock.zip If you read up to (down to?) this point you seem to be interested in Silverlight for Windows Embedded. If you want me to discuss some specific topic, please feel free to point it out in the comments! Technorati Tags: Silverlight for Windows Embedded,Windows CE

    Read the article

  • CodePlex Daily Summary for Tuesday, November 15, 2011

CodePlex Daily Summary for Tuesday, November 15, 2011 Popular ReleasesTHE NVL Maker: The NVL Maker Ver 3.10 (the Chinese release notes were garbled in the original encoding; the surviving links are http://code.google.com/p/nvlmaker-wizard/ and http://kcddp.keyfc.net/bbs/viewthread.php?tid=1374&extra=page%3D1)CreateHandouts: Latest Version: Latest VersionSQL Monitor - tracking sql server activities: SQLMon 4.1 alpha2: 1. improved object search, escape special characters, support search histories, and remember search option. 2. allow user to set connection time out. 3. allow user to drag & drop sql text or file to editors.SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with following changes since last version: Added "Wake On LAN" action. WOL.EXE is now included. Added new action "Get all active advertisements" to list all machine based advertisements on remote computers. Added new action "Get all active user advertisements" to list all user based advertisements for logged on users on remote computers. Added config.ini setting "enablePingTest" to control whether ping test is ru...Windows Azure SDK for PHP: Windows Azure SDK for PHP v4.0.4: INSTALLATION Windows Azure SDK for PHP requires no special installation steps. Simply download the SDK, extract it to the folder you would like to keep it in, and add the library directory to your PHP include_path. INSTALLATION VIA PEAR Maarten Balliauw provides an unofficial PEAR channel via http://www.pearplex.net. Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPAzure Or if you've already installed PHPAzure before: pear upgrade p...QuickGraph, Graph Data Structures And Algorithms for .Net: 3.6.61116.0: Portable library build that allows to use QuickGraph in any .NET environment: .net 4.0, silverlight 4.0, WP7, Win8 Metro apps.Devpad: 4.7: Whats new for Devpad 4.7: New export to Rich Text New export to FlowDocument Minor Bug Fix's, improvements and speed upsWeapsy: 0.4.1 Alpha: Edit Text bug fixedDesktop Google Reader: 1.4.2: This release removes the like and the broadcast buttons as Google Reader stopped supporting them (no, we don't like this decision...) 
Additionally and to have at least a small plus: the login window now automatically logs you in if you stored username and password (no more extra click needed) Finally added WebKit .NET to the about window and removed Awesomium MD5-Hash: 5fccf25a2fb4fecc1dc77ebabc8d3897 SHA-Hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdbFluent Validation for .NET: 3.2: Changes since 3.1: Fixed issue #7084 (NotEmptyValidator does not work with EntityCollection<T>) Fixed issue #7087 (AbstractValidator.Custom ignores RuleSets and always runs) Removed support for WP7 for now as it doesn't support co/contravariance without crashing.RDRemote: Remote Desktop remote configurator V 1.0.0: Remote Desktop remote configurator V 1.0.0Rawr: Rawr 4.2.7: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...VidCoder: 1.2.2: Updated Handbrake core to svn 4344. Fixed the 6-channel discrete mixdown option not appearing for AAC encoders. Added handling for possible exceptions when copying to the clipboard, added retries and message when it fails. Fixed issue with audio bitrate UI not appearing sometimes when switching audio encoders. Added extra checks to protect against reported crashes. Added code to upgrade encoding profiles on old queued items.Media Companion: MC 3.422b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) TV Show Resolutions... Made the TV Shows folder list sorted. Re-visibled 'Manually Add Path' in Root Folders. Sorted list to process during new tv episode search Rebuild Movies now processes thru folders alphabetically Fix for issue #208 - Display Missing Episodes is not popu...DotSpatial: DotSpatial Release Candidate 1 (1.0.823): Supports loading extensions using System.ComponentModel.Composition. DemoMap compiled as x86 so that GDAL runs on x64 machines. How to: Use an Assembly from the WebBe aware that your browser may add an identifier to downloaded files which results in "blocked" dll files. You can follow the following link to learn how to "Unblock" files. Right click on the zip file before unzipping, choose properties, go to the general tab and click the unblock button. http://msdn.microsoft.com/en-us/library...XPath Visualizer: XPathVisualizer v1.3 Latest: This is v1.3.0.6 of XpathVisualizer. This is an update release for v1.3. These workitems have been fixed since v1.3.0.5: 7429 7432 7427MSBuild Extension Pack: November 2011: Release Blog Post The MSBuild Extension Pack November 2011 release provides a collection of over 415 MSBuild tasks. 
A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GU...Extensions for Reactive Extensions (Rxx): Rxx 1.2: What's NewRelated Work Items Please read the latest release notes for details about what's new. Content SummaryRxx provides the following features. See the Documentation for details. Many IObservable<T> extension methods and IEnumerable<T> extension methods. Many useful types such as ViewModel, CommandSubject, ListSubject, DictionarySubject, ObservableDynamicObject, Either<TLeft, TRight>, Maybe<T> and others. Various interactive labs that illustrate the runtime behavior of the extensio...Facebook C# SDK: v5.3.2: This is a RTW release which adds new features and bug fixes to v5.2.1. Query/QueryAsync methods use graph api instead of legacy rest api. removed dependency from Code Contracts enabled Task Parallel Support in .NET 4.0+ (experimental) added support for early preview for .NET 4.5 (binaries not distributed in codeplex nor nuget.org, will need to manually build from Facebook-Net45.sln) added additional method overloads for .NET 4.5 to support IProgress<T> for upload progress added ne...Delete Inactive TS Ports: List and delete the Inactive TS Ports: UPDATEAdded support for windows 2003 servers and removed some null reference errors when the registry key was not present List and delete the Inactive TS Ports - The InactiveTSPortList.EXE accepts command line arguments The InactiveTSPortList.Standalone.WithoutPrompt.exe runs as a standalone exe without the need for any command line arguments.New ProjectsAFNC: testArithmetics: arithmetics for silverlight use note pattern by time streamAzon.Library: A collection of extensions, static helpers, AOP attributes. More will be added as the project goes on.Chat TextBlock Control: A windows phone 7.1 control that resembles those chat balloon textblocks in the SMS appDiamond Framework: Diamond Framework, a common framework for Diamond Group.DNN Social Helpers: DNN Social HelpersDragon: DragonEasy Video Cropper: A simple application to make cropping videos easy for anyone. - Automatically detects black lines - Uses FFMPEGFluent Resource Mapper: This project aims to develop a framework to assist the internationalization of software using the paradigm Convention over Configuration.Fully Observable: This project is to create an improved set of observable collections. It provides notifications for when items inside the collection change as well as when the collection itself changes.grpcmnq: no summary at allMathTool: Math tool for silverlight we plan will have three points .matrix .differential equation .equation of locusnopCommerce Buckaroo payment provider plugin: This is a payment provider plugin for the Dutch payment provider BUCKAROO. 
This plugin is developed and tested for nopCommerce version 2+ Phoenix MVVM+C Framework: Phoenix MVVM+C Framework PowerLib: PowerLib extends system .net library.RDRemote: This utility allows you to enable the Remote Desktop connections from a remote computer using WMI.Sencha Touch Mini Workflow Framework: A workflow framework for Sencha Touch mobile apps including automatic component management ShWP: helper library for Windows PhoneTimer, Cronômetro e Despertador: Project developed in the C# extension course at UFSCar SorocabaUtilityLibrary.Ajax: AjaxUtilityLibrary.Email: emailUtilityLibrary.FormBase: UtilityLibrary.FormBaseUtilityLibrary.Http: UtilityLibrary for HttpWebRequestUtilityLibrary.Ormapping: ormappingVoiceModel: VoiceModel is a project which makes it easier to develop VoiceXML applications using ASP.Net MVC with Razor. It uses the MVVM (Model-View-VoiceModel) design pattern to abstract the voice application to a higher level. It is developed in C# and Razor.WebSite.Request: WebSite.Request launches web requests (via XMLHTTP) on a website. Use, for example, to make an initial request to a sharepoint URL and escape the "slow first request" problem.Where's my lei, man?: Where's my lei, man?Zombsquare: Sample application for Windows Phone used in the Windows Phone Roadshow held in Spain in 2011; in this solution you can find examples of: -Design in Blend -BingMaps -Geolocation -Augmented Reality -Converters -Mini-trivia -Object serialization ... surviving a Zombie apocalypse...

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint like this via AJAX:

<form>
    Name: <input type="text" name="name" value="Rick" />
    Value: <input type="text" name="value" value="12" />
    Entered: <input type="text" name="entered" value="12/01/2011" />
    <input type="button" id="btnSend" value="Send" />
</form>
<script type="text/javascript">
    $("#btnSend").click( function() {
        $.post("samples/PostMultipleSimpleValues?action=kazam",
               $("form").serialize(),
               function (result) { alert(result); });
    });
</script>

or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

$.post("samples/PostMultipleSimpleValues?action=kazam",
       { name: "Rick", value: 1, entered: "12/01/2012" },
       function (result) { alert(result); });

On the wire this generates a simple POST request with URL encoded values in the content:

POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Accept: application/json
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://localhost/AspNetWebApi/FormPostTest.html
Content-Length: 41
Pragma: no-cache
Cache-Control: no-cache

name=Rick&value=12&entered=12%2F10%2F2011

Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:

[HttpPost]
public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
{
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                         name, value, entered, action);
}

you'll find that you get an HTTP 404 error and:

{ "Message": "No HTTP resource was found that matches the request URI…"}

Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use model binding for this - mapping the POST parameters to a strongly typed .NET object, not to single parameters. Alternately you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work. Model binding is fine in many use cases, but can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and - maybe more so - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters" is already a frequent one… So this is something that I think is fairly important, but unfortunately missing in the base Web API installation. 
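For reference, here is a minimal sketch of the two built-in alternatives mentioned above - binding to a strongly typed object and accepting a FormDataCollection. The PostValues class and the method names here are made up for illustration:

using System;
using System.Net.Http.Formatting;
using System.Web.Http;

// Hypothetical DTO - any POCO whose property names match the POSTed
// form fields (name, value, entered) works with Web API model binding.
public class PostValues
{
    public string Name { get; set; }
    public int Value { get; set; }
    public DateTime Entered { get; set; }
}

public class SamplesController : ApiController
{
    // Option 1: model binding - the form body maps onto the POCO, while
    // the query string still feeds the simple 'action' parameter.
    [HttpPost]
    public string PostTypedObject(PostValues values, string action = null)
    {
        return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                             values.Name, values.Value, values.Entered, action);
    }

    // Option 2: FormDataCollection - a raw name/value view of the POST
    // body; every value arrives as a string and is converted by hand.
    [HttpPost]
    public string PostFormData(FormDataCollection form)
    {
        string name = form.Get("name");
        int value = int.Parse(form.Get("value"));
        return string.Format("Name: {0}, Value: {1}", name, value);
    }
}

This per-signature mapping class is exactly the overhead the rest of this post tries to avoid for the simple cases.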
Creating a Custom Parameter Binder

Luckily Web API is greatly extensible and there's a way to create a custom Parameter Binding to provide this functionality! Although this solution took me a long while to find, and then only with the help of some folks at Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

public class SimplePostVariableParameterBinding : HttpParameterBinding
{
    private const string MultipleBodyParameters = "MultipleBodyParameters";

    public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
        : base(descriptor)
    {
    }

    /// <summary>
    /// Check for simple binding parameters in POST data. Bind POST
    /// data as well as query string data
    /// </summary>
    public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                             HttpActionContext actionContext,
                                             CancellationToken cancellationToken)
    {
        // Body can only be read once, so read and cache it
        NameValueCollection col = TryReadBody(actionContext.Request);

        string stringValue = null;

        if (col != null)
            stringValue = col[Descriptor.ParameterName];

        // try reading query string if we have no POST/PUT match
        if (stringValue == null)
        {
            var query = actionContext.Request.GetQueryNameValuePairs();
            if (query != null)
            {
                var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                if (matches.Count() > 0)
                    stringValue = matches.First().Value;
            }
        }

        object value = StringToType(stringValue);

        // Set the binding result here
        SetValue(actionContext, value);

        // now, we can return a completed task with no result
        TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
        tcs.SetResult(default(AsyncVoid));
        return tcs.Task;
    }

    private object StringToType(string stringValue)
    {
        object value = null;

        if (stringValue == null)
            value = null;
        else if (Descriptor.ParameterType == typeof(string))
            value = stringValue;
        else if (Descriptor.ParameterType == typeof(int))
            value = int.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int32))
            value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int64))
            value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(decimal))
            value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(double))
            value = double.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(DateTime))
            value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(bool))
        {
            value = false;
            if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                value = true;
        }
        else
            value = stringValue;

        return value;
    }

    /// <summary>
    /// Read and cache the request body
    /// </summary>
    /// <param name="request"></param>
    /// <returns></returns>
    private NameValueCollection TryReadBody(HttpRequestMessage request)
    {
        object result = null;

        // try to read out of cache first
        if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
        {
            // parsing the string like firstname=Hongmei&lastname=Ge
            result = request.Content.ReadAsFormDataAsync().Result;
            request.Properties.Add(MultipleBodyParameters, result);
        }

        return result as NameValueCollection;
    }

    private struct AsyncVoid
    {
    }
}

The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so this binding never fires on complex types or if the first type is not a simple type. For the first parameter of a request the binding first reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it. Subsequent parameters then use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped. Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:

GlobalConfiguration.Configuration.ParameterBindingRules
    .Insert(0, (HttpParameterDescriptor descriptor) =>
    {
        var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

        // Only apply this binder on POST and PUT operations
        if (supportedMethods.Contains(HttpMethod.Post) ||
            supportedMethods.Contains(HttpMethod.Put))
        {
            var supportedTypes = new Type[] { typeof(string),
                                              typeof(int),
                                              typeof(decimal),
                                              typeof(double),
                                              typeof(bool),
                                              typeof(DateTime) };

            if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                return new SimplePostVariableParameterBinding(descriptor);
        }

        // let the default bindings do their work
        return null;
    });

The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map, and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hook-up code are in place, you can now pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

Summary

Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Microsoft to provide a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible, making it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved - in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their minds to switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with untyped JObject or the Form collection mostly), having proper POST to parameter mapping makes things much easier. 
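As a design aside, most of that if/else chain in StringToType could be collapsed, since every supported type except bool implements IConvertible. A minimal sketch of that variation (my own shorthand, not part of the original implementation):

private object StringToType(string stringValue)
{
    if (stringValue == null)
        return null;

    // bool needs special handling because HTML forms post "on"/"1"/"true"
    if (Descriptor.ParameterType == typeof(bool))
        return stringValue == "true" || stringValue == "on" || stringValue == "1";

    // string, int, long, decimal, double and DateTime all implement
    // IConvertible, so one call covers the whole list of simple types
    return Convert.ChangeType(stringValue, Descriptor.ParameterType,
                              CultureInfo.CurrentCulture);
}

The behavior differs slightly at the edges (for example, a type outside the supported list would now throw InvalidCastException instead of falling back to the raw string), which is one reason the explicit chain may still be preferable in production.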
I said this in my last post on POST data and I'll say it again here: I think POST to method parameter mapping should have been shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can just be added. This is especially true, since this binding doesn't affect existing bindings.

Resources

SimplePostVariableParameterBinding Source on GitHub
Global.asax hookup source
Mapping URL Encoded Post Values in ASP.NET Web API

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web API, AJAX

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
One of the rewarding things about getting involved with Simple-Talk has been meeting and working with some pretty daunting talents. I’d like to say that Dom Reed’s talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally, presenting intellectual programs, Andrew, master of the ukulele, with his pioneering local history work, and Tony with marathon running and his past as a university lecturer. However, Dom, who is Red Gate’s head of creative design and who did the preliminary design work for Simple-Talk, has taken the art of photography to an extreme that was impossible before Photoshop. He’s not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble.

Have a look at some of the Flickr pages: Uncle Spike; The B-Men – Woolverine; The 2011 BoyZ iN Sink reunion tour turned out to be their last; Error 404 – Flibble not found.

Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series of noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material by several courses in photography. Although his work was for a while ignored by the more conventional world of ‘art’ photography, it became famous through the internet. His photos have received well over a million views on Flickr. It was definitely time to turn this work into a book, because the whole sequence of images has its maximum effect when seen in sequence. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event and the £45 I shall divvy up will be one of the securest investments I shall ever make.

The local news in Cambridge picked up on the project, and I can quote from the report by the excellent Cabume website, the source of tech news from the ‘Cambridge cluster’:

Put really simply, Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however, is that Mr Flibble is supported in his endeavour by Reed’s top notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: In addition to his full-time role at Cambridge software house, Red Gate Software, as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the . And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response it is. At the time of writing, just a few days after it went live, ‘I Drink Lead Paint: An absurd photography book by Mr Flibble’ had raised £1,545 of the £10,000 target it needs to raise by the Friday 30 November deadline from 37 backers. 
Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don’t think for a second that he doesn’t have a firm grasp on the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section ‘risks and challenges’ on his Kickstarter page his statement begins: “An angry horde of telepathic iguanas discover the world’s last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love.” At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble’s Kickstarter adventure and felt that his responses were worth publishing in full: Firstly, how did you manage it – holding down a full time job and also conceiving and executing these ideas on a daily basis? I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensibly rubbish and usually revolved around food, just occasionally they’d give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo of that would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I’m proud that I never missed a day, but it was definitely hard when you were late from work, tired or doing something socially on the same day. I don’t watch TV, which I guess really helps, because I’d frequently be spending 2-5 hours taking and processing the photos every day. Are there any overlaps between software development and creative thinking? Software is an inherently creative business and the speed that it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty I suppose, and I guess I’m continually exposed to a lot of possible sources of inspiration. How specifically will this Kickstarter project allow you to test the commercial appeal of your work and do you plan to get the book into shops? It’s taken a while to be confident saying it, but I know that people like the work that I do. I’ve had well over a million views of my pictures, many humbling comments and I know I’ve garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there’s worth to my work beyond just making people smile. In an online world where there’s an abundance of freely available content, can you hope to receive anything from what you do, or would people just move onto the next piece of content if you happen to ask for some support? 
A book has been the single-most requested thing that people have asked me to produce and it’s something that I feel would showcase my work well. It’s just hard to convince people in the publishing industry just now to take any kind of risk – they’ve been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear message that publishers might hear, or it gives me the confidence enough to invest in myself a bit more – hard to do when you’re riddled with self-doubt! I’d love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they’re Christmas shopping and recognizing that it’d be just the perfect gift for their difficult-to-buy-for friend or relative. That said, working in the software industry means I’m clearly aware of how I could use technology to distribute my work, but I can’t deny that there’s something very appealing about having a physical thing to hold in your hands.

If the project is successful is there a chance that it could become a full-time job? At the moment that seems like a distant dream, as should this be successful, there are many more steps I’d need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably.

So there it is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light of day in the format it deserves. 

    Read the article

  • Using Oracle Enterprise Manager Ops Center to Update Solaris via Live Upgrade

    - by LeonShaner
Introduction: This Oracle Enterprise Manager Ops Center blog entry provides tips for using Ops Center to update Solaris using Live Upgrade on Solaris 10 and Boot Environments on Solaris 11.

Why use Live Upgrade?
- Live Upgrade (LU) can significantly reduce downtime associated with patching
- Live Upgrade avoids dropping to single-user mode for long periods of time during patching
- Live Upgrade relies on an Alternate Boot Environment (ABE)/(BE), which is patched while in multi-user mode, thereby allowing normal system operations to continue with the active BE while the alternate BE is being patched
- Activating a newly patched (A)BE is essentially a reboot; therefore the downtime is ~= reboot
- Admins can easily revert to the prior Boot Environment (BE) as a safeguard / fallback

Why use Ops Center to patch via Live Upgrade, Alternate Boot Environments, and Solaris 11 equivalents?
- All the benefits of Ops Center's extensive patch and package knowledge base can be leveraged on top of Live Upgrade
- Ops Center can orchestrate patching based on Live Upgrade and Solaris 11 features, which all work together to minimize downtime
- Ops Center's advanced inventory and reporting features assure that each OS is updated to a verifiable, consistent standard, rather than relying on ad-hoc (error prone) procedures and scripts
- Ops Center gives admins control over the boot environment specifications, or they can let Ops Center decide when a BE is necessary, thereby reducing complexity and lowering the opportunity for user error

Preparing to use Live Upgrade-like features in Solaris 11

Requirements and information you should know:
- Global Zone root file-systems must be separate from Solaris Container / Zone filesystems
- Solaris 11 has features which are similar in concept to Live Upgrade on Solaris 10, but differ greatly in implementation. Important distinctions:
  - Solaris 11 assumes ZFS root
  - Solaris 11 adds Boot Environments (BE's) as an integrated feature (see beadm)
  - Solaris 11 BE's avoid single-user patching (vs. Solaris 10 w/ ZFS snapshot=ABE)
  - Solaris 11 Image Packaging System (IPS) has hooks for BE creation, as needed
  - Solaris 11 allows pkgs to be installed + upgraded in an alternate BE (e.g. instead of the live system), but it is controlled on a per-pkg basis
  - Boot Environments are activated across a reboot, instead of spending long periods installing + upgrading packages in single-user mode
  - Fallback to a prior BE is a function of the BE infrastructure (a la beadm)
  - (Generally) Reboot + BE activation can be much, much faster on Solaris 11

Preparing to use Live Upgrade on Solaris 10

Requirements and information you should know:
- Global Zone root file-systems must be separate from Solaris Container / Zone filesystems
- Live Upgrade pre-requisite patches must be applied before the first Live Upgrade Alternate Boot Environments are created (see "Pre-requisite Patches" section, below...)
- Solaris 10 Update 6 or newer on ZFS root is the practical starting point for Live Upgrade
- Live Upgrade with ZFS root is far more straightforward than any scheme based on Alternative Boot Environments in slices or temporarily breaking mirrors
- Use Solaris best practices to upgrade the OS to at least Solaris 10 Update 4 (outside of Ops Center)
- UFS root can (technically) be used, but it is significantly more involved (e.g. discouraged) -- there are many reasons to move to ZFS while going through the process to update to Solaris 10 Update 6 or newer (outside of Ops Center)
- Recommendation: Start with Solaris 10 Update 6 or newer on ZFS root
- Recommendation: Start with Ops Center 12c or newer
  - Ops Center 12c can automatically create your ABE's for you, without the need for custom scripts
  - Ops Center 12c Update 2 avoids kernel panic on unpatched Solaris 10 Update 9 (and older) -- unrelated to Live Upgrade, but more on the issue, below

NOTE: There is no magic! If you have systems running Solaris 10 Update 5 or older on UFS root, and you don't know how to get them updated to Solaris 10 on ZFS root, then there are services available from Oracle Advanced Customer Support (ACS), which specialize in this area.

Live Upgrade Pre-requisite Patches (Solaris 10)

Certain Live Upgrade related patches must be present before the first Live Upgrade ABE's are created on Solaris 10. Use the following MOS search string to find the “living document” that outlines the required patch minimums, which are necessary before using any Live Upgrade features: Solaris Live Upgrade Software Patch Requirements (click above – the link is valid as of this writing, but search in MOS for the same "Solaris Live Upgrade Software Patch Requirements" string if necessary). It is a very good idea to check the document periodically and adapt to its contents, accordingly.

IMPORTANT: In case it wasn't clear in the above document, some direct patching of the active OS, including a reboot, may be required before Live Upgrade can be successfully used the first time.

HINT: You can use Ops Center to determine what to expect for a given system, and to schedule the “pre-patching” during a maintenance window if necessary.

Preparing to use Ops Center
- Discover + Manage (install + configure the Ops Center agent in) each Global Zone
- Recommendation: Begin by using OCDoctor --agent-prereq to determine whether the OS meets OC prerequisites (resolve any issues)
- See prior requirements and recommendations w.r.t. starting with Solaris 10 Update 6 or newer on ZFS (or at least Solaris 10 Update 4 on UFS, with caveats)
- WARNING: Systems running unpatched Solaris 10 Update 9 (or older) should run the Ops Center 12c Update 2 agent to avoid a potential kernel panic
  - The 12c Update 2 agent will check patch minimums and disable certain process accounting features if the kernel is not sufficiently patched to avoid the panic
  - SPARC: 142900-05 Obsoleted by: 142900-06 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit)
  - X64: 142901-05 Obsoleted by: 142901-06 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit)
  OR
  - SPARC: 142909-17 SunOS 5.10: kernel patch 10 Oracle Solaris on SPARC (32-bit)
  - X64: 142910-17 SunOS 5.10_x86: kernel patch 10 Oracle Solaris on x86 (32-bit)
- Ops Center 12c (initial release) and the 12c Update 1 agent can also be safely used with a workaround (to be performed BEFORE installing the agent):

# mkdir -p /etc/opt/sun/oc
# echo "zstat_exacct_allowed=false" > /etc/opt/sun/oc/zstat.conf
# chmod 755 /etc/opt/sun /etc/opt/sun/oc
# chmod 644 /etc/opt/sun/oc/zstat.conf
# chown -Rh root:sys /etc/opt/sun/oc

NOTE: Remove the above after patching the OS sufficiently, or after upgrading to the 12c Update 2 agent.

Using Ops Center to apply Live Upgrade-related Pre-Patches (Solaris 10)

Overview:
- Create an OS Update Profile containing the minimum LU-related pre-patches, based on the Solaris Live Upgrade Software Patch Requirements, previously mentioned.
- SIMULATE the deployment of the LU-related pre-patches
  - Observe whether any of the LU-related pre-patches will require a reboot
  - The job details for each Global Zone will advise whether a reboot step will be required
- ACTUALLY deploy the LU-related pre-patches, according to your change control process (e.g. if no reboot, maybe okay to do now; vs. must do later because of the reboot). You can schedule the job to occur later, during a maintenance window
- Check the job status for each node, resolving any issues found

Once the LU-related pre-patches are applied, you can use Ops Center to patch using Live Upgrade on Solaris 10.

Using Ops Center to patch Solaris 10 with LU/ABE's -- the GOODS! (this is the heart of the tip):
- Create an OS Update Profile containing the patches that make up your standard build
  - Use Solaris Baselines when possible
  - Add other individual patches as needed
- ACTUALLY deploy the OS Update Profile
  - Specify the appropriate Live Upgrade options, e.g. synchronize the active BE to the alternate BE before patching; do not activate the BE after patching
- Check the job status for each node, resolving any issues found
- Activate the newly patched BE according to your change control process
  - Activate = reboot to the ABE, making the ABE the new active BE
  - Ops Center does not separate LU activate from reboot, so expect a reboot!
- Check the job status for each node, resolving any issues found

Examples (w/Screenshots)

Solaris 10 and Live Upgrade: Auto-Create the Alternate Boot Environment (ZFS root only)
- ABE to be created on ZFS with name S10_12_07REC (Example)
- Uses the built-in feature to call “lucreate -n S10_12_07REC” behind the scenes if not already present
- NOTE: Leave “lucreate” params blank (if you do specify options, they will be appended after -n $ABEName)

Solaris 10 and Live Upgrade: Alternate Boot Environment Creation via Operational Profile (script)
The Alternate Boot Environment is to be created via a custom, user-supplied script, which does whatever is needed for the system where Live Upgrade will be used.

Operational Profile, which provides the script to create an ABE:
- Very similar to the automatic case, but with a Script (Operational Profile), which is used to create the ABE
- Relies on a user-supplied script in the form of an Operational Profile
- Could be used to prepare an ABE based on a UFS root in a slice, or on a separate device (e.g. by breaking a mirror first) – it is up to the script author to do the right thing!
- EXAMPLE: Same result as the ZFS case, but illustrating the Operational Profile (e.g. script) approach to call: # lucreate -n S10_12_07REC
- NOTE: The OC special variable is $ABEName

Boot Environment Profile, which references the Operational Profile:
- Script = Operational Profile on this screen
- Refers to the Operational Profile shown in the previous section
- The user-supplied S10_Create_BE Operational Profile will be run
- The Operational Profile must send a non-zero exit code if there is a problem (so that the OS Update job will not proceed)

Solaris 10 OS Update Profile (to provide the actual patch specifications)
- Solaris 10 Baseline “Recommended” chosen for “Install”

Solaris 10 OS Update Plan (two steps in this case)
- “Create a Boot Environment” + “Update OS” are chosen. 
Using Ops Center to patch Solaris 11 with Boot Environments (as needed)
- Create a Solaris 11 OS Update Profile containing the packages that make up your standard build
- ACTUALLY deploy the Solaris 11 OS Update Profile
  - A BE will be created if needed (or you can stipulate no BE)
  - The BE name will be auto-generated (if needed), or you may specify a BE name
- Check the job status for each node, resolving any issues found
- Check if a BE was created; if so, activate the new BE
  - Activate = reboot to the BE, making the new BE the active BE
  - Ops Center does not separate BE activate from reboot
  - NOTE: Not every Solaris 11 OS Update will require a new BE, so a reboot may not be necessary.

Solaris 11: Auto BE Create (as needed -- let Ops Center decide)
- BE to be created as needed
- BE to be named automatically
- Reboot (if necessary) deferred to a separate step

Solaris 11: OS Profile
- Solaris 11 “entire” chosen for a particular SRU

Solaris 11: OS Update Plan (w/BE)
- “Create a Boot Environment” + “Update OS” are chosen.

Summary: Solaris 10 Live Upgrade, Alternate Boot Environments, and their equivalents on Solaris 11 can be very powerful tools to help minimize the downtime associated with updating your servers. For very old Solaris, there are some important prerequisites to adhere to, but once the initial preparation is complete, Live Upgrade can be used going forward. For Solaris 11, the built-in Boot Environment handling is leveraged directly by the Image Packaging System, and the result is a much more straightforward way to patch, and far fewer prerequisites to satisfy in getting there. Ops Center simplifies using either approach, and helps you improve consistency from system to system, which ultimately helps you improve the overall up-time across all the Solaris systems in your environment.

Please let us know what you think! Until next time... -Leon

-- Leon Shaner | Senior IT/Product Architect, Systems Management | Ops Center Engineering @ Oracle

The views expressed on this [blog; Web site] are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • CodePlex Daily Summary for Tuesday, December 14, 2010

CodePlex Daily Summary for Tuesday, December 14, 2010Popular ReleasesFlickrNet API Library: 3.1.4000: Newest release. Now contains dedicated Windows Phone 7 DLL as well as all previous DLLs. Also contains Windows Help file documentation now as standard.mojoPortal: 2.3.5.8: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010SuperWebSocket: SuperWebSocket Drop 2: Changes: based on SuperSocket 1.3 supported sub protocol supported SSL/TLS encryption (wss) in Sync socket mode fixed some data communication bugsSSH.NET Library: 2010.12.13: Fixes SFTP issue when you try to upload or download multiple files simultaneously. Usage example can be found hereRequest Tracker Data Access: 1.0.0.0: First releaseSQL Monitor: SQL Monitor 2.4: 1. auto adjust datagrids in query 2. disable activities related commands until activities tab is active.SuperSocket, an extensible socket application framework: SuperSocket 1.3 beta 1: SuperSocket 1.3 is built on .NET 4.0 framework. Bug fixes: fixed a potential bug that the running state hadn't been updated after socket server stopped fixed a synchronization issue when clearing timeout session fixed a bug in ArraySegmentList fixed a bug on getting configuration value Third-party library upgrades: upgraded SuperSocket to .NET 4.0 upgraded EntLib 4.1 to 5.0 New features: supported UDP socket support custom protocol (can support binary protocol and other complecate...Wii Backup Fusion: Wii Backup Fusion 0.9 Beta: - Aqua or brushed metal style for Mac OS X - Shows selection count beside ID - Game list selection mode via settings - Compare Files <-> WBFS game lists - Verify game images/DVD/WBFS - WIT command line for log (via settings) - Cancel possibility for loading games process - Progress infos while loading games - Localization for dates - UTF-8 support - Shortcuts added - View game infos in browser - Transfer infos for log - All transfer routines rewritten - Extract image from image/WBFS - Support....NETTER Code Starter Pack: v1.0.beta: '.NETTER Code Starter Pack ' contains a gallery of Visual Studio 2010 solutions leveraging latest and new technologies and frameworks based on Microsoft .NET Framework. Each Visual Studio solution included here is focused to provide a very simple starting point for cutting edge development technologies and framework, using well known Northwind database (for database driven scenarios). The current release of this project includes starter samples for the following technologies: ASP.NET Dynamic...NuGet (formerly NuPack): NuGet 1.0 Release Candidate: NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (http://go.microsoft.com/fwlink/?LinkID=206669) and package format. 
See http://nupack.codeplex.com/documentation?title=Nuspe...Free Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Hi, Today we are releasing final version of Visifire, v3.6.5 with the following new feature: * New property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble chart. You can visit Visifire documentation to know more. http://www.visifire.com/visifirechartsdocumentation.php Also this release includes few bug fixes: * Chart threw exception while adding new Axis in Chart using Vi...PHPExcel: PHPExcel 1.7.5 Production: DonationsDonate via PayPal via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channelWe now also have a full PEAR channel! Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPExcel Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel The official page can be found at http://pearplex.net. Want to contribute?Please refer the Contribute page.SwapWin: SwapWin 0.2: Updates: Bring all windows that are swapped to foreground. Make the window sent to primary screen active.??????????: All-In-One Code Framework ??? 2010-12-10: ?????All-In-One Code Framework(??) 2010?12??????!!http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????release?,???????ASP.NET, WinForm, Silverlight????12?Sample Code。???,??????????sample code。 ?????:http://blog.csdn.net/sjb5201/archive/2010/12/13/6072675.aspx ??,??????MSDN????????????。 http://social.msdn.microsoft.com/Forums/zh-CN/codezhchs/threads ?????????????????,??Email ????UOB & ME: UOB_ME 2.5: latest versionAutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing the build pages from Mobafire.com as well! Just insert the url to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (It is still recommended to use with caution and to read the documentation) It is now possible to associate *.lolm files with AutoLoL to quickly open them The selected spells are now displayed in the masteries tab for qu...SubtitleTools: SubtitleTools 1.2: - Added auto insertion of RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for the RTL languages. - Fixed delete rows issue.PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus additional features listed below: Improved detection logic for existing PHP installations. Now PHP Manager detects the location to php.ini file in accordance to the PHP specifications Configuring date.timezone. PHP Manager can automatically set the date.timezone directive which is required to be set starting from PHP 5.3 Ability to ...Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.New ProjectsAugmented Reality system in Soccer video: Augmented reality system and camera calibration system in soccer videos based on homography and vanishing points. Code generated with Visual C++ (best compiler is .net)Database Schema Provider: Database Schema Provider gets a database schema in unified format independent on the type of database. It uses ADO.NET data provider for Entity Framework. Dicke Bertha: Many many cool features... 
DNN Bookmark: DNN Bookmarks is a DNN module that aggregates the most popular social bookmarking tools and also allows you to bookmark your DNN web siteDough: Dough is a UI starter kit built using ASP.Net MVC and ExtJS. Its name comes from the concept of Amish friendship bread, a type of bread or cake made from a sourdough starter that is often shared in a manner similar to a chain letter.Garra - Gerenciador Financeiro: Garra is a complete financial control system: accounts payable, accounts receivable, investments, etc...ghcwp7: ghcwp7Hackathon - DotNetNuke Razor User Locator: The DotNetNuke Razor User Locator module demonstrates how Razor can be used to author DNN modules. This module shows where recent users to a web site came from based on their IP address. Hackathon. DotNetNuke Razor. Flickr Badge: Flickr badge desktop module allows you to display image thumbnails from Flickr and preview them inside DotNetNuke or on Flickr (controlled by module settings). Image thumbnails can be loaded by tag, user id, user group id, user set id and more.Hackathon: Razor Youtube Gallery: This is a DotNetNuke module which allows a website admin to add several relevant Youtube videos to a pane. The end user watches the selected Youtube video play, while scrolling through thumbnails of other videos to play those without refreshing the page. jQuery UI MVC3 Demo: Demo and possibly a skeleton for using jQuery UI in MVC3 (currently RC2).Microsoft Office Communicator History manager: Needs to save conversation history just on the local workstation (folder) then the application could restore it and show it in a simple window (mode), or the user could open the folder and look in it manuallyMSTest Extensions - Msbuild: This project contains various msbuild tasks that help with test execution using the Microsoft testing frameworkRayCharlesTracer: this is a school project for a raytracer.Refunctor: F# interactive inside Reflector.Request Tracker Data Access: Best Practical RT (Request Tracker) data access .NET library for REST interface.SurfzApp: An application that does data mining on web resources of interest for Swedish windsurfers...Tesseract Solutions Corp. Data Access Base: Tesseract Data Access speeds up data access in .Net projects. Developed in C# .Net 4. It is a C#, class based ORM.TimBazinga EVoting: Undergrad project - designing an e-voting software system.Tiny Library CQRS: Tiny Library CQRS is a small demonstration project which demonstrates the concept of Domain Driven Design and the CQRS architecture pattern. This project relies on the Apworks DDD framework.Toptoys: toptoysWebGroup: WebGroup makes it easier for your website members to communicate online. It works like Web IM + Forum + Twitter. It can be easily used in your current project. Developed in C#.WpfCustomChromeLibrary: WpfCustomChromeLibrary makes it easier to create WPF applications with custom chrome and caption buttons (min/max/close). You'll no longer have to do all the dirty work yourself in each application where you want a custom chrome. It's developed in XAML and C#.

    Read the article

  • "Translator by Moth"

    - by Daniel Moth
This article serves as the manual for the free Windows Phone 7 app called "Translator by Moth". The app is available from the following link (browse the link on your Windows Phone 7 phone, or from your PC with the Zune software installed): http://social.zune.net/redirect?type=phoneApp&id=bcd09f8e-8211-e011-9264-00237de2db9e

Startup

At startup the app makes a connection to the bing Microsoft Translator service to retrieve the available languages, and also which languages offer playback support (two network calls total). It populates the two list pickers ("from" and "to") on the "current" page with the results. If for whatever reason the network call fails, you are informed via a message box, and the app keeps trying to make a connection every few seconds. When it eventually succeeds, the language pickers on the "current" page get updated. Until it succeeds, the language pickers remain blank and hence no new translations are possible. As you can guess, if the Microsoft Translation service adds more languages for textual translation (or enables more for playback) the app will automatically pick those up.

"current" page

The "current" page is the main page of the app with language pickers, translation boxes and the application bar.

Language list pickers

The "current" page allows you to pick the "from" and "to" languages, which are populated at start time. Until these languages get populated with the results of the network calls, they remain empty and disabled. When enabled, tapping on either of them brings up on a full screen popup the list of languages to pick from, formatted as English Name followed by Native Name (when the latter is known). The "to" list, in addition to the language names, indicates which languages have playback support via a * in front of the language name. When making a selection for the "to" language, and if there is text entered for translation, a translation is performed (so there is no need to tap on the "translate" application bar button). Note that both language choices are remembered between different launches of the application.

text for translation

The textbox where you enter the translation is always enabled. When there is nothing entered in it, it displays (centered and in italics) text prompting you to enter some text for translation. When you tap on it, the prompt text disappears and it becomes truly empty, waiting for input via the keyboard that automatically pops up. The text you type is left aligned and not in italic font. The keyboard shows suggestions of text as you type. The keyboard can be dismissed either by tapping somewhere else on the screen, or via tapping on the Windows Phone hardware "back" button, or via tapping on the "enter" key. In the latter case (tapping on the "enter" key), if there was text entered and if the "from" language is not blank, a translation is performed (so there is no need to tap on the "translate" application bar button). The last text entered is remembered between application launches.

translated text

The translated text appears below the "to" language (left aligned in normal font). Until a translation is performed, there is a message in that space informing you of what to expect (translation appearing there). When the "current" page is cleared via the "clear" application bar button, the translated text reverts back to the message. 
Note a subtle point: when a translation has been performed and subsequently you change the "from" language or the text for translation, the translated text remains in place but is now in italic font (attempting to indicate that it may be out of date). In any case, this text is not remembered between application launches.

application bar buttons and menus

There are 4 application bar buttons and 4 application bar menus.

"translate" button takes the text for translation and translates it to the translated text, via a single network call to the bing Microsoft Translator service. If the network call fails, the user is informed via a message box. The button is disabled when there is no "from" language available or when there is no text for translation entered.

"play" button takes the translated text and plays it out loud in a native speaker's voice (of the "to" language), via a single network call to the bing Microsoft Translator service. If the network call fails, the user is informed via a message box. The button is disabled when there is no "to" language available or when there is no translated text available.

"clear" button clears any user text entered in the text for translation box and any translation present in the translated text box. If both of those are already empty, the button is disabled. It also stops any playback if there is one in flight.

"save" button saves the entire translation ("from" language, "to" language, text for translation, and translated text) to the bottom of the "saved" page (described later), and simultaneously switches to the "saved" page. The button is disabled if there is no translation or the translation is not up to date (i.e. one of the elements has been changed).

"swap to and from languages" menu swaps around the "from" and "to" languages. It also takes the translated text and inserts it in the text for translation area. The translated text area becomes blank. The menu is disabled when there is no "from" and "to" language info.

"send translation via sms" menu takes the translated text and creates an SMS message containing it. The menu is disabled when there is no translation present.

"send translation via email" menu takes the translated text and creates an email message containing it (after you choose which email account you want to use). The menu is disabled when there is no translation present.

"about" menu shows the "about" page described later.

"saved" page

The "saved" page is initially empty. You can add translations to it by translating text on the "current" page and then tapping the application bar "save" button. Once a translation appears in the list, you can read it all offline (both the "from" and "to" text). Thus, you can create your own phrasebook list, which is remembered between application launches (it is stored on your device). To listen to the translation, simply tap on it – this is only available for languages that support playback, as indicated by the * in front of them. The sound is retrieved via a single network call to the bing Microsoft Translator service (if it fails an appropriate message is displayed in a message box). Tap and hold on a saved translation to bring up a context menu with 4 items:

"move to top" menu moves the selected item to the top of the saved list (and scrolls there so it is still in view)

"copy to current" menu takes the "from" and "to" information (language and text), and populates the "current" page with it (switching at the same time to the current page). 
This allows you to make tweaks to the translation (text or languages) and potentially save it back as a new item. Note that the action makes a copy of the translation, so you are not actually editing the existing saved translation (which remains intact).

The "delete" menu deletes the selected translation.

The "delete all" menu deletes all saved translations from the "saved" page – there is no way to get that info back other than re-entering it, so be cautious.

Note: once playback of a translation has been retrieved via a network call, Windows Phone 7 caches the result. This means that as long as you have played a saved translation once, it is likely to remain available to you for some time, even when there is no network connection.

"about" page
The "about" page provides some textual information (which you can view in the screenshot), including a link to the creator's blog (which you can follow on your Windows Phone 7 device). Use that link to discover the email address for any feedback.

Other UI design info
As you can see in the screenshots above, "Translator by Moth" has been designed from scratch for Windows Phone 7, using the nice pivot control and application bar. It also supports both portrait and landscape orientations, and looks equally good in both the light and the dark theme. Other than the default black and white colors, it uses the user's chosen accent color (which is blue in the screenshot examples above).

Feedback and support
Please report (via the email on the blog) any bugs you encounter or opportunities for performance improvements, and they will be fixed in the next update. Suggestions for new features will be considered, but given that the app is FREE, no promises are made. If you like the app, don't forget to rate "Translator by Moth" on the marketplace. Comments about this post are welcome at the original blog.
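As a rough illustration of the startup behaviour described above – this is not the app's actual source, and the V2 HTTP endpoint, the appId parameter and all names below are assumptions based on the public Microsoft Translator API of that era – the language fetch on Windows Phone 7 might be sketched like this in C#:

    using System;
    using System.Net;

    public class LanguageLoader
    {
        // Placeholder: a real client needs its own Translator application key.
        private const string AppId = "YOUR_APP_ID";

        // Assumed classic Translator V2 HTTP endpoint; a second, similar call to
        // GetLanguagesForSpeak would cover the playback-capable languages
        // (matching the "two network calls" the manual mentions).
        private static readonly Uri LanguagesUri = new Uri(
            "http://api.microsofttranslator.com/v2/Http.svc/GetLanguagesForTranslate?appId=" + AppId);

        public void LoadLanguages()
        {
            // WP7 only allows asynchronous network I/O, hence the callback style.
            var client = new WebClient();
            client.DownloadStringCompleted += (s, e) =>
            {
                if (e.Error != null)
                {
                    // Per the manual: show a message box here, then retry the
                    // call every few seconds until it finally succeeds.
                    return;
                }
                // e.Result holds an XML list of language codes; bind it to the
                // "from"/"to" list pickers on the "current" page.
            };
            client.DownloadStringAsync(LanguagesUri);
        }
    }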

    Read the article

  • On REST: WADL or not IDL, is the following approach right?

    - by redben
    This question is a bit long, please bear with me. In REST, I think we should not need WADL or any IDL, but rather something that implicitly covers the same ground. The way I think about it: when we (humans) surf the Web and go to a web site for the first time, we don't know what services it provides. We discover those on the HTML home page (or a sitemap page in a help section), or maybe just from the main menu on the home page. By analogy, the homepage or site map is to us humans what WSDL is to WS-* or what WADL could be to a REST service – only it's just like any other HTML content. I think that in REST the following is a good way to do things, respecting the HATEOAS paradigm: have a top-level (or default) resource that lists links to your other resources. For a library example, say RestLibrary.com/, it could be something like:

    <root xmlns:lib="http://librarystandards.com/libraryml">
      <resource class="lib:book">
        <link type="application/vnd.libraryml+xml" template="mylib.com/book/{isbn}" />
        <link type="application/vnd.libraryml+xml" rel="add" href="mylib.com/book" method="POST" />
        <link type="application/vnd.libraryml+xml" rel="update" template="mylib.com/book/{isbn}" method="PUT" />
      </resource>
      <resource class="lib:bookList">
        <link template="mylib.com/book?keywords={keywords}" type="application/vnd.openlibrary+xml" rel="search" />
      </resource>
    </root>

    Note that it is assumed that the media type "application/vnd.libraryml+xml" is a defined standard (or maybe just a proprietary vocabulary) named libraryml. Also, the client should be able to understand this "homepage" resource (the elements root, resource and link). This is the part that could be used instead of WADL: an abstract vocabulary that should be understandable by any client. You could use an existing standard like Atom, for example; the main idea is to have an abstract vocabulary understandable by any client. Why not WADL then? Well, WADL is only for service discovery. The idea here is to have a light abstract vocabulary that serves as a base for hypermedia – a "root" vocabulary, like owl:Thing in OWL. Now if the client knows the "libraryml" standard, it can follow the links to the things it understands (after parsing the media type properties and xmlns). If not, it just won't. When I can't understand how to deal with something in a REST architecture, I tend to look at how we humans do it on the Web. On the Web, we have a generic language, HTML, that enables site builders to deliver any specific content regardless of its meaning to the client (the user). Browsers understand HTML but not the "meaning" of its content; it is the user who understands the (domain-specific) content. If I go to, say, QuantumPhysics.org, my browser can render the home page (it is just HTML after all) and I can read the home page. If I understand quantum physics, fine, I can continue browsing; if I don't, I just leave (unless I want to learn the hard way :) ). In the RestLibrary.com example, the client app is just like me plus my browser on QuantumPhysics.org: the media type "application/vnd.libraryml+xml" is the quantum physics (knowledge), and HTTP is HTTP in both examples. What HTML is to QuantumPhysics.org, XML plus that tiny abstract vocabulary (root, resource and link – which you could replace with something like Atom) is to RestLibrary.com. So does this approach have any value? Don't we need a tiny root hyper-vocabulary so we can succeed with hypermedia and the "initial URI" concept? Edit: yeah, why not RDF as the root vocabulary!
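    To make the idea concrete, here is a minimal C# sketch of the kind of client the question implies. The entry URI is hypothetical, and only the elements shown in the example document above are assumed; the client downloads the root resource and follows just the links whose media type it understands, ignoring the rest – the programmatic equivalent of a reader skimming a site whose content they cannot parse:

    using System;
    using System.Linq;
    using System.Net;
    using System.Xml.Linq;

    class HypermediaClient
    {
        static void Main()
        {
            // The only thing the client is given up front: the "initial URI".
            const string entryUri = "http://restlibrary.com/";          // hypothetical
            const string knownType = "application/vnd.libraryml+xml";   // from the example

            XElement root = XElement.Parse(new WebClient().DownloadString(entryUri));

            // Keep only the links whose media type this client understands,
            // exactly as a human skips content they cannot read.
            var usable = root.Descendants("link")
                             .Where(l => (string)l.Attribute("type") == knownType);

            foreach (var link in usable)
            {
                Console.WriteLine("{0}: {1}",
                    (string)link.Attribute("rel") ?? "follow",
                    (string)link.Attribute("href") ?? (string)link.Attribute("template"));
            }
        }
    }

    With this shape, adding a new capability to the service means adding a link to the root document, not regenerating an out-of-band contract – which is the question's whole argument against WADL.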

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year’s Oracle OpenWorld conference is nearly here, and we’re all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the “Product Lifecycle Management and Product Value Chain” sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California.

Monday, October 1
There are great networking activities on Sunday, September 30, but PLM-specific sessions start after general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled “Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions,” featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It’s our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental’s Ballroom C, and an Agile PLM for Process session located back in the InterContinental’s Telegraph Room. Both sessions will have strong content around each product line’s latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D’Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it’s an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it’s sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently. 
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the five-day Oracle OpenWorld Music Festival!

Tuesday, October 2
Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel’s Telegraph Room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” runs 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or if you’re unable to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to your targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m., join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-On Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area. But don’t forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. On Tuesday evening, unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.   
Wednesday, October 3
We have another full line-up on Wednesday, so be ready for an action-packed day. We start at 10:15 – 11:15 a.m. in the Telegraph Room with “PLM for Consumer Products: Building an Engine for Quality and Innovation,” featuring presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.  If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” featuring Jeff Henley, Oracle’s Chairman of the Board, and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next, at 11:45 a.m. – 12:45 p.m. in Telegraph Hill, attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast, so try to get in early. At 1:15 – 2:15 p.m., join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill, from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. 
with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and one we’re very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, “Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here.

Thursday, October 4
PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, it’s a customer- and partner-driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld!  I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • CodePlex Daily Summary for Saturday, May 12, 2012

    CodePlex Daily Summary for Saturday, May 12, 2012

Popular Releases
- KanboxAPI — KanboxAPI beta: Token Info List Download.
- Media Companion — 3.502b: It has been a slow week, but this release addresses a couple of recent bugs. Movies: multi-part movies with existing .nfo files that differed in name from the first part were missed and scraped again; MC attempted to scrape info for existing trailers. TV Shows: shows available only in a non-default language would not show up in the main browser; the correct language can now be selected using the TV Show Selector for a single show. General: will no longer prompt for ...
- NewLife XCode — XCode v8.5.2012.0508, XCoder v4.7.2012.0320: adds .NET 4.0 support; XCode changes cover Insert/Update/Delete SQL generation, the IEntityOperate interface, and IEntityTree.
- dycom — v1.0: DYCom for Silverlight and Windows Phone 7.5.
- .NETMF_for_STM32 — Beta 1 Release: first public beta release.
- Google Book Downloader — Google Books Downloader Lite 1.0.
- Python Tools for Visual Studio — 1.5 Alpha: an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features, including support for CPython, IronPython, Jython and PyPy; a Python editor with advanced member and signature IntelliSense and refactoring; code navigation ("Find all refs", goto definition, and object browser); local and remote debugging...
- JayData (the cross-platform HTML5 data-management library for JavaScript) — 1.0 RC1 Refresh 1: a unified data access API to webSQL, IndexedDB, OData, Facebook and YQL. The major feature of this release is in the OData provider: FunctionImport is now generally supported, so you can consume OData service operations (WebMethods); JaySvcUtil was extended to generate the necessary metadata. Many fixes are included, such as the Visual Studio 2010 IntelliSense optimization (RC1 was optimized only for VS11). It's recommended to upgrade...
- AD Gallery — 1.2.7: fixed a bug which caused the current thumbnail not to be highlighted; added a hook to take complete control over how descriptions are handled (take a look under Documentation for more info); added removeAllImages().
- 51Degrees.mobi (Mobile Device Detection and Redirection) — 2.1.4.8: one-click install from NuGet. Data changes: 42 new browser properties in both the Lite and Premium data sets; Premium data includes many new devices, including the Nokia Lumia 900, BlackBerry 9220 and HTC One, the Samsung Galaxy Tab 2 range and the Samsung Galaxy S III; Lite data includes devices released in January 2012. The IsFirstTime method of the RedirectModule now returns the same value when called multiple times for the same request.
- Mugen Injection — ver 2.2 (WinRT supported): added NamedParameterAttribute and OptionalParameterAttribute; added the ICycleDependencyBehavior and IResolveUnregisteredTypeBehavior behaviors; added WinRT support; added support for .NET 4.5 and MVC 4.
- NShape (.Net Diagramming Framework for Industrial Applications) — 2.0.1: Bugfixes: IRepository.Insert(Shape shape) and IRepository.Insert(IEnumerable<Shape> shapes) no longer insert shape connections; several context menu items displayed although the required permission was not granted; the Display did not reset the visible and active layers when changing the diagram; NullReferenceException when pressing the Del key with no shape selected. Changed behavior: LayerCollection.Find("") no longer throws an exception. Improvements: the Display does not rese...
- AcDown (Anime & Comic Downloader) — v3.11.6: a downloader supporting Acfun, Bilibili, YouTube, SF and other sites, with the companion AcPlay player; written in C#, requires .NET Framework 2.0, and supports 32- and 64-bit Windows XP/Vista/7/8.
- sb0t — 4.64: new commands added: #scribble <url>, #adminscribble on, #adminscribble off.
- Document.Editor — 2012.4: improved template support; improved Options dialog; minor bug fixes, improvements and speed-ups.
- Json.NET — 4.5 Release 5: New features: added ItemIsReference, ItemReferenceLoopHandling, ItemTypeNameHandling and ItemConverterType to JsonPropertyAttribute; added ItemRequired to JsonObjectAttribute; added Path to JsonWriterException. Changes: improved deserializer call stack memory usage; moved the PDB files out of the NuGet package into a symbols package. Fixes: fixed an infinite loop from an input error when reading an array with error handling enabled; fixed base objec...
- BlackJumboDog — Ver5.6.1 (2012.05.07): fixes regressions from Ver5.6.0, including an HTTP server SSL issue and HTTP handling of content over 2 GB.
- ExtAspNet — v3.1.5: a set of professional ASP.NET 2.0 controls based on ExtJS that enable AJAX development without writing JavaScript or CSS and without UpdatePanel, ViewState or WebServices. Supported browsers: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+. License: Apache License 2.0. Home: http://extasp.net/; forum: http://bbs.extasp.net/; project: http://extaspnet.codeplex.com/; blog: http://sanshi.cnblogs.com/.
- SharpDevelop — 4.2: see http://community.sharpdevelop.net/forums/t/15772.aspx for the release announcement.
- Desktop Google Reader — 1.4.4: the taskbar icon overlay (number of unread items) can now be switched off in preferences (Windows Vista/7 only); the maximize button can now be toggled between fullscreen (as before) and normal maximize (taskbar stays visible) in preferences; the list of feeds is now sorted alphabetically.

New Projects
- 3D Scene Editor: a generic 3D level editor built using XNA to speed up designing and building game levels.
- Attribute Based Cache using Unity Interception: a Unity interception handler attribute for caching which allows the boilerplate caching pattern to be applied to classes and class members directly, without configuring them in the application configuration file. Configure your choice of cache provider (ObjectCache, Azure included) in the Unity IoC container and apply the attribute to the method you want to cache.
- AutoCompleteBox for WinRT: none yet.
- BAC2 Bachelor's Thesis Source Code: the source code of my Bachelor's thesis.
- BAMabase: it's a bamabase.
- BBSProject: BBSProject.
- Brain 2 - Game Engine: Brain 2 is a game engine that runs on multiple platforms.
- cobra-winldtp: Cobra, the Windows version of the Linux Desktop Testing Project (WinLDTP), http://ldtp.freedesktop.org. LDTP is a GUI test automation tool that works on both Windows and Linux; this Windows version is written in C#, with test scripts in Python for now (a Ruby API will be added soon).
- CS322: the C# programming language.
- DoxBotPlugin: a plug-in for IceChat 9 that allows a user to use IceChat 9 both as a normal IRC client and, by switching on bot mode in certain channels, to have DoxBotPlugin act on their behalf.
- dycom: DYCom (DY Communication), a communication library.
- Eyes Protector: a program that helps protect your eyes from the strain of working too long at the computer.
- kshell: a Linux shell project.
- Lumia: this is the Lumia project.
- MacroDoc: an engine written in C# for composing documents from reusable pieces of structure and content.
- Mssql: this is a SQL Server project.
- NETMF_for_STM32: the CodePlex project for NETMF for STM32 (F4 Edition).
- Ocular: a free, open-source C# WYSIWYG HTML editor, similar to Adobe Dreamweaver. We are always looking for contributors, so please help us!
- PeopleCredit: a prototype web service to maintain all employee credits.
- Project Server workflow: a workflow that creates the project site for the Basic Project Plan EPT when the workflow task is approved (this is a correction to the branching workflow provided with the Project Server 2010 SDK). The workflow task is created using the PSWApprovalTask content type in the Project Server Workflow Task List.
- Quick Reminder: a program that helps you save quick reminders while working at the computer.
- Random Projects: my random projects...
- Recommendation Engine Demo: how does the Amazon recommendation work? This project visualizes the item-to-item collaborative filtering mechanism using an item-to-item matrix table. The item-to-item matrix, the vectors and the calculated data values are displayed. There are n different items, and the item recommendation can display up to m items. Different item-to-item neighborhood functions are implemented: a simple max count of seen neighbor items, cosine similarity, and the Jaccard index.
- SaveSeaTurtle: Sea Turtle.
- SharpGpx: implements an object model for reading and writing GPX (the GPS eXchange Format).
- SlimDo: a scripting language coded in C#.
- Spring: this is a Spring.NET project.
- SQL Server Quick Tools Pack: a quick tools pack for your SQL Server.
- SuLD framework: the Supported Link Discovery framework (SuLD), a tool to discover links to multiple Linked Data datasets. Finding is supported by various features like a synonym module and autocomplete.
- TQuery.Net: a .NET project.
- WAAP - World of Warcraft Auction House Analysis Project: a project in which we try to analyse prices and auctioneers on the various World of Warcraft auction houses.
- xhttp.net: a .NET implementation of the Extended Hypertext Transfer Protocol (http://www.xhttp.org), with simple service integration, full argument support including Base64 and DateTime, single and multiple asynchronous requests, data streaming, remote API creation from XHTTP service schemas, and a runtime plugin architecture.

    Read the article

  • VLOOKUP in Excel, part 2: Using VLOOKUP without a database

    - by Mark Virtue
    In a recent article, we introduced the Excel function called VLOOKUP and explained how it could be used to retrieve information from a database into a cell in a local worksheet.  In that article we mentioned that there were two uses for VLOOKUP, and only one of them dealt with querying databases.  In this article, the second and final in the VLOOKUP series, we examine this other, lesser known use for the VLOOKUP function. If you haven’t already done so, please read the first VLOOKUP article – this article will assume that many of the concepts explained in that article are already known to the reader. When working with databases, VLOOKUP is passed a “unique identifier” that serves to identify which data record we wish to find in the database (e.g. a product code or customer ID).  This unique identifier must exist in the database, otherwise VLOOKUP returns us an error.  In this article, we will examine a way of using VLOOKUP where the identifier doesn’t need to exist in the database at all.  It’s almost as if VLOOKUP can adopt a “near enough is good enough” approach to returning the data we’re looking for.  In certain circumstances, this is exactly what we need. We will illustrate this article with a real-world example – that of calculating the commissions that are generated on a set of sales figures.  We will start with a very simple scenario, and then progressively make it more complex, until the only rational solution to the problem is to use VLOOKUP.  The initial scenario in our fictitious company works like this:  If a salesperson creates more than $30,000 worth of sales in a given year, the commission they earn on those sales is 30%.  Otherwise their commission is only 20%.  So far this is a pretty simple worksheet: To use this worksheet, the salesperson enters their sales figures in cell B1, and the formula in cell B2 calculates the correct commission rate they are entitled to receive, which is used in cell B3 to calculate the total commission that the salesperson is owed (which is a simple multiplication of B1 and B2). The cell B2 contains the only interesting part of this worksheet – the formula for deciding which commission rate to use: the one below the threshold of $30,000, or the one above the threshold.  This formula makes use of the Excel function called IF.  For those readers that are not familiar with IF, it works like this: IF(condition,value if true,value if false) Where the condition is an expression that evaluates to either true or false.  In the example above, the condition is the expression B1<B5, which can be read as “Is B1 less than B5?”, or, put another way, “Are the total sales less than the threshold”.  If the answer to this question is “yes” (true), then we use the value if true parameter of the function, namely B6 in this case – the commission rate if the sales total was below the threshold.  If the answer to the question is “no” (false), then we use the value if false parameter of the function, namely B7 in this case – the commission rate if the sales total was above the threshold. As you can see, using a sales total of $20,000 gives us a commission rate of 20% in cell B2.  If we enter a value of $40,000, we get a different commission rate: So our spreadsheet is working. Let’s make it more complex.  Let’s introduce a second threshold:  If the salesperson earns more than $40,000, then their commission rate increases to 40%: Easy enough to understand in the real world, but in cell B2 our formula is getting more complex.  
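Spelled out as worksheet formulas (the cell addresses for the second threshold are illustrative, since the screenshots aren't reproduced in this excerpt), the progression looks like this:

    One threshold (sales total in B1, threshold in B5, the two rates in B6 and B7):
        =IF(B1<B5, B6, B7)

    Two thresholds (the second threshold and rate cells here are assumptions):
        =IF(B1<B5, B6, IF(B1<B8, B7, B9))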
If you look closely at the formula, you’ll see that the third parameter of the original IF function (the value if false) is now an entire IF function in its own right.  This is called a nested function (a function within a function).  It’s perfectly valid in Excel (it even works!), but it’s harder to read and understand. We’re not going to go into the nuts and bolts of how and why this works, nor will we examine the nuances of nested functions.  This is a tutorial on VLOOKUP, not on Excel in general. Anyway, it gets worse!  What about when we decide that if they earn more than $50,000 then they’re entitled to 50% commission, and if they earn more than $60,000 then they’re entitled to 60% commission? Now the formula in cell B2, while correct, has become virtually unreadable.  No-one should have to write formulae where the functions are nested four levels deep!  Surely there must be a simpler way? There certainly is.  VLOOKUP to the rescue! Let’s redesign the worksheet a bit.  We’ll keep all the same figures, but organize it in a new way, a more tabular way: Take a moment and verify for yourself that the new Rate Table works exactly the same as the series of thresholds above. Conceptually, what we’re about to do is use VLOOKUP to look up the salesperson’s sales total (from B1) in the rate table and return to us the corresponding commission rate.  Note that the salesperson may have indeed created sales that are not one of the five values in the rate table ($0, $30,000, $40,000, $50,000 or $60,000).  They may have created sales of $34,988.  It’s important to note that $34,988 does not appear in the rate table.  Let’s see if VLOOKUP can solve our problem anyway… We select cell B2 (the location we want to put our formula), and then insert the VLOOKUP function from the Formulas tab: The Function Arguments box for VLOOKUP appears.  We fill in the arguments (parameters) one by one, starting with the Lookup_value, which is, in this case, the sales total from cell B1.  We place the cursor in the Lookup_value field and then click once on cell B1: Next we need to specify to VLOOKUP what table to lookup this data in.  In this example, it’s the rate table, of course.  We place the cursor in the Table_array field, and then highlight the entire rate table – excluding the headings: Next we must specify which column in the table contains the information we want our formula to return to us.  In this case we want the commission rate, which is found in the second column in the table, so we therefore enter a 2 into the Col_index_num field: Finally we enter a value in the Range_lookup field. Important:  It is the use of this field that differentiates the two ways of using VLOOKUP.  To use VLOOKUP with a database, this final parameter, Range_lookup, must always be set to FALSE, but with this other use of VLOOKUP, we must either leave it blank or enter a value of TRUE.  When using VLOOKUP, it is vital that you make the correct choice for this final parameter. To be explicit, we will enter a value of true in the Range_lookup field.  It would also be fine to leave it blank, as this is the default value: We have completed all the parameters.  We now click the OK button, and Excel builds our VLOOKUP formula for us: If we experiment with a few different sales total amounts, we can satisfy ourselves that the formula is working. Conclusion In the “database” version of VLOOKUP, where the Range_lookup parameter is FALSE, the value passed in the first parameter (Lookup_value) must be present in the database.  
In other words, we’re looking for an exact match. But in this other use of VLOOKUP, we are not necessarily looking for an exact match.  In this case, “near enough is good enough”.  But what do we mean by “near enough”?  Let’s use an example:  When searching for a commission rate on a sales total of $34,988, our VLOOKUP formula will return us a value of 30%, which is the correct answer.  Why did it choose the row in the table containing 30%?  What, in fact, does “near enough” mean in this case?  Let’s be precise: When Range_lookup is set to TRUE (or omitted), VLOOKUP will look in column 1 and match the highest value that is not greater than the Lookup_value parameter. It’s also important to note that for this system to work, the table must be sorted in ascending order on column 1! If you would like to practice with VLOOKUP, the sample file illustrated in this article can be downloaded from here.
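To recap in formula form: with the rate table laid out as described (the range A5:B9 below is an assumption – substitute wherever your table actually lives), the entire stack of nested IFs collapses to a single approximate-match lookup:

    =VLOOKUP(B1, A5:B9, 2, TRUE)

The final TRUE (or omitted) argument is what selects the “near enough” behaviour; passing FALSE would demand the exact-match, database-style lookup covered in the first article.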

    Read the article

  • Craftsmanship is ALL that Matters

    - by Wayne Molina
    Today, I'm going to talk about a touchy subject: the notion of working in a company that doesn't use the prescribed "best practices" in its software development endeavours.  Over the years I have, using a variety of pseudonyms, asked this question on popular programming forums.  Although I always add in some minor variation of the story to avoid suspicion that it's the same person posting, the crux of the tale remains the same:

A Programmer’s Tale
A junior software developer has just started a new job at an average company, creating average line-of-business applications for internal use (the most typical scenario programmers find themselves in).  This hypothetical newbie has spent a lot of time reading up on the "theory" of software development, devouring books, blogs and screencasts from well-known and respected software developers in the community in order to broaden his knowledge and "do what the pros do".  He begins his new job, eager to apply what he's learned on a real-world project, only to discover that his new teammates don't use any of those concepts and techniques.  They hack their way through development, or in a best-case scenario use some homebrew, thrown-together semblance of a framework for their applications that follows not one of the best practices suggested by the “elite” in the software community - things like TDD (TDD as a "best practice" is the only subjective part of this post, but it's included here due to a very large following of respected developers who consider it one), the SOLID principles, well-known and venerable tools, even version control in a worst-case and truly nightmarish scenario.  Our protagonist is frustrated that he isn't doing things the "proper" way - a way he's spent personal time digesting and learning about and, more importantly, a way that some of the top developers in the industry advocate - and turns to a forum to ask the advice of his peers.

Invariably the answer I, in the guise of the concerned newbie, will receive is that A) I don't know anything and should just shut my mouth and sling code the bad way like everybody else on the team, and B) these "best practices" are a fad or a joke, and the only thing that matters is shipping software to your customers.

I am here today to say that anyone who says this, or anything like it, is not only full of crap but indicative of exactly the type of “developer” that has helped to give our industry a bad name.  Here is why:

One Who Knows Nothing, Understands Nothing
On one hand, you have the cognoscenti of the .NET development world.  Guys like James Avery, Jeremy Miller, Ayende Rahien and Rob Conery; all well-respected and noted programmers that are pretty much our version of celebrities.  These guys write blogs, books, and post videos outlining the "correct" way of writing software to make sure it not only works but is maintainable and extensible and a joy to work with.  They tout the virtues of the SOLID principles, or of using TDD/BDD, or of using a mature ORM like NHibernate, Subsonic or even Entity Framework. On the other hand, you have Joe Everyman, Lead Software Developer at Initrode Corporation - in our hypothetical story Joe is the junior developer's new boss.  Joe's been with Initrode for 10 years, starting as the company’s very first programmer and over the years building up a little fiefdom of his own until at present he’s in charge of all Initrode’s software development.  Joe writes code the same way he always has, without bothering to learn much, if anything.  
He looked at NHibernate once and found it was "too hard", so he uses a primitive implementation of the TableDataGateway pattern as a wrapper around SqlClient.SqlConnection and SqlClient.SqlCommand instead of an actual ORM (or, in a better-case scenario, has created his own ORM); the thought of using LINQ or Entity Framework or really anything other than his own hastily homebrewed solution has never occurred to him.  He doesn't understand TDD and considers “testing” to be using the .NET debugger to step through code, or simply loading up an app and entering some values to see if it works.  He doesn't really understand SOLID, and he doesn't care to.  He's worked as a programmer for years, and that's all that counts.  Right?  WRONG.

Who would you rather trust?  Someone with years of experience who writes books, creates well-known software and is akin to a celebrity, or someone with no credibility outside their own minute environment who throws around their clout and company seniority as the "proof" of their ability?  Joe Everyman may have years of experience at Initrode as a programmer, and says to do things "his way", but someone like Jeremy Miller or Ayende Rahien has years of experience at companies just like Initrode; THEY know ten times more than Joe Everyman knows or could ever hope to know, and THEY say to do things "this way".

Here's another way of thinking about it: if you wanted to get into politics and needed advice on the best way to do it, would you rather listen to the mayor of Hicktown, USA, or Barack Obama?  One is a small-time nobody while the other is very well-known and, as such, would probably have much more accurate and beneficial advice.

NOTE: The selection of Barack Obama as an example in no way, shape, or form suggests a political affiliation or political bent to this post or blog, and no political innuendo should be mistakenly read from it; the intent was merely to compare a small-time persona with a well-known persona in a non-software field.  Feel free to replace the name "Barack Obama" with any well-known Congressman, Senator or US President of your choice.

DIY Considered Harmful
I will say right now that the homebrew development environment is the WORST one for an aspiring programmer, because it relies on nothing outside its own little box - no useful skill outside of the small pond.  If you are forced to use some half-baked, homebrew ORM created by your Director of Software, you are not learning anything valuable you can take with you in the future; now, if you plan to stay at Initrode for 10 years like Joe Everyman, this is fine and dandy.  However if, like most of us, you want to advance your career outside a very narrow space, you will do more harm than good by sticking it out in an environment where you, to be frank, know better than everybody else because you are aware of alternative and, in almost all cases, better tools for the job.  A junior developer who understands why the SOLID principles are good to follow, or why TDD is beneficial, or who knows that it's better to use NHibernate/Subsonic/EF/LINQ/a well-known ORM versus some in-house one, knows better than a senior developer with 20 years' experience who doesn't understand any of that, plain and simple.  Anyone who disagrees is either a liar, or someone who, just like Joe Everyman, Lead Developer, relies on seniority and tenure rather than adapting their knowledge as things evolve. 
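To make the "homebrew gateway versus mature ORM" contrast concrete before moving on - purely a guess at the style of code being described, with an invented connection string, table and Book class - the two approaches might look like this:

    using System.Data.SqlClient;

    public class BookGateway
    {
        // Hypothetical connection string for the example.
        private const string ConnStr =
            "Data Source=.;Initial Catalog=Initrode;Integrated Security=True";

        // The hand-rolled TableDataGateway style: one method per SQL statement,
        // knowledge that transfers to no other codebase.
        public void Insert(string isbn, string title)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "INSERT INTO Book (Isbn, Title) VALUES (@isbn, @title)", conn))
            {
                cmd.Parameters.AddWithValue("@isbn", isbn);
                cmd.Parameters.AddWithValue("@title", title);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

    // The same operation through a mature ORM such as NHibernate collapses to
    // a couple of session calls, and the skill is portable between employers:
    //
    //   using (var session = sessionFactory.OpenSession())
    //   using (var tx = session.BeginTransaction())
    //   {
    //       session.Save(new Book { Isbn = isbn, Title = title });
    //       tx.Commit();
    //   }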
In many cases, the Joe Everymans of the world act this way out of fear - they cannot possibly fathom that a “junior” could know more than them; after all, they’ve spent 10 or more years in the same company, doing the same job, cranking out the same shoddy software.  And here comes a newbie who hasn’t spent 10+ years doing the same things, with a fresh and often radical take on the craft, and Joe Everyman is afraid he might have to put some real effort into his career again instead of just pointing to his 10 years of service at Initrode as “proof” that he’s good, or that he might have to learn something new to improve; in most cases the problem is that Joe Everyman, and by extension Initrode itself, has a mentality of just being “good enough”, and mediocrity is the rule of the day.

A Thorn Bush is No Place for a Phoenix
My advice is that if you work on a team where they don't use the best practices that some of the most famous developers in our field say is the "right" way to do things (and have legions of people who agree), and YOU are aware of these practices and can see why they work, then LEAVE the company.  Find a company where they DO care about quality and craftsmanship, otherwise you will never be happy.  There is no point in "dumbing" yourself down to the level of your co-workers and slinging code without care for craftsmanship.  In 95% of these situations there will be no point in bringing it to the attention of Joe Everyman, because he won't listen; he might even get upset that someone is trying to "upstage" him and fire the newbie, replacing someone with loads of untapped potential with a drone that will just nod affirmatively and grind out the tasks assigned without question.

Find a company that has people smart enough to listen to the "best and brightest", and be happy.  Do not, I repeat, DO NOT waste away in a job working for ignorant people.  At the end of the day software development IS a craft, and a level of craftsmanship is REQUIRED of any serious professional.  When you have knowledgeable people with the credibility to back it up saying one thing, and small-time people who are, to put it bluntly, nobodies in the field saying and doing something totally different because they can't comprehend it, leave the nobodies to their own devices to fade into obscurity.  Work for a company that uses REAL software engineering techniques and really cares about craftsmanship.  The biggest issue affecting our career, and the reason software development has never been the respected, white-collar career it was meant to be, is that hacks and charlatans can pass themselves off as professional programmers without following a lick of good advice from programmers much better at the craft than they are.  These modern-day snake-oil salesmen entrench themselves in companies by hoodwinking non-technical businesspeople and customers with their shoddy wares, end up in senior/lead/executive positions, and push their lack of knowledge on everybody unfortunate enough to work with/for/under them, crushing any dissent or voices of reason and change under their tyrannical heel and leaving behind a trail of dismayed and, often, unemployed junior developers who were made examples of to keep up the facade and avoid the shadow of doubt being cast upon them.

To sum this up another way: if you surround yourself with learned people, you will learn.  Surround yourself with ignorant people who can't, as the saying goes, see the forest for the trees, and you'll learn nothing of any real value.  
There is more to software development than just writing code, and the end goal should not be just "shipping software"; it should be shipping software that is extensible, maintainable, and above all else software whose creation has broadened your knowledge in some capacity, even if a minor one.  An eager newbie who knows theory and thirsts for knowledge can easily be moulded and taught the advanced topics, but the same can't be said of someone who only cares about the finish line.  This industry needs more people espousing the benefits of software craftsmanship and proper software engineering techniques, and fewer Joe Everymans who are unwilling to adapt or foster new ways of thinking.

Conclusion - I Cast “Protection from Fire”
I am fairly certain this post will spark some controversy and might even invite the flames.  Please keep in mind these are opinions and nothing more.  A little healthy rant and subsequent flamewar can be good for the soul once in a while.  To paraphrase The Godfather: it helps to get rid of the bad blood.

    Read the article

< Previous Page | 39 40 41 42 43 44 45 46 47  | Next Page >