Search Results

Search found 12089 results on 484 pages for 'rule of three'.


  • Does HTML 5 Make “Rich vs. Reach” a False Choice?

    - by andrewbrust
    The competition between the Web and proprietary rich platforms, including Windows, Mac OS, iPhone/iPad, Adobe’s Flash/AIR and Microsoft’s Silverlight, is not new. But with the emergence of HTML 5 and imminent support for it in the next release of the major Web browsers, the battle is heating up. And with the announcements made Wednesday at Google's I/O conference, it's getting kicked up yet another notch. The impact of this platform battle on companies in the media and advertising world, and the developers who serve them, is significant. The most prominent question is whether video and rich media online will shift towards pure HTML and away from plug-ins like Flash and Silverlight. In fact, certain features in HTML 5 make it suitable for development for line of business applications as well, further threatening those plug-in technologies. So what's the deal? Is this real or hype? To answer that question, I've done my own research into HTML 5's features and talked to several media-focused, New York area developers to get their opinions. I present my findings to you in this post. Before bearing down into HTML 5 specifics and practitioners’ quotes, let's set the context. To understand what HTML 5 can do, take a look at this video of Sports Illustrated’s HTML 5 prototype. This should start to get you bought into the idea that HTML 5 could be a game-changer. Next, if you happen to have installed the beta version of Google's Chrome 5 browser, take a look at the page linked to below, and in that page, click on any of the game thumbnails to see what's possible, without a plug-in, in this brave new world. (Note, although the instructions for each game tell you to press the A key to start, press the Z key instead.). Here's the link: http://www.kesiev.com/akihabara As an adjunct to what's enabled by HTML 5, consider the various transforms that are part of CSS 3. If you're running Safari as your browser, the following link will showcase this live; if not, you'll see a bitmap that will give you an idea of what's possible: http://webkit.org/blog/386/3d-transforms Are you starting to get the picture (literally)? What has up until now required browser plug-ins and other patches to HTML, most typically Flash, will soon be renderable, natively, in all major browsers. Moreover, it's looking likely that developers will be able to deliver such content and experiences in these browsers using one base of markup and script code (using straight JavaScript and/or jQuery), without resorting to browser-specific code and workarounds. If you're skeptical of this, I wouldn't blame you, especially with respect to Microsoft's Internet Explorer. However, i can tell you with confidence that even Microsoft is dedicated to full-on HTML 5 support in version 9 of that browser, which is currently under development. So what’s new in HTML 5, specifically, that makes sites like this possible?  The specification documents go into deep detail, and there’s no sense in rehashing them here, but a summary is probably in order.   Here is a non-authoritative, but useful, list of the major new feature areas in HTML 5: 2D drawing capabilities and 3D transforms. 2D drawing instructions can be embedded statically into a Web page; application interactivity and animation can be achieved through script.  As mentioned above, 3D transforms are technically part of version 3 of the CSS (Cascading Style Sheets) spec, rather than HTML 5, but they can nonetheless be thought of as part of the bundle.  
They allow for rendering of 3D images and animations that, together with 2D drawing, make HTML-based games much more feasible than they are presently, as the links above demonstrate. Embedded audio and video. A media player can appear directly in a rendered Web page, using HTML markup and no plug-ins. Alternately, player controls can be hidden and the content can play automatically. Major enhancements to form-based input. This includes such things as specification of required fields, embedding of text “hints” into a control, limiting valid input on a field to dates, email addresses or a list of values.  There’s more to this, but the gist is that line-of-business applications, with complicated input and data validation, are supported directly Offline caching, local storage and client-side SQL database. These facilities allow Web applications to function more like native apps, even if no internet connection is available. User-defined data. Data (or metadata – data about data) can easily be embedded statically and/or retrieved and updated with Javascript code. This avoids having to embed that data in a separate file, or within script code. Taken together, these features position HTML to compete with, and perhaps overtake, Adobe’s Flash/AIR (and Microsoft’s Silverlight) as a viable Web platform for media, RIAs (rich internet applications – apps that function more like desktop software than Web sites) and interactive Web content, including games. What do players in the media world think about this?  From the embedded video above, we know what Sports Illustrated (and, therefore, Time Warner) think.  Hulu, the major Internet site for broadcast TV content, is on record as saying HTML5 video does not pass muster with them, at least not yet.  YouTube, on the other hand, already has an experimental HTML 5-based version of their site.  TechCrunch has reported that NetFlix is flirting with HTML 5 too, especially as it pertains to embedded browsers in TV-based devices.  And the New York Times’ Web site now embeds some video clips without resorting to Flash.  They have to – otherwise iPhone, iPod Touch and iPad users couldn’t see them in the Mobile Safari browser. What do media-focused developers think about all this?  I talked to several to get their opinions. Michael Pinto is CEO and Founder of Very Memorable Design whose primary focus has been to help marketing directors get traction online.  The firm’s client roster includes the likes Time, Inc., Scholastic and PBS.  Pinto predicts that “More and more microsites that were done entirely in Flash will be done more and more using jQuery. I can also see slideshows and video now being done without Flash. However if you needed to create a game or highly interactive activity Flash would still be the way to go for the web.” A dissenting view comes from Jesse Erlbaum, CEO of The Erlbaum Group, LLC, which serves numerous clients in the magazine publishing sector.  When I asked Erlbaum whether he thought HTML 5 and jQuery/JavaScript would steal significant market share from Flash, he responded “Not at all!  In particular, not for media and advertising customers!  These sectors are not generally in the business of making highly functional applications, which is the one place where HTML5/jQuery/etc really shines.” Ironically, Pinto’s firm is a heavy user of Flash for its projects and Erlbaum’s develops atop the “LAMP” (Linux, Apache, MySQL and PHP/Perl) stack.  For whatever reason, each firm seems to see the other’s toolset as a more viable choice.  
But both agree that the developer tool story around HTML 5 is deficient. Pinto explains “What’s lost with [HTML 5 and Javascript] techniques is that there isn’t a single widely favored, easy-to-use tool of choice for authoring. So with Flash you can get up and running right away and not worry about what is different from one browser to the next.” Erlbaum agrees, saying: “HTML5/Javascript lacks a sophisticated integrated development environment (IDE) which is an essential part of Flash. If what someone is trying to make is primarily animation, it's a waste of time…to do this in Javascript. It can be done much more easily in Flash, and with greater cross-browser compatibility and consistency due to the ubiquity of Flash.” Adobe (maker of Flash since its 2005 acquisition of Macromedia) likely agrees. And for better or worse, they’ve decided to address this shortcoming of HTML 5, even at the risk of diminishing their Flash platform. Yesterday Adobe announced that their hugely popular Dreamweaver Web design authoring tool would directly support HTML 5 and CSS 3 development. In fact, the Adobe Dreamweaver CS5 HTML5 Pack is downloadable now from Adobe Labs. Maybe Adobe is bowing to pressure from ardent Web professionals like Scott Kellum, Lead Designer at Channel V Media, a digital and offline branding firm serving the media and marketing sectors, among others. Kellum told me that HTML 5 “…will definitely move people away from Flash. It has many of the same functionalities with faster load times and better accessibility. HTML5 will help Flash as well: with the new caching methods you can now even run Flash apps offline.” Although all three Web developers I interviewed would agree that Flash is still required for more sophisticated applications, Kellum seems to have put his finger on why HTML 5 may nonetheless dominate. In his view, much of the Web development out there has little need for high-end capabilities: “Most people want to add a little punch to a navigation bar or some video and now you can get the biggest bang for your buck with HTML5, CSS3 and Javascript.” I’ve already mentioned that Google’s ongoing I/O conference, at the Moscone West center in San Francisco, is driving the HTML 5 news cycle, big time. And Google made many announcements of their own, including the open sourcing of their VP8 video codec, new enterprise-oriented capabilities for its App Engine cloud offering, and the creation of the Chrome Web Store, which the company says will make it easier to find and “install” Web applications, in a fashion similar to the way users procure native apps on various mobile platforms. HTML 5 looks to be disruptive, especially to the media world. And even if the technology ends up disappointing, the chatter around it alone is causing big changes in the technology world. If the richness it promises delivers, then magazine publishers and non-text digital advertisers may indeed have a platform for creating compelling content that loads quickly, is standards-based and will render identically in (the newest versions of) all major Web browsers. Can this development in the digital arena save the titans of the print world? I can’t predict, but it’s going to be fun to watch, and the competitive innovation from all players in both industries will likely be immense.

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via the libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and Solaris kernel module sha1. The optimized code is used automatically on systems with a x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include Nehalem, Westmere, Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the SHA-1 algorithm—NIST recommends using at least SHA-256, SHA-1 is still widely used and will be with us for awhile more. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is used heavily though in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's what each round operates: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at-a-time. These 32 bits is used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically for round i it XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round is a series of AND, XOR, and ROTATE-LEFT operators on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. The 32-bit result of the final round, W[79], is the SHA-1 checksum. Optimization: Vectorization The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds, because of step 2 above, computing round i depends on the results of round i-3, W[i-3], one can vectorize 3 rounds at-a-time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with: W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1 Initialize W[i] as follows: W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2 That means that 6 rounds could be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture, although the microprocessor has to support vectorization to use it, and exploits one of the weaknesses of SHA-1. Optimization: SSSE3 Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. 
The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:
    movdqa  W_minus_04, W_TMP
    pxor    W_minus_28, W          // W equals W[i-32:i-29] before XOR
                                   // W = W[i-32:i-29] ^ W[i-28:i-25]
    palignr $8, W_minus_08, W_TMP  // W_TMP = W[i-6:i-3], combined from
                                   // W[i-4:i-1] and W[i-8:i-5] vectors
    pxor    W_minus_16, W          // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
    pxor    W_TMP, W               // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
    movdqa  W, W_TMP               // 4 dwords in W are rotated left by 2
    psrld   $30, W                 // rotate left by 2: W = (W >> 30) | (W << 2)
    pslld   $2, W_TMP
    por     W, W_TMP
    movdqa  W_TMP, W               // four new W values W[i:i+3] are now calculated
    paddd   (K_XMM), W_TMP         // adding 4 current round's values of K
    movdqa  W_TMP, (WK(i))         // storing for downstream GPR instructions to read
A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds looks like this (each "R" represents 1 round of SHA-1 computation):
    RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR
With vectorization, 4 rounds can be computed in parallel:
    RRRRRRRRRRRRRRRRRRRR
    RRRRRRRRRRRRRRRRRRRR
    RRRRRRRRRRRRRRRRRRRR
    RRRRRRRRRRRRRRRRRRRR
Optimization: AVX
The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:
    pxor    W_minus_16, W          // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
    pxor    W_TMP, W               // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
With AVX they can be combined into one instruction:
    vpxor   W_minus_16, W, W_TMP   // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more?
Optimization: Solaris-specific
In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code: Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1); this size is well suited for ZFS file systems, but helps for other file systems as well. Optimized the encode functions, which byte swap the input and output data, to copy/byte-swap 4 or 8 bytes at a time instead of 1 byte at a time. Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command); previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-). Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bits).
Performance
I measured performance on a Sun Ultra 27 (which has a Nehalem-class Intel Xeon W3570 microprocessor @ 3.2GHz). Turbo mode is disabled for consistent performance measurement.
Graphs are better than words and numbers, so here they are: The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half-GByte file in swapfs (/tmp), and execution time decreased from 1.35 seconds to 0.98 seconds. The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128 byte buffer with 10,000 iterations; the results show throughput increased from 320,000 to 416,000 operations per second. Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64 Kbyte buffer with 100 iterations; the results show that for 1 kernel thread, throughput increased from 410 to 600 MBytes/second, and for 8 kernel threads, throughput increased from 1540 to 1940 MBytes/second.
Availability
This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:
    nehalem $ isainfo -v
    64-bit amd64 applications
            sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
    32-bit i386 applications
            sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
If the output also shows "avx", Solaris executes the even more optimized 3-operand AVX instructions for SHA-1 mentioned above:
    sandybridge $ isainfo -v
    64-bit amd64 applications
            avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
    32-bit i386 applications
            avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine and execute the correctly tuned code for that microprocessor.
Summary
The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporated a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release.
References
"Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010): the source for the SHA-1 optimizations used in Solaris.
"SHA-1", Wikipedia: a good overview of SHA-1.
FIPS 180-1: the SHA-1 standard (FIPS, 1995).
NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006).
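To make the message-schedule recurrence described above concrete, here is a minimal scalar C# sketch. It is my own illustration, not the Solaris or Intel source code; the method names are mine. The point to notice is that the factored form reads nothing newer than W[i-6], which is what lets the vectorized code above produce several W values per instruction.
    static class Sha1Schedule
    {
        static uint RotateLeft(uint x, int n)
        {
            return (x << n) | (x >> (32 - n));
        }

        // w[0..15] holds the current 512-bit block; this fills w[16..79].
        internal static void Expand(uint[] w)
        {
            for (int i = 16; i < 80; i++)
            {
                if (i < 32)
                {
                    // Standard FIPS 180-1 recurrence: depends on W[i-3], so at
                    // most 3 new W values are independent of each other.
                    w[i] = RotateLeft(w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16], 1);
                }
                else
                {
                    // Factored form from the article: reads nothing newer than
                    // W[i-6], so 4 or more W values can be produced per vector op.
                    w[i] = RotateLeft(w[i - 6] ^ w[i - 16] ^ w[i - 28] ^ w[i - 32], 2);
                }
            }
        }
    }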

    Read the article

  • Microsoft Channel 9 Interviews Mei Liang to Introduce Sample Browser Extension for Visual Studio 2012 and 2010

    - by Jialiang
    This morning, Microsoft Channel 9 interviewed Mei Liang - Group Manager of Microsoft All-In-One Code Framework - to introduce the newest Sample Browser extension for Visual Studio 2012 &2010.   This extension provides a way for developers to search and download more than 4500 code samples from within Visual Studio, including over 700 Windows 8 samples and more than 1000 All-In-One Code Framework customer-driven code samples. Mei shows us not only the extension, but also the standalone version of the Sample Browser.   http://channel9.msdn.com/Shows/Visual-Studio-Toolbox/Sample-Browser-Visual-Studio-Extension   Microsoft All-In-One Code Framework, working in close partnership with the Visual Studio product team and MSDN Samples Gallery, developed the Sample Browser extension for both Visual Studio 2012 and Visual Studio 2010.  As an effort to evolve the code sample use experience and improve developers' productivity, the Sample Browser allows programmers to search, download and open over 4500 code samples from within Visual Studio with just a few simple clicks.  If no existing code sample can meet the needs, developers can even request a code sample easily from Microsoft thanks to the free “Sample Request Service” offered by Microsoft All-In-One Code Framework.  Through innovations, the teams hope to put the power of tens of thousands of code samples at developers’ fingertips. In short 3 months, the Sample Browser Visual Studio Extension has been installed by 100K global users.  It is also selected as one of the six most highly regarded and commonly used tools for Visual Studio that will make your programming experience feel like never before.   Got to love the All-In-One Code Framework team! You guys know this is THE go to source for code samples. Get this extension and you'll never need to leave VS2012 (well except for bathroom trips, but that's TMI anyway... ;) Read More... From: Greg Duncan (Author of CoolThingOfTheDay) 9/6/2011 12:00 AM The one software design pattern that I have used in just about every application I’ve written is “cut-and-paste,” so the new “Sample Browser” – read sample as a noun not an adjective – is a great boon to my productivity. Read More... From: Jim O'Neil (Microsoft Developer Evangelist) 9/28/2011 12:00 AM Install: http://aka.ms/samplebrowservsx Microsoft All-In-One Code Framework also offers the standalone version of Sample Browser.   The standalone version is particularly useful to Visual Studio Express edition or Visual Studio 2008 users, who cannot install the Sample Browser Visual Studio extension.   From Grassroots’ Passion for Developers to the Innovation of Sample Browser This Sample Browser has come a very long way improving the code sample use experience.  The history can be traced back to a grass-root innovation three years ago.   In early 2009, a few MSDN forum support engineers observed that lots of developers were struggling to work in Visual Studio without adequate code samples. Programming tasks seem harder than they should be when you only read through the documentation.  Just a couple of lines of sample code could answer a lot of questions.   They had a brilliant idea: What if we produce code samples based on developers’ frequently asked programming tasks in forums, social networks and support incidents, and then aggregate all our sample code in a one-stop library to benefit developers?  And what if developers can request code samples directly from Microsoft, free of charge?  
This small grassroots group at Microsoft devoted their nights and weekends to prototyping such a customer-driven code sample library.  This simple idea eventually turned into “Microsoft All-In-One Code Framework”, a.k.a. OneCode.  With support from more and more passionate developers at Microsoft and the leaders in the Community and Online Support team and Microsoft Commercial Technical Services (CTS), the idea has become a continually growing library with over 1000 customer-driven code samples covering almost all Microsoft development technologies.  These code samples, originating from developers’ common pains and needs, should be able to help many developers.  However, if developers cannot easily discover the code samples, the effort would still be in vain.  So in early 2010, the team started the idea of the Sample Browser to ease the discovery of and access to these samples.  In just two months, the first version of Sample Browser was finished and released by a passionate developer.  It was a very simple application, supporting only basic offline sample search.  Users had to download the whole 100MB sample package containing all samples first, and run the Sample Browser to search locally.  Though developers could not search and download samples on-demand, this simple application laid a solid foundation for the team’s continuous innovations in the sample browsing experience. In 2011, MSDN Samples Gallery had a big refresh.  The online sample experience was brought to a new level thanks to its PM Steven Wilssens and the gallery team’s effort.  The Microsoft All-In-One Code Framework team saw the opportunity to realize the “on-demand” sample search and download feature with the new gallery.  The two teams formed a strong partnership to upload all the customer-driven code samples to MSDN Samples Gallery, and released the new version of Sample Browser to support “on-demand” sample downloading in April 2011.  Mei Liang, the Group Manager of Microsoft All-In-One Code Framework, was interviewed by Channel 9 to demo the Sample Browser.  Customers love the effort and the innovation!!  This can be clearly seen from the user comments on the publishing page.  It was very encouraging to the All-In-One Code Framework team. The team continues innovating and evolving the Sample Browser.  This time they partnered with the Visual Studio product team, and integrated the sample browsing experience into the latest Visual Studio 2012.  The newly released Sample Browser Visual Studio extension makes good use of Visual Studio 2012 IDE features such as the new Quick Launch bar, the code editor, the toolbar and menus to offer easy access to thousands of code samples from within the development environment.  The Visual Studio Senior Program Manager Lead Anthony Cangialosi, Program Manager Murali Krishna Hosabettu Kamalesha, MSDN Samples Gallery PM Steven Wilssens, and Visual Studio Senior Escalation Engineer Ed Dore shared lots of insightful suggestions with the team.  Thanks to the brilliant cross-group collaboration inside Microsoft, tens of new features including “Local Language Support” and “Favorite Samples”, as well as a face-lifted user interface, were added to further enhance the user experience. Since the new Sample Browser Visual Studio extension was released, it has received over 100 thousand downloads and five-star ratings.  A customer told the team that he had officially fallen in LOVE with Microsoft All-In-One Code Framework.   The Sample Browser Innovation for Developers Never Stops! 
The teams would never stop improving the Sample Browser for developers’ easier lives.   The Microsoft All-In-One Code Framework, Visual Studio and MSDN Samples Gallery teams are working closely to develop the next version of Sample Browser.  Here are the key functions in development or in discussion.  We hope to learn your feedback of the effort.  You can submit your suggestions to the official Visual Studio UserVoice site.  We look forward to hearing from you! 1) Offline Sample Search This is one of the top feature requests that we have received for Sample Browser.   The Sample Browser will support the offline search mode so that developers can search downloaded code samples when they do not have internet access.  This is particularly useful to developers in Enterprises with strict proxy settings. 2) Code Snippet Support and Visual Studio Editor Integration Today, the Sample Browser supports downloading and opening sample project.   However, when developers are searching for code samples, a better user experience would be to see the code snippets in the search result first.  Developers can quickly decide if the code snippet is relevant.   They can also drag and drop the code snippet into the Visual Studio Editor to solve some simple programming tasks.  If developers want to learn more about the sample, they can then choose to download the sample project and open it in Visual Studio. 3) Enterprise Sample Sharing and Searching Large enterprises have many code samples for their own internal tools and APIs that are not appropriate to be shared publicly in MSDN Samples Gallery.   In that case, today’s Sample Browser and MSDN Samples Gallery cannot help these Enterprise developers.  The idea is to create a Code Sample Repository in TFS, and provide an additional Visual Studio extension for Enterprise developers to quickly share code samples to TFS.  The Sample Browser can be configured to connect to the TFS Code Sample Repository to search for and download code samples.  This would potentially enable the Enterprise developers to be more productive. 4) Windows Store Sample Browser With the upcoming release of Windows RT and Microsoft Surface, developers are facing a completely new world of application platform.   Not like laptop, people would often use Microsoft Surface in commute and in travel.  Internet may not be available.  Today’s Visual Studio cannot be installed and run on Windows RT, however, our enthusiastic developers would hope to spend every minute on code.  They love code!   The idea is to create a Windows Store version of Sample Browser. Search and download samples from the online Samples Gallery when the user has internet access. Browse the sample code files and learn the sample documentation of downloaded samples with or without internet access.   In addition to the "browse” function, the Sample Browser could further support “bookmark”, “learning notes”, “code review”, and “quick social sharing". Make full use of the new touch and Windows Store App UI to give developers a new “relaxing” code browsing and learning experience, anytime, anywhere. With Windows Store Sample Browser, developers can enjoy A new relaxing and enjoyable experience for developers to learn code samples You do not have to sit in front of desk and formally open Visual Studio to read code samples.  Many developers get sub-health due to staying in front of desk for a very long time.  
With Windows RT, Microsoft Surface and this Windows Store Sample Browser combining with the online MSDN Samples Gallery, developers can sit in a sofa, relaxingly hold the tablet and enjoy to learn their beloved sample code with detailed documentation. Anytime, anywhere Whether you have internet access or not, whether you are at home, in office, or in commute/airplane, developers can always easily access and browse the sample code. Lightweight and fast Particularly for learning a small sample project, the Windows Store Sample Browser would be more lightweight and faster to open and browse the sample code. Please submit your feedback and suggestion to Visual Studio UserVoice.  We look forward to hearing from you and deliver a better and better sample use experience.  Happy Coding!   Special Thanks to People working behind the latest release of Sample Browser Visual Studio Extension and the great partnerships!

    Read the article

  • Windows 8 Will be Here Tomorrow; but Should Silverlight be Gone Today?

    - by andrewbrust
    The software industry lives within an interesting paradox. IT in the enterprise moves slowly and cautiously, upgrading only when safe and necessary.  IT interests intentionally live in the past.  On the other hand, developers, and Independent Software Vendors (ISVs) not only want to use the latest and greatest technologies, but this constituency prides itself on gauging tech’s future, and basing its present-day strategy upon it.  Normally, we as an industry manage this paradox with a shrug of the shoulder and musings along the lines of “it takes all kinds.”  Different subcultures have different tendencies.  So be it. Microsoft, with its Windows operating system (OS), can’t take such a laissez-faire view of the world though.  Redmond relies on IT to deploy Windows and (at the very least) influence its procurement, but it also relies on developers to build software for Windows, especially software that has a dependency on features in new versions of the OS.  It must indulge and nourish developers’ fetish for an early birthing of the next generation of software, even as it acknowledges the IT reality that the next wave will arrive on-schedule in Redmond and will travel very slowly to end users. With the move to Windows 8, and the corresponding shift in application development models, this paradox is certainly in place. On the one hand, the next version of Windows is widely expected sometime in 2012, and its full-scale deployment will likely push into 2014 or even later.  Meanwhile, there’s a technology that runs on today’s Windows 7, will continue to run in the desktop mode of Windows 8 (the next version’s codename), and provides absolutely the best architectural bridge to the Windows 8 Metro-style application development stack.  That technology is Silverlight.  And given what we now know about Windows 8, one might think, as I do, that Microsoft ecosystem developers should be flocking to it. But because developers are trying to get a jump on the future, and since many of them believe the impending v5.0 release of Silverlight will be the technology’s last, not everyone is flocking to it; in fact some are fleeing from it.  Is this sensible?  Is it not unprecedented?  What options does it lead to?  What’s the right way to think about the situation? Is v5.0 really the last major version of the technology called Silverlight?  We don’t know.  But Scott Guthrie, the “father” and champion of the technology, left the Developer Division of Microsoft months ago to work on the Windows Azure team, and he took his people with him.  John Papa, who was a very influential Redmond-based evangelist for Silverlight (and is a Visual Studio Magazine author), left Microsoft completely.  About a year ago, when initial suspicion of Silverlight’s demise reached significant magnitude, Papa interviewed Guthrie on video and their discussion served to dispel developers’ fears; but now they’ve moved on. So read into that what you will and let’s suppose, for the sake of argument, speculation that Silverlight’s days of major revision and iteration are over now is correct.  Let’s assume the shine and glimmer has dimmed.  Let’s assume that any Silverlight application written today, and that therefore any investment of financial and human resources made in Silverlight development today, is destined for rework and extra investment in a few years, if the application’s platform needs to stay current. Is this really so different from any technology investment we make?  
Every framework, language, runtime and operating system is subject to change, to improvement, to flux and, yes, to obsolescence.  What differs from project to project, is how near-term that obsolescence is and how disruptive the change will be.  The shift from .NET 1.1. to 2.0 was incremental.  Some of the further changes were too.  But the switch from Windows Forms to WPF was major, and the change from ASP.NET Web Services (asmx) to Windows Communication Foundation (WCF) was downright fundamental. Meanwhile, the transition to the .NET development model for Windows 8 Metro-style applications is actually quite gentle.  The finer points of this subject are covered nicely in Magenic’s excellent white paper “Assessing the Windows 8 Development Platform.” As the authors of that paper (including Rocky Lhotka)  point out, Silverlight code won’t just “port” to Windows 8.  And, no, Silverlight user interfaces won’t either; Metro always supports XAML, but that relationship is not commutative.  But the concepts, the syntax, the architecture and developers’ skills map from Silverlight to Windows 8 Metro and the Windows Runtime (WinRT) very nicely.  That’s not a coincidence.  It’s not an accident.  This is a protected transition.  It’s not a slap in the face. There are few things that are unnerving about this transition, which make it seem markedly different from others: The assumed end of the road for Silverlight is something many think they can see.  Instead of being ignorant of the technology’s expiration date, we believe we know it.  If ignorance is bliss, it would seem our situation lacks it. The new technology involving WinRT and Metro involves a name change from Silverlight. .NET, which underlies both Silverlight and the XAML approach to WinRT development, has just about reached 10 years of age.  That’s equivalent to 80 in human years, or so many fear. My take is that the combination of these three factors has contributed to what for many is a psychologically compelling case that Silverlight should be abandoned today and HTML 5 (the agnostic kind, not the Windows RT variety) should be embraced in its stead.  I understand the logic behind that.  I appreciate the preemptive, proactive, vigilant conscientiousness involved in its calculus.  But for a great many scenarios, I don’t agree with it.  HTML 5 clients, no matter how impressive their interactivity and the emulation of native application interfaces they present may be, are still second-class clients.  They are getting better, especially when hardware acceleration and fast processors are involved.  But they still lag.  They still feel like they’re emulating something, like they’re prototypes, like they’re not comfortable in their own skins.  They are based on compromise, and they feel compromised too. HTML 5/JavaScript development tools are getting better, and will get better still, but they are not as productive as tools for other environments, like Flash, like Silverlight or even more primitive tooling for iOS or Android.  HTML’s roots as a document markup language, rather than an application interface, create a disconnect that impedes productivity.  I do not necessarily think that problem is insurmountable, but it’s here today. If you’re building line-of-business applications, you need a first-class client and you need productivity.  Lack of productivity increases your costs and worsens your backlog.  A second class client will erode user satisfaction, which is never good.  
Worse yet, this erosion will be inconspicuous, rather than easily identified and diagnosed, because the inferiority of an HTML 5 client over a native one is hard to identify and, notably, doing so at this juncture in the industry is unpopular.  Why would you fault a technology that everyone believes is revolutionary?  Instead, user disenchantment will remain latent and yet will add to the malaise caused by slower development. If you’re an ISV and you’re coveting the reach of running multi-platform, it’s a different story.  You’ve likely wanted to move to HTML 5 already, and the uncertainty around Silverlight may be the only remaining momentum or pretext you need to make the shift.  You’re deploying many more copies of your application than a line-of-business developer is anyway; this makes the economic hit from lower productivity less impactful, and the wider potential installed base might even make it profitable. But no matter who you are, it’s important to take stock of the situation and do it accurately.  Continued, but merely incremental changes in a development model lead to conservatism and general lack of innovation in the underlying platform.  Periods of stability and equilibrium are necessary, but permanence in that equilibrium leads to loss of platform relevance, market share and utility.  Arguably, that’s already happened to Windows.  The change Windows 8 brings is necessary and overdue.  The marked changes in using .NET if we’re to build applications for the new OS are inevitable.  We will ultimately benefit from the change, and what we can reasonably hope for in the interim is a migration path for our code and skills that is navigable, logical and conceptually comfortable. That path takes us to a place called WinRT, rather than a place called Silverlight.  But considering everything that is changing for the good, the number of disruptive changes is impressively minimal.  The name may be changing, and there may even be some significance to that in terms of Microsoft’s internal management of products and technologies.  But as the consumer, you should care about the ingredients, not the name.  Turkish coffee and Greek coffee are much the same. Although you’ll find plenty of interested parties who will find the names significant, drinkers of the beverage should enjoy either one.  It’s all coffee, it’s all sweet, and you can tell your fortune from the grounds that are left at the end.  Back on the software side, it’s all XAML, and C# or VB .NET, and you can make your fortune from the product that comes out at the end.  Coffee drinkers wouldn’t switch to tea.  Why should XAML developers switch to HTML?

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #038

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and have listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 CASE Statement in ORDER BY Clause – ORDER BY using Variable This article was written at the request of the Application Development Team Leader of my company. His team encountered code where the application was preparing a string for the ORDER BY clause of the SELECT statement. The application was passing this string as a variable to a Stored Procedure (SP), and the SP was using EXEC to execute the SQL string. This is not good for performance, as the Stored Procedure has to recompile every time due to EXEC. sp_executesql can do the same task, but it still does not give the best performance. SSMS – View/Send Query Results to Text/Grid/File Results to Text – CTRL + T Results to Grid – CTRL + D Results to File – CTRL + SHIFT + F 2008 Introduction to SPARSE Columns Part 2 I wrote about Introduction to SPARSE Columns Part 1. Let us understand the concept of the SPARSE column in more detail. I suggest you read the first part before continuing with this article. All SPARSE columns are stored as one XML column in the database. Let us see some of the advantages and disadvantages of SPARSE columns. Deferred Name Resolution How come an SP can be created successfully when the table name is incorrect, but cannot be created when an incorrect column is used? 2009 Backup Timeline and Understanding of Database Restore Process in Full Recovery Model In general, database backups in the full recovery model are taken as three different kinds of backup files: Full Database Backup, Differential Database Backup, and Log Backup. Restore Sequence and Understanding NORECOVERY and RECOVERY While doing a RESTORE operation, if you are restoring database files, always use the NORECOVERY option, as that will keep the database in a state where more backup files can be restored. This also keeps the database offline to prevent any changes, which could create integrity issues. Once all backup files are restored, run the RESTORE command with the RECOVERY option to bring the database online and make it operational. Four Different Ways to Find Recovery Model for Database Perhaps the best thing about the technical domain is that most things can be done in more than one way. It is always useful to know about the various methods of performing a single task. Two Methods to Retrieve List of Primary Keys and Foreign Keys of Database When the Information Schema is used, we will not be able to discern between a primary key and a foreign key; we will have both keys together. In the case of the sys schema, we can query the data in our preferred way and can join this table to another table, which can retrieve additional data from the same. Get Last Running Query Based on SPID SPID returns the session ID of the current user process. The acronym SPID comes from the name of its earlier version, Server Process ID. 2010 SELECT * FROM dual – Dual Equivalent Dual is a table that is created by Oracle together with the data dictionary. It consists of exactly one column named “dummy”, and one record. The value of that record is X. You can check the content of the DUAL table using the following syntax: SELECT * FROM dual Identifying Statistics Used by Query Someone asked this question in my training class on query optimization and performance tuning. 
“Can I know which statistics were used by my query?” 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 14 of 31 What are the basic functions for master, msdb, model, tempdb and resource databases? What is the Maximum Number of Index per Table? Explain Few of the New Features of SQL Server 2008 Management Studio Explain IntelliSense for Query Editing Explain MultiServer Query Explain Query Editor Regions Explain Object Explorer Enhancements Explain Activity Monitors SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 15 of 31 What is Service Broker? Where are SQL server Usernames and Passwords Stored in the SQL server? What is Policy Management? What is Database Mirroring? What are Sparse Columns? What does TOP Operator Do? What is CTE? What is MERGE Statement? What is Filtered Index? Which are the New Data Types Introduced in SQL SERVER 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 16 of 31 What are the Advantages of Using CTE? How can we Rewrite Sub-Queries into Simple Select Statements or with Joins? What is CLR? What are Synonyms? What is LINQ? What are Isolation Levels? What is Use of EXCEPT Clause? What is XPath? What is NOLOCK? What is the Difference between Update Lock and Exclusive Lock? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 17 of 31 How will you Handle Error in SQL SERVER 2008? What is RAISEERROR? What is RAISEERROR? How to Rebuild the Master Database? What is the XML Datatype? What is Data Compression? What is Use of DBCC Commands? How to Copy the Tables, Schema and Views from one SQL Server to Another? How to Find Tables without Indexes? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 18 of 31 How to Copy Data from One Table to Another Table? What is Catalog Views? What is PIVOT and UNPIVOT? What is a Filestream? What is SQLCMD? What do you mean by TABLESAMPLE? What is ROW_NUMBER()? What are Ranking Functions? What is Change Data Capture (CDC) in SQL Server 2008? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 19 of 31 How can I Track the Changes or Identify the Latest Insert-Update-Delete from a Table? What is the CPU Pressure? How can I Get Data from a Database on Another Server? What is the Bookmark Lookup and RID Lookup? What is Difference between ROLLBACK IMMEDIATE and WITH NO_WAIT during ALTER DATABASE? What is Difference between GETDATE and SYSDATETIME in SQL Server 2008? How can I Check that whether Automatic Statistic Update is Enabled or not? How to Find Index Size for Each Index on Table? What is the Difference between Seek Predicate and Predicate? What are Basics of Policy Management? What are the Advantages of Policy Management? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 20 of 31 What are Policy Management Terms? What is the ‘FILLFACTOR’? Where in MS SQL Server is ’100’ equal to ‘0’? What are Points to Remember while Using the FILLFACTOR Argument? What is a ROLLUP Clause? What are Various Limitations of the Views? What is a Covered index? When I Delete any Data from a Table, does the SQL Server reduce the size of that table? What are Wait Types? How to Stop Log File Growing too Big? If any Stored Procedure is Encrypted, then can we see its definition in Activity Monitor? 
2012 Example of Width Sensitive and Width Insensitive Collation Width Sensitive Collation: when a single-byte character (half-width) and the same character represented as a double-byte character (full-width) do not compare as equal, the collation is width sensitive. In this example we have one table with two columns: one column has a width-sensitive collation and the second column has a width-insensitive collation. Find Column Used in Stored Procedure – Search Stored Procedure for Column Name A very interesting conversation about how to find a column used in a stored procedure. There are two different characters in the story, and both are having a conversation about how to find a column in a stored procedure. Here is the two-part story: Part 1 | Part 2 SQL SERVER – 2012 Functions – FORMAT() and CONCAT() – An Interesting Usage Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video In simple words, in many cases a database moves from one place to another place. It is not always possible to back up and restore databases. There are cases when only part of the database (with schema and data) has to be moved. In this video we learn that we can easily generate a script for schema and data and move it from one server to another. INFORMATION_SCHEMA.COLUMNS and Value Character Maximum Length -1 I often see the value -1 in the CHARACTER_MAXIMUM_LENGTH column of the INFORMATION_SCHEMA.COLUMNS table. I understand that the length of any column can be between 0 and some large number, but I do not get it when I see a negative value (i.e. -1). Any insight on this subject? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • HTG Explains: Do Non-Windows Platforms Like Mac, Android, iOS, and Linux Get Viruses?

    - by Chris Hoffman
    Viruses and other types of malware seem largely confined to Windows in the real world. Even on a Windows 8 PC, you can still get infected with malware. But how vulnerable are other operating systems to malware? When we say “viruses,” we’re actually talking about malware in general. There’s more to malware than just viruses, although the word virus is often used to talk about malware in general. Why Are All the Viruses For Windows? Not all of the malware out there is for Windows, but most of it is. We’ve tried to cover why Windows has the most viruses in the past. Windows’ popularity is definitely a big factor, but there are other reasons, too. Historically, Windows was never designed for security in the way that UNIX-like platforms were — and every popular operating system that’s not Windows is based on UNIX. Windows also has a culture of installing software by searching the web and downloading it from websites, whereas other platforms have app stores and Linux has centralized software installation from a secure source in the form of its package managers. Do Macs Get Viruses? The vast majority of malware is designed for Windows systems and Macs don’t get Windows malware. While Mac malware is much more rare, Macs are definitely not immune to malware. They can be infected by malware written specifically for Macs, and such malware does exist. At one point, over 650,000 Macs were infected with the Flashback Trojan. [Source] It infected Macs through the Java browser plugin, which is a security nightmare on every platform. Macs no longer include Java by default. Apple also has locked down Macs in other ways. Three things in particular help: Mac App Store: Rather than getting desktop programs from the web and possibly downloading malware, as inexperienced users might on Windows, they can get their applications from a secure place. It’s similar to a smartphone app store or even a Linux package manager. Gatekeeper: Current releases of Mac OS X use Gatekeeper, which only allows programs to run if they’re signed by an approved developer or if they’re from the Mac App Store. This can be disabled by geeks who need to run unsigned software, but it acts as additional protection for typical users. XProtect: Macs also have a built-in technology known as XProtect, or File Quarantine. This feature acts as a blacklist, preventing known-malicious programs from running. It functions similarly to Windows antivirus programs, but works in the background and checks applications you download. Mac malware isn’t coming out nearly as quick as Windows malware, so it’s easier for Apple to keep up. Macs are certainly not immune to all malware, and someone going out of their way to download pirated applications and disable security features may find themselves infected. But Macs are much less at risk of malware in the real world. Android is Vulnerable to Malware, Right? Android malware does exist and companies that produce Android security software would love to sell you their Android antivirus apps. But that isn’t the full picture. By default, Android devices are configured to only install apps from Google Play. They also benefit from antimalware scanning — Google Play itself scans apps for malware. You could disable this protection and go outside Google Play, getting apps from elsewhere (“sideloading”). Google will still help you if you do this, asking if you want to scan your sideloaded apps for malware when you try to install them. 
In China, where many, many Android devices are in use, there is no Google Play Store. Chinese Android users don’t benefit from Google’s antimalware scanning and have to get their apps from third-party app stores, which may contain infected copies of apps. The majority of Android malware comes from outside Google Play. The scary malware statistics you see primarily include users who get apps from outside Google Play, whether it’s pirating infected apps or acquiring them from untrustworthy app stores. As long as you get your apps from Google Play — or even another secure source, like the Amazon App Store — your Android phone or tablet should be secure. What About iPads and iPhones? Apple’s iOS operating system, used on its iPads, iPhones, and iPod Touches, is more locked down than even Macs and Android devices. iPad and iPhone users are forced to get their apps from Apple’s App Store. Apple is more demanding of developers than Google is — while anyone can upload an app to Google Play and have it available instantly while Google does some automated scanning, getting an app onto Apple’s App Store involves a manual review of that app by an Apple employee. The locked-down environment makes it much more difficult for malware to exist. Even if a malicious application could be installed, it wouldn’t be able to monitor what you typed into your browser and capture your online-banking information without exploiting a deeper system vulnerability. Of course, iOS devices aren’t perfect either. Researchers have proven it’s possible to create malicious apps and sneak them past the app store review process. [Source] However, if a malicious app was discovered, Apple could pull it from the store and immediately uninstall it from all devices. Google and Microsoft have this same ability with Android’s Google Play and Windows Store for new Windows 8-style apps. Does Linux Get Viruses? Malware authors don’t tend to target Linux desktops, as so few average users use them. Linux desktop users are more likely to be geeks that won’t fall for obvious tricks. As with Macs, Linux users get most of their programs from a single place — the package manager — rather than downloading them from websites. Linux also can’t run Windows software natively, so Windows viruses just can’t run. Linux desktop malware is extremely rare, but it does exist. The recent “Hand of Thief” Trojan supports a variety of Linux distributions and desktop environments, running in the background and stealing online banking information. It doesn’t have a good way if infecting Linux systems, though — you’d have to download it from a website or receive it as an email attachment and run the Trojan. [Source] This just confirms how important it is to only run trusted software on any platform, even supposedly secure ones. What About Chromebooks? Chromebooks are locked down laptops that only run the Chrome web browser and some bits around it. We’re not really aware of any form of Chrome OS malware. A Chromebook’s sandbox helps protect it against malware, but it also helps that Chromebooks aren’t very common yet. It would still be possible to infect a Chromebook, if only by tricking a user into installing a malicious browser extension from outside the Chrome web store. The malicious browser extension could run in the background, steal your passwords and online banking credentials, and send it over the web. 
Such malware could even run on Windows, Mac, and Linux versions of Chrome, but it would appear in the Extensions list, would require the appropriate permissions, and you’d have to agree to install it manually. And Windows RT? Microsoft’s Windows RT only runs desktop programs written by Microsoft. Users can only install “Windows 8-style apps” from the Windows Store. This means that Windows RT devices are as locked down as an iPad — an attacker would have to get a malicious app into the store and trick users into installing it, or possibly find a security vulnerability that allowed them to bypass the protection. Malware is definitely at its worst on Windows. This would probably be true even if Windows had a shining security record and a history of being as secure as other operating systems, but you can definitely avoid a lot of malware just by not using Windows. Of course, no platform is a perfect malware-free environment. You should exercise some basic precautions everywhere. Even if malware were eliminated, we’d have to deal with social-engineering attacks like phishing emails asking for credit card numbers. Image Credit: stuartpilbrow on Flickr, Kansir on Flickr

    Read the article

  • A Simple Approach For Presenting With Code Samples

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/07/31/a-simple-approach-for-presenting-with-code-samples.aspxI’ve been getting ready for a presentation and have been struggling a bit with the best way to show and execute code samples. I don’t present often (hardly ever), but when I do I like the presentation to have a lot of succinct and executable code snippets to help illustrate the points that I’m making. Depending on what the presentation is about, I might just want to build an entire sample application that I would run during the presentation. In other cases, however, building a full-blown application might not really be the best way to present the code. The presentation I’m working on now is for an open source utility library for dealing with dates and times. I could have probably cooked up a sample app for accepting date and time input and then contrived ways in which it could put the library through its paces, but I had trouble coming up with one app that would illustrate all of the various features of the library that I wanted to highlight. I finally decided that what I really needed was an approach that met the following criteria: Simple: I didn’t want the user interface or overall architecture of a sample application to serve as a distraction from the demonstration of the syntax of the library that the presentation is about. I want to be able to present small bits of code that are focused on accomplishing a single task. Several of these examples will look similar, and that’s OK. I want each sample to “stand on its own” and not rely much on external classes or methods (other than the library that is being presented, of course). “Debuggable” (not really a word, I know): I want to be able to easily run the sample with the debugger attached in Visual Studio should I want to step through any bits of code and show what certain values might be at run time. As far as I know this rules out something like LinqPad, though using LinqPad to present code samples like this is actually a very interesting idea that I might explore another time. Flexible and Selectable: I’m going to have lots of code samples to show, and I want to be able to just package them all up into a single project or module and have an easy way to just run the sample that I want on-demand. Since I’m presenting on a .NET framework library, one of the simplest ways in which I could execute some code samples would be to just create a Console application and use Console.WriteLine to output the pertinent info at run time. This gives me a “no frills” harness from which to run my code samples, and I just hit ‘F5’ to run it with the debugger. This satisfies numbers 1 and 2 from my list of criteria above, but item 3 is a little harder. By default, just running a console application is going to execute the ‘main’ method, and then terminate the program after all code is executed. If I want to have several different code samples and run them one at a time, it would be cumbersome to keep swapping the code I want in and out of the ‘main’ method of the console application. What I really want is an easy way to keep the console app running throughout the whole presentation and just have it run the samples I want when I want. I could setup a simple Windows Forms or WPF desktop application with buttons for the different samples, but then I’m getting away from my first criteria of keeping things as simple as possible. 
Infinite Loops To The Rescue I found a way to have a simple console application satisfy all three of my requirements above, and it involves using an infinite loop and some Console.ReadLine calls that will give the user an opportunity to break out and exit the program. (All programs that need to run until they are closed explicitly (or crash!) likely use similar constructs behind the scenes. Create a new Windows Forms project, look in the ‘Program.cs’ that gets generated, and then check out the docs for the Application.Run method that it calls.) Here’s how the main method might look: 1: static void Main(string[] args) 2: { 3: do 4: { 5: Console.Write("Enter command or 'exit' to quit: > "); 6: var command = Console.ReadLine(); 7: if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase)) 8: { 9: Console.WriteLine("Quitting."); 10: break; 11: } 12: 13: } while (true); 14: } The idea here is the app prompts me for the command I want to run, or I can type in ‘exit’ to break out of the loop and let the application close. The only trick now is to create a set of commands that map to each of the code samples that I’m going to want to run. Each sample is already encapsulated in a single public method in a separate class, so I could just write a big switch statement or create a hashtable/dictionary that maps command text to an Action that will invoke the proper method, but why re-invent the wheel? CLAP For Your Own Presentation I’ve blogged about the CLAP library before, and it turns out that it’s a great fit for satisfying criteria #3 from my list above. CLAP lets you decorate methods in a class with an attribute and then easily invoke those methods from within a console application. CLAP was designed to take the arguments passed into the console app from the command line and parse them to determine which method to run and what arguments to pass to that method, but there’s no reason you can’t re-purpose it to accept command input from within the infinite loop defined above and invoke the corresponding method. Here’s how you might define a couple of different methods to contain two different code samples that you want to run during your presentation: 1: public static class CodeSamples 2: { 3: [Verb(Aliases="one")] 4: public static void SampleOne() 5: { 6: Console.WriteLine("This is sample 1"); 7: } 8:   9: [Verb(Aliases="two")] 10: public static void SampleTwo() 11: { 12: Console.WriteLine("This is sample 2"); 13: } 14: } A couple of things to note about the sample above: I’m using static methods. You don’t actually need to use static methods with CLAP, but the syntax ends up being a bit simpler and static methods happen to lend themselves well to the “one self-contained method per code sample” approach that I want to use. The methods are decorated with a ‘Verb’ attribute. This tells CLAP that they are eligible targets for commands. The “Aliases” argument lets me give them short and easy-to-remember aliases that can be used to invoke them. By default, CLAP just uses the full method name as the command name, but with aliases you can simplify the usage a bit. I’m not using any parameters. CLAP’s main feature is its ability to parse out arguments from a command line invocation of a console application and automatically pass them in as parameters to the target methods. My code samples don’t need parameters, and honestly having them would complicate giving the presentation, so this is a good thing.
You could use this same approach to invoke methods with parameters, but you’d have a couple of things to figure out. When you invoke a .NET application from the command line, Windows will parse the arguments and pass them in as a string array (called ‘args’ in the boilerplate console project Program.cs). The parsing that gets done here is smart enough to deal with things like treating strings in double quotes as one argument, and you’d have to re-create that within your infinite loop if you wanted to use parameters. I plan on either submitting a pull request to CLAP to add this capability or maybe just making a small utility class/extension method to do it and posting that here in the future. So I now have a simple class with static methods to contain my code samples, and an infinite loop in my ‘main’ method that can accept text commands. Wiring this all up together is pretty easy: 1: static void Main(string[] args) 2: { 3: do 4: { 5: try 6: { 7: Console.Write("Enter command or 'exit' to quit: > "); 8: var command = Console.ReadLine(); 9: if ((command ?? string.Empty).Equals("exit", StringComparison.OrdinalIgnoreCase)) 10: { 11: Console.WriteLine("Quitting."); 12: break; 13: } 14:   15: Parser.Run<CodeSamples>(new[] { command }); 16: Console.WriteLine("---------------------------------------------------------"); 17: } 18: catch (Exception ex) 19: { 20: Console.Error.WriteLine("Error: " + ex.Message); 21: } 22:   23: } while (true); 24: } Note that I’m now passing the ‘CodeSamples’ class into the CLAP ‘Parser.Run’ as a type argument. This tells CLAP to inspect that class for methods that might be able to handle the commands passed in. I’m also throwing in a little “----“ style line separator and some basic error handling (because I happen to know that some of the samples are going to throw exceptions for demonstration purposes) and I’m good to go. Now during my presentation I can just have the console application running the whole time with the debugger attached and just type in the alias of the code sample method that I want to run when I want to run it.
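A note on the parameter-parsing caveat mentioned above: if you did want to pass arguments to your sample methods from the prompt, you would need something that splits the typed command the way Windows splits a command line, treating double-quoted strings as a single argument. This isn't part of the original wire-up — it's just a minimal sketch of what such a splitter might look like, and the CommandLineSplitter/Split names are my own invention rather than anything CLAP provides:

using System.Collections.Generic;
using System.Text;

public static class CommandLineSplitter
{
    // Splits a raw command string into arguments, treating double-quoted
    // sections as one argument. The quote characters themselves are dropped.
    public static string[] Split(string input)
    {
        var args = new List<string>();
        var current = new StringBuilder();
        var inQuotes = false;

        foreach (var c in input ?? string.Empty)
        {
            if (c == '"')
            {
                // Toggle quoted mode.
                inQuotes = !inQuotes;
            }
            else if (char.IsWhiteSpace(c) && !inQuotes)
            {
                // Whitespace outside of quotes ends the current argument.
                if (current.Length > 0)
                {
                    args.Add(current.ToString());
                    current.Clear();
                }
            }
            else
            {
                current.Append(c);
            }
        }

        if (current.Length > 0)
        {
            args.Add(current.ToString());
        }

        return args.ToArray();
    }
}

With something like that in place, the Parser.Run call shown above could take the full argument list instead of a single-element array — for example, Parser.Run<CodeSamples>(CommandLineSplitter.Split(command)); — though, as noted, this capability may eventually land in CLAP itself via the pull request mentioned earlier.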

    Read the article

  • CodePlex Daily Summary for Tuesday, October 01, 2013

    CodePlex Daily Summary for Tuesday, October 01, 2013Popular ReleasesDotNetNuke® Form and List: 06.00.06: DotNetNuke Form and List 06.00.06 Changes to 6.0.6•Add in Sql to remove 'text on row' setting for UserDefinedTable to make SQL Azure compatible. •Add new azureCompatible element to manifest. •Added a fix for importing templates. Changes to 6.0.2•Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 •Change: Data is now stored in nvarchar(max) instead of ntext Changes to 6.0.1•Scripts now compatible with SQL Azure. Changes to 6.0.0•Icons are shown in module action b...BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. added -nosize switch to turn off the size- and gzip-calculations done after minification. removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). In previous builds, the net35 and net20...AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release Version 7.0930September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - Instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can b...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanksVisual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0Random searcher i pochodne: Generatorek playlisty: Generuje playlisty w formacie .m3u. Na razie beta z bety - ale juz dziala i mozna uzywac.sb0t v.5: sb0t 5.15: Fixed bug in join filter. Fixed bug in pm blocking. Added new Crypto and Entities static classes to scripting. Updated the default node list.Trace Reader for Microsoft Dynamics CRM: Trace Reader (1.2013.9.29): Initial releaseAudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download ends. word lis...HD-Trailers.NET Downloader: HD-Trailer.Net Downloader v 2.1.5: This started out as an effort to improve the search for the corr3ct IMDB page for the movie. I think I have done that here. I have run about 200 movies and the correct movie was identified in all cases including some entries that were problematic in the past. I also swatted several bugs that popped up under special circumstances and resulted in exceptions. This version should be quite a bit better than previous versions. 
Let me know if there are any issues.Wsus Package Publisher: Release v1.3.1309.28: Fix a bug, where WPP crash when running on a computer where Windows was installed in another language than Fr, En or De, and launching the Update Creation Wizard. Fix a bug, where WPP crash if some Multi-Thread job are launch with more than 64 items. Add a button to abort "Install This Update" wizard. Allow WPP to remember which columns are shown last time. Make URL clickable on the Update Information Tab. Add a new feature, when Double-Clicking on an update, the default action exec...Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...AcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...C# Intellisense for Notepad++: Release v1.0.6.0: Added support for classless scripts To avoid the DLLs getting locked by OS use MSI file for the installation.SimpleExcelReportMaker: Serm 0.02: SourceCode and SampleMagick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: - ToBitmap method of MagickImage returns a png instead of a bmp. - Changed the value for full transparency from 255(Q8)/65535(Q16) to 0. - MagickColor now uses floats instead of Byte/UInt16.Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of Movie Folders, and files, there has been some refinement done. As well as I would like to introduce Blu-Ray movie folder support, for Pre-Frodo and Frodo onwards versions of XBMC. To start with, Context menu option for renaming movies, now has three sub options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. Blu Ray Fo...FFXIV Crafting Simulator: Crafting Simulator 2.3: - Major refactoring of the code behind. 
- Added a current durability and a current CP textbox.DNN CMS Platform: 07.01.02: Major HighlightsAdded the ability to manage the Vanity URL prefix Added the ability to filter members in the member directory by role Fixed issue where the user could inadvertently click the login button multiple times Fixed issues where core classes could not be used in out of process cache provider Fixed issue where profile visibility submenu was not displayed correctly Fixed issue where the member directory was broken when Convert URL to lowercase setting was enabled Fixed issu...New Projects.netProject: .Net Project 3TINafs-m: Secure file storageASP.NET dhtmlxChart Class: Create different types of charts and render them to webforms.ASP.NET dhtmxGantt Class: Add tasks Add dependencies Use lighboxBestCodeTrainer: Project BestCode is for Training. Black Dragon Online Shop: An online shop with basic functionalities implemented with ASP.NET MVC as a team project in Telerik Academy 2012/2013.Central fovea -X: ????: 1.???????????Cell,??RGB,HSL,X,Y 2.???????? 3.???????????? 4.????????,???????? 5.????????CorvusSKK: SKK-like Japanese Input Method for Windowsdaneshjoo: ?? ??? ???????DBA Toolbox: A tool for the production DBA. A place to store all those scripts, tools, references and queries that make your job as a DBA easier.Devpad IDE: Basic integrated development environment.DXUT for Direct3D 11: Latest version of the Microsoft DXUT framework for Direct3D 11 Win32 desktop applications formerly in the now legacy DirectX SDK.Dynamics CRM Messaging Integration: This project shows how to implement a simple custom messaging solution using Microsoft Dynamics CRM and based on its standard development components.Kockafölde: Kliens program.Medusa's Ebay: Teamwork assignment @ TelerikAcademy 2012/2013. Project is for creating simple (or more complicated) interneTeamwork assignment @ TelerikAcademy 2012/2013.Nauplius.PAS: Nauplius.PAS allows end users to leverage the PowerPoint Conversion Services within SharePoint 2013 to convert PowerPoint documents from one format to another.PayBox payment gateway provider for NB_Store: PayBox payment provider for NB_StorePet passport: This is a teamwork projectProject Euler solutions: Solutions to the ProjectEuler problemsRoboSubSync: Fixes an unsynchronized subtitle in any language according to an already synced subtitle in english, with the help of some advanced algorithm magic... ShopBook: Shopbooksliders: jQuery slidersSQLite to JSON: A simple program to convert SQLite data to JSON.Super64: This is a Commodore 64 emulator that is determined to have extremely unnecessary accuracy.TeamProject1: This is just Demo ProjectXnaHelper: A collection of tools for making 2D XNA Games

    Read the article

  • Trace flags - TF 1117

    - by Damian
I had a session about trace flags this year at the SQL Day 2014 conference, which was held in Wroclaw at the end of April. The session topic is important to most DBAs, and the reason I did it was that I sometimes forget about various trace flags :). So I decided to prepare a presentation, but I think it is a good idea to write posts about trace flags, too. Let's start then - today I will describe the TF 1117. I assume that we all know how to set up a TF using startup parameters, the registry, or at the session or query level. I will always mention whether a trace flag is local or global to make sure we know how to use it. Why do we need this trace flag? Let’s create a test database first. This is quite an ordinary database: it has two data files (4 MB each) and a 1 MB log file. The data files grow by 1 MB and the log file grows by 10%: USE [master] GO CREATE DATABASE [TF1117]  ON  PRIMARY ( NAME = N'TF1117',      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117.mdf' ,      SIZE = 4096KB ,      MAXSIZE = UNLIMITED,      FILEGROWTH = 1024KB ), ( NAME = N'TF1117_1',      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_1.ndf' ,      SIZE = 4096KB ,      MAXSIZE = UNLIMITED,      FILEGROWTH = 1024KB )  LOG ON ( NAME = N'TF1117_log',      FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_log.ldf' ,      SIZE = 1024KB ,      MAXSIZE = 2048GB ,      FILEGROWTH = 10% ) GO Without TF 1117 turned on, the data files don’t all grow at once. When the first file is full, SQL Server expands it, but the other file is not expanded until it is full. Why is that so important? The SQL Server proportional fill algorithm will direct new extent allocations to the file with the most available space, so new extents will be written to the file that was just expanded. When TF 1117 is enabled it will cause all files to auto-grow by their specified increment. That means all files will have the same percentage of free space, so we still have the benefit of evenly distributed IO. TF 1117 is a global flag, so it affects all databases on the instance. Of course, if a filegroup contains only one file the TF does not have any effect on it. Now let’s do a simple test. First let’s create a table in which every row will fit on a single page. The table definition is pretty simple as it has two integer columns and one character column of fixed size 8000 bytes: create table TF1117Tab (      col1 int,      col2 int,      col3 char (8000) ) go Now I load some data into the table to make sure that one of the data files must grow: declare @i int select @i = 1 while (@i < 800) begin       insert into TF1117Tab  values (@i, @i+1000, 'hello')        select @i= @i + 1 end I can check the actual file size in the sys.database_files DMV: SELECT name, (size*8)/1024 'Size in MB' FROM sys.database_files  GO   As you can see, only the first data file was expanded and the other still has its initial size:   name                  Size in MB --------------------- ----------- TF1117                5 TF1117_log            1 TF1117_1              4 There are also other ways of looking at file autogrow events. 
One possibility is to create an Extended Events session and the other is to look into the default trace file:     DECLARE @path NVARCHAR(260); SELECT    @path = REVERSE(SUBSTRING(REVERSE([path]),          CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc' FROM    sys.traces WHERE   is_default = 1; SELECT    DatabaseName,                 [FileName],                 SPID,                 Duration,                 StartTime,                 EndTime,                 FileType =                         CASE EventClass                                     WHEN 92 THEN 'Data'                                    WHEN 93 THEN 'Log'             END FROM sys.fn_trace_gettable(@path, DEFAULT) WHERE   EventClass IN (92,93) AND StartTime >'2014-07-12' AND DatabaseName = N'TF1117' ORDER BY   StartTime DESC;   After running the query I can see that the file was expanded and how long the process took, which might be useful from a performance perspective.    Now it’s time to turn on trace flag 1117. DBCC TRACEON(1117)   I dropped the database and recreated it once again. Then I ran the queries and observed the results. After loading the records I see that both files were evenly expanded: name                  Size in MB --------------------- ----------- TF1117                5 TF1117_log            1 TF1117_1              5 I also found information in the default trace. The query returned three rows. The last one is connected to my first experiment, when the TF was turned off. The other two rows show that the first file was expanded by 1 MB and right after that operation the second file was expanded, too. This is what this TF is all about :)

    Read the article

  • Going Paperless

    - by Jesse
One year ago I came to work for a company where the entire development team is 100% “remote”; we’re spread over 3 time zones and each of us works from home. This seems to be an increasingly popular way for people to work and there are many articles and blog posts out there enumerating the advantages and disadvantages of working this way. I had read a lot about telecommuting before accepting this job and felt as if I had a pretty decent idea of what I was getting into, but I’ve encountered a few things over the past year that I did not expect. Among the most surprising by-products of working from home for me has been a dramatic reduction in the amount of paper that I use on a weekly basis. Hoarding In The Workplace Prior to my current telecommute job I worked in what most would consider pretty traditional office environments. I sat in cubicles furnished with an enormous plastic(ish) modular desk, had a mediocre (at best) PC workstation, and had ready access to a seemingly endless supply of legal pads, pens, staplers and paper clips. The ready access to paper, countless conference room meetings, and abundance of available surface area on my desk and in drawers created a perfect storm for wasting paper. I brought a pad of paper with me to every meeting I ever attended, scrawled some brief notes, and then tore that sheet off to keep next to my keyboard to follow up on any needed action items. Once my immediate need for the notes was fulfilled, that sheet would get shuffled off into a corner of my desk or filed away in a drawer “just in case”. I would guess that for all of the notes that I ever filed away, I might have actually had to dig up and refer to 2% of them (and that’s probably being very generous). That said, on those rare occasions that I did have to dig something up from old notes, it was usually pretty important and I ended up being very glad that I saved them. It was only when I would leave a job or move desks that I would finally gather all those notes together and take them to the shredding bin to be disposed of. When I left my last job the amount of paper I had accumulated over my three years there was absurd, and I knew coworkers who had substance-abuse caliber paper-wasting addictions that made my bad habit look like nail-biting in comparison. A Product Of My Environment I always hated using all of this paper, but simply couldn’t bring myself to stop. It would look bad if I showed up to an important conference room meeting without a pad of paper. What if someone said something profound! Plus, everyone else always brought paper with them. If you saw someone walking down the hallway with a pad of paper in hand you knew they must be on their way to a conference room meeting. Some people even had fancy-looking portfolio notebook sheaths that gave their legal pads all the prestige of a briefcase. No one ever worried about running out of fresh paper because there was an endless supply, and there certainly was no shortage of places to store and file used paper. In short, the traditional office was set up for using tons and tons of paper; it’s baked into the culture there. For that reason, it didn’t take long for me to kick the paper habit once I started working from home. In my home office, desk and drawer space are at a premium. I don’t have the budget (or the tolerance) for huge modular office furniture in my spare bedroom. I also no longer have access to a bottomless pit of office supplies stockpiled in cabinets and closets. 
If I want to use some paper, I have to go out and buy it. Finally (and most importantly), all of the meetings that I have to attend these days are “virtual”. We use instant messaging, VOIP, video conferencing, and e-mail to communicate with each other. All I need to take notes during a meeting is my computer, which I happen to be sitting right in front of all day. I don’t have any hard numbers for this, but my gut feeling is that I actually take a lot more notes now than I ever did when I worked in an office. The big difference is I don’t have to use any paper to do so. This makes it far easier to keep important information safe and organized. The Right Tool For The Job When I first started working from home I tried to find a single application that would fill the gap left by the pen and paper that I always had at my desk when I worked in an office. Well, there are no silver bullets and I’ve evolved my approach over time to try and find the best tool for the job at hand. Here’s a quick summary of how I take notes and keep everything organized. Notepad++ – This is the first application I turn to when I feel like there’s some bit of information that I need to write down and save. I use Launchy, so opening Notepad++ and creating a new file only takes a few keystrokes. If I find that the information I’m trying to get down requires a more sophisticated application I escalate as needed. The Desktop – By default, I save every file or other bit of information to the desktop. Anyone who has ever had to fix their parents’ computer before knows that this is a dangerous game (any file my mother has ever worked on is saved directly to the desktop and rarely moves anywhere else). I agree that storing things on the desktop isn’t a great long-term approach to keeping organized, which is why I treat my desktop a bit like my e-mail inbox. I strive to keep both empty (or as close to empty as I possibly can). If something is on my desktop, it means that it’s something relevant to a task or project that I’m currently working on. About once a week I take things that I’m no longer working on and put them into my ‘Notes’ folder. The ‘Notes’ Folder – As I work on a task, I tend to accumulate multiple files associated with that task. For example, I might have a bit of SQL that I’m working on to gather data for a new report, a quick C# method that I came up with but am not yet ready to commit to source control, a bulleted list of to-do items in a .txt file, etc. If the desktop starts to get too cluttered, I create a new sub-folder in my ‘Notes’ folder. Each sub-folder’s name is the current date followed by a brief description of the task or project. Then all files related to that task or project go into that sub-folder. By using the date as the first part of the folder name, these folders are automatically sorted in reverse chronological order. This means that things I worked on recently will generally be near the top of the list. Using the built-in Windows search functionality I now have a pretty quick and easy way to try and find something that I worked on a week ago or six months ago. Dropbox – Dropbox is a free service that lets you store up to 2GB of files “in the cloud” and have those files synced to all of the different computers that you use. My ‘Notes’ folder lives in Dropbox, meaning that its contents are constantly backed up and are always available to me regardless of which computer I’m using. 
They also have a pretty decent iPhone application that lets you browse and view all of the files that you have stored there. The free 2GB edition is probably enough for just storing notes, but I also pay $99/year for the 50GB storage upgrade and keep all of my music, e-books, pictures, and documents in Dropbox. It’s a fantastic service and I highly recommend it. Evernote – I use Evernote mostly to organize information that I access on a fairly regular basis. For example, my Evernote account has a running grocery shopping list, recipes that my wife and I use a lot, and contact information for people I contact infrequently enough that I don’t want to keep them in my phone. I know some people who keep nearly everything in Evernote, but there’s something about it that I find a bit clunky, so I tend to use it sparingly. Google Tasks – One of my biggest paper-wasting habits was keeping a running task-list next to my computer at work. Every morning I would sit down, look at my task list, cross off what was done and add new tasks that I thought of during my morning commute. This usually resulted in having to re-copy the task list onto a fresh sheet of paper when I was done. I still keep a running task list at my desk, but I’ve started using Google Tasks instead. This is a dead-simple web-based application for quickly adding, deleting, and organizing tasks in a simple checklist style. You can quickly move tasks up and down on the list (which I use for prioritizing), and even create sub-tasks for breaking down larger tasks into smaller pieces. Balsamiq Mockups – This is a simple and lightweight tool for creating drawings of user interfaces. It’s great for sketching out a new feature, brainstorming the layout of an interface, or even drawing up a quick sequence diagram. I’m terrible at drawing, so Balsamiq Mockups not only lets me create sketches that other people can actually understand, but it’s also handy because you can upload a sketch to a common location for other team members to access. I can honestly say that using these tools (and having limited resources at home) has led me to cut my paper usage down to virtually nothing. If I ever were to return to a traditional office workplace (hopefully never!) I’d try to employ as many of these applications and techniques as I could to keep paper usage low. I feel far less cluttered and far better organized now.

    Read the article

  • A WPF Image Button

    - by psheriff
    Instead of a normal button with words, sometimes you want a button that is just graphical. Yes, you can put an Image control in the Content of a normal Button control, but you still have the button outline, and trying to change the style can be rather difficult. Instead I like creating a user control that simulates a button, but just accepts an image. Figure 1 shows an example of three of these custom user controls to represent minimize, maximize and close buttons for a borderless window. Notice the highlighted image button has a gray rectangle around it. You will learn how to highlight using the VisualStateManager in this blog post.Figure 1: Creating a custom user control for things like image buttons gives you complete control over the look and feel.I would suggest you read my previous blog post on creating a custom Button user control as that is a good primer for what I am going to expand upon in this blog post. You can find this blog post at http://weblogs.asp.net/psheriff/archive/2012/08/10/create-your-own-wpf-button-user-controls.aspx.The User ControlThe XAML for this image button user control contains just a few controls, plus a Visual State Manager. The basic outline of the user control is shown below:<Border Grid.Row="0"        Name="borMain"        Style="{StaticResource pdsaButtonImageBorderStyle}"        MouseEnter="borMain_MouseEnter"        MouseLeave="borMain_MouseLeave"        MouseLeftButtonDown="borMain_MouseLeftButtonDown">  <VisualStateManager.VisualStateGroups>  ... MORE XAML HERE ...  </VisualStateManager.VisualStateGroups>  <Image Style="{StaticResource pdsaButtonImageImageStyle}"         Visibility="{Binding Path=Visibility}"         Source="{Binding Path=ImageUri}"         ToolTip="{Binding Path=ToolTip}" /></Border>There is a Border control named borMain and a single Image control in this user control. That is all that is needed to display the buttons shown in Figure 1. The definition for this user control is in a DLL named PDSA.WPF. The Style definitions for both the Border and the Image controls are contained in a resource dictionary names PDSAButtonStyles.xaml. Using a resource dictionary allows you to create a few different resource dictionaries, each with a different theme for the buttons.The Visual State ManagerTo display the highlight around the button as your mouse moves over the control, you will need to add a Visual State Manager group. Two different states are needed; MouseEnter and MouseLeave. In the MouseEnter you create a ColorAnimation to modify the BorderBrush color of the Border control. You specify the color to animate as “DarkGray”. You set the duration to less than a second. The TargetName of this storyboard is the name of the Border control “borMain” and since we are specifying a single color, you need to set the TargetProperty to “BorderBrush.Color”. You do not need any storyboard for the MouseLeave state. 
Leaving this VisualState empty tells the Visual State Manager to put everything back the way it was before the MouseEnter event.<VisualStateManager.VisualStateGroups>  <VisualStateGroup Name="MouseStates">    <VisualState Name="MouseEnter">      <Storyboard>        <ColorAnimation             To="DarkGray"            Duration="0:0:00.1"            Storyboard.TargetName="borMain"            Storyboard.TargetProperty="BorderBrush.Color" />      </Storyboard>    </VisualState>    <VisualState Name="MouseLeave" />  </VisualStateGroup></VisualStateManager.VisualStateGroups>Writing the Mouse EventsTo trigger the Visual State Manager to run its storyboard in response to the specified event, you need to respond to the MouseEnter event on the Border control. In the code behind for this event call the GoToElementState() method of the VisualStateManager class exposed by the user control. To this method you will pass in the target element (“borMain”) and the state (“MouseEnter”). The VisualStateManager will then run the storyboard contained within the defined state in the XAML.private void borMain_MouseEnter(object sender,  MouseEventArgs e){  VisualStateManager.GoToElementState(borMain,    "MouseEnter", true);}You also need to respond to the MouseLeave event. In this event you call the VisualStateManager as well, but specify “MouseLeave” as the state to go to.private void borMain_MouseLeave(object sender, MouseEventArgs e){  VisualStateManager.GoToElementState(borMain,     "MouseLeave", true);}The Resource DictionaryBelow is the definition of the PDSAButtonStyles.xaml resource dictionary file contained in the PDSA.WPF DLL. This dictionary can be used as the default look and feel for any image button control you add to a window. <ResourceDictionary  ... >  <!-- ************************* -->  <!-- ** Image Button Styles ** -->  <!-- ************************* -->  <!-- Image/Text Button Border -->  <Style TargetType="Border"         x:Key="pdsaButtonImageBorderStyle">    <Setter Property="Margin"            Value="4" />    <Setter Property="Padding"            Value="2" />    <Setter Property="BorderBrush"            Value="Transparent" />    <Setter Property="BorderThickness"            Value="1" />    <Setter Property="VerticalAlignment"            Value="Top" />    <Setter Property="HorizontalAlignment"            Value="Left" />    <Setter Property="Background"            Value="Transparent" />  </Style>  <!-- Image Button -->  <Style TargetType="Image"         x:Key="pdsaButtonImageImageStyle">    <Setter Property="Width"            Value="40" />    <Setter Property="Margin"            Value="6" />    <Setter Property="VerticalAlignment"            Value="Top" />    <Setter Property="HorizontalAlignment"            Value="Left" />  </Style></ResourceDictionary>Using the Button ControlOnce you make a reference to the PDSA.WPF DLL from your WPF application you will see the “PDSAucButtonImage” control appear in your Toolbox. Drag and drop the button onto a Window or User Control in your application. 
I have not referenced the PDSAButtonStyles.xaml file within the control itself so you do need to add a reference to this resource dictionary somewhere in your application such as in the App.xaml.<Application.Resources>  <ResourceDictionary>    <ResourceDictionary.MergedDictionaries>      <ResourceDictionary         Source="/PDSA.WPF;component/PDSAButtonStyles.xaml" />    </ResourceDictionary.MergedDictionaries>  </ResourceDictionary></Application.Resources>This will give your buttons a default look and feel unless you override that dictionary on a specific Window or User Control or on an individual button. After you have given a global style to your application and you drag your image button onto a window, the following will appear in your XAML window.<my:PDSAucButtonImage ... />There will be some other attributes set on the above XAML, but you simply need to set the x:Name, the ToolTip and ImageUri properties. You will also want to respond to the Click event procedure in order to associate an action with clicking on this button. In the sample code you download for this blog post you will find the declaration of the Minimize button to be the following:<my:PDSAucButtonImage       x:Name="btnMinimize"       Click="btnMinimize_Click"       ToolTip="Minimize Application"       ImageUri="/PDSA.WPF;component/Images/Minus.png" />The ImageUri property is a dependency property in the PDSAucButtonImage user control. The x:Name and the ToolTip we get for free. You have to create the Click event procedure yourself. This is also created in the PDSAucButtonImage user control as follows:private void borMain_MouseLeftButtonDown(object sender,  MouseButtonEventArgs e){  RaiseClick(e);}public delegate void ClickEventHandler(object sender,  RoutedEventArgs e);public event ClickEventHandler Click;protected void RaiseClick(RoutedEventArgs e){  if (null != Click)    Click(this, e);}Since a Border control does not have a Click event you will create one by using the MouseLeftButtonDown on the border to fire an event you create called “Click”.SummaryCreating your own image button control can be done in a variety of ways. In this blog post I showed you how to create a custom user control and simulate a button using a Border and Image control. With just a little bit of code to respond to the MouseLeftButtonDown event on the border you can raise your own Click event. Dependency properties, such as ImageUri, allow you to set attributes on your custom user control. Feel free to expand on this button by adding additional dependency properties, change the resource dictionary, and even the animation to make this button look and act like you want.NOTE: You can download the sample code for this article by visiting my website at http://www.pdsa.com/downloads. Select “Tips & Tricks”, then select “A WPF Image  Button” from the drop down list.
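The post mentions that ImageUri is a dependency property on the PDSAucButtonImage user control but doesn't show its declaration. As a rough sketch (not taken from the downloadable sample — the string property type and the null default value are assumptions on my part), a typical WPF dependency-property registration for it might look like this:

using System.Windows;
using System.Windows.Controls;

public partial class PDSAucButtonImage : UserControl
{
    // Registering ImageUri as a dependency property lets it be set in XAML
    // and participate in data binding (the inner Image control's Source
    // binding picks this value up).
    public static readonly DependencyProperty ImageUriProperty =
        DependencyProperty.Register(
            "ImageUri",
            typeof(string),
            typeof(PDSAucButtonImage),
            new PropertyMetadata(null));

    // CLR wrapper so the property can also be read and set from code.
    public string ImageUri
    {
        get { return (string)GetValue(ImageUriProperty); }
        set { SetValue(ImageUriProperty, value); }
    }
}

Registering the property this way is what allows the XAML shown earlier to set ImageUri directly on the control declaration and have the image rendered from that URI.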

    Read the article

  • CodePlex Daily Summary for Wednesday, July 24, 2013

    CodePlex Daily Summary for Wednesday, July 24, 2013Popular ReleasesGeoTransformer: GeoTransformer 4.5: Extensions can now be installed and uninstalled from the application. The extensions update the same way as the application - silently and automatically. Added ability to search for caches by pressing CTRL+F in the table views. (Thanks to JanisU for implementing this request) Added ability to remove edited customizations for multiple caches at once (use SHIFT or CTRL to select multiple lines in the table). A new experimental version for Windows 8 RT (on ARM processor) is also made availa...Kartris E-commerce: Kartris v2.5003: This fixes an issue where search engines appear to identify as IE and so trigger the noIE page if there is not a non-responsive skin specified.VG-Ripper & PG-Ripper: VG-Ripper 2.9.45: changes NEW: Added Support for "ImgBabes.com" links NEW: Added Support for "ImagesIon.com" linksFamily Tree Analyzer: Version 1.5.3.0: Added a new Lost Cousins Report to the Lost Cousins tab. This report displays colour coded boxes for each census showing whether census data has been found and entered into Lost Cousins website for all the UK census years.Magelia WebStore Open-source Ecommerce software: Magelia WebStore 2.4: Magelia WebStore version 2.4 introduces new and improved features: Basket and order calculation have been redesigned with a more modular approach geographic zone algorithms for tax and shipping calculations have been re-developed. The Store service has been split in three services (store, basket, order). Product start and end dates have been added. For variant products a unique code has been introduced for the top (variable) product, product attributes can now be defined at the top ...LINQ to Twitter: LINQ to Twitter v2.1.08: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.AndroidCopyAndPaste: Capood: Erstes Release für Evaluation --> Changeset 26675AcDown?????: AcDown????? v4.4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.4.3 ?? ??Bilibili????????????? ???????????? ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2.10?...Magick.NET: Magick.NET 6.8.6.601: Magick.NET linked with ImageMagick 6.8.6.6. These zip files are also available as a NuGet package: https://nuget.org/profiles/dlemstra/C# Intellisense for Notepad++: Initial release: Members auto-complete Integration with native Notepad++ Auto-Completion Auto "open bracket" for methods Right-arrow to accept suggestions51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet This release introduces the 51Degrees.mobi IIS Vary Header Fix. When Compression and Caching is used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to Version 2.1.19.4Handlers now have a ‘Count’ property. This is an integer value that shows how many devices in the dataset that use that handler. 
Provider.cs -> GetDeviceInfoByID to address a problem w...SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719 2013/7/19 Pattern Changes: * DapperContext pattern is added. * All patterns are updated to work with one-to-one relations. Changes: * One-to-one relation is supported. * Minor bug fixes.PantheR's GraphX for .NET: GraphX for .NET v0.9.5 BETA: BETA 0.9.5 + Added GraphArea.SaveAsImage() method that supports different image formats + Added GraphArea.UseNativeObjectArrange property. True by default. If set to False it will use different coordinates handling that helps to soften vertex drag issues to the top and left area sides. + Added GraphArea.Translation property. It is needed to get correct translation coordinates when determining object position from the mouse coordinates. + Added new VertexControl.PositionChanged event along wit....NET Code Migrator for Dynamics CRM: v1.0.12: Combined the main macros, generated macros from a sample organization, and the CreateVisualStudioMacros utility into a single package.DARF: Dynamic Application Runtime Framework: DARF.Demo: This demo is DARF.IDE application plus a sample file (calculator.xda) which describes a very simple calculator. All you need to do is to unzip the package, run DARF.IDE.exe and load calculator.xda then press F5 or select 'Execute' from 'Code' menu to see the execution of the application. The calculator.xda file is fully commented so you can inspect the file and get a feeling of the way DARF works. This sample application makes use of some pre-written blocks (namely button, textbox and ...) ...Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1 Additional support for Windows 8.1 Preview New API (JS): addTextTrack New API (JS): msKeys New API (JS): msPlayToPreferredSourceUri New API (JS): msSetMediaKeys New API (JS): onmsneedkey New API (Xaml): SetMediaStreamSource method New API (Xaml): Stretch property New API (Xaml): StretchChanged event New API (Xaml): AreTransportControlsEnabled property New API (Xaml): IsFullWindow property New API (Xaml): PlayToPreferredSourceUri proper...CodeGen Code Generator: CodeGen 4.2.11: Changes in this release include: Added several new alternate forms of the <FIELD_SELWND> token to provide template developers better control over the case of field selection window names. Also added a new token <FIELD_SELWND_ORIGINAL> to preserve the case of selection window names in the same way that <FIELD_SELWND> used to. Enhanced UI Toolkit window script selection window processing (-ws) so that selection window names are no longer case sensitive (they aren't in UI Toolkit). Also the -w...Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes: - Support for multiple calendars. This can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", so not shared ones yet. Also they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. 
- Added fri...Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: Show grid in editor Cut/copy/paste support Bug fixesCommunity TFS Build Extensions: July 2013: The July 2013 release contains VS2010 Activities(target .NET 4.0) VS2012 Activities (target .NET 4.5) VS2013 Activities (target .NET 4.5.1) Community TFS Build Manager VS2012 The Community TFS Build Manager can also be found in the Visual Studio Gallery here where updates will first become available. A version supporting VS2010 is also available in the Gallery here.New Projects.NET Weaver: This project is a base project to weave code in existing assemblies.AAP WB: aam aadmi party west bengal facebook appajax call wcf: it's about how to call wcf use ajax in asp.netBMI: Test ProjectCarWebOOB: Website for learning subject about object oriented databaseCloud Ninja Polyglot Persistence: The Cloud Ninja Polyglot Persistence project is a sample that demonstrates the use of multiple and different types of repositories to persist application data.ConsoleGamePacMan: C# console Implementation of popular game Pac-MancrawlerTeam: crawlerDB-Team-Project-Mimosa: Telerik Database Project Team MimosaDemoJS: Nonedeployspsolution: powershell snapin that mimics publishing behaviour of wsp sharepoint solution in visual studio.Du Lich Thanh Nien: Du L?ch Thanh NiênElectronic Commerce Resource Planning: Electronic Commerce Resource PlanningGnuPGNotepad: A notepad based GUI for GnuPG (PGP) which allows you to Encrypt/Decrypt on the fly, or save & load encrypted to disk without using temporary clear text files.modbusclasslabrarycsharp: Summary MvcAppTestSolution: MVCNext Inventory: NextInventory is an open source application. Native in the ASP.Net MVC4.Org Chart in SharePoint 2010 using Google API.: Organization Chart webpart for SharePoint 2010 using Google API and SharePoint list as the data source. Good for small organizations with no AD hierarchy.Parsec SWTOR Parser: .NET Combat Log Parser Windows Application for SWTOR (Star Wars the Old Republic)Portable Imaging Library for .NET: Portable Imaging Library for asynchronous loading, modifying and saving images from any thread (out of UI thread for WPF/WP8).Powershell MetadataExplorer: Windows Powershell SnapIn that includes the Get-ItemMetadata cmdlet. The Get-ItemMetadata cmdlet retrieves extended metadata associated with file system items.Pseudogenerator: Pseudogenerator es un generador de numeros pseudoaleatorios compatible con una variedad de metodos de generacion. Muy util en el area de simulaciones digitales.PushIt!: Very interesting arcade game.S3Console: This project is an initiative to create shell like interactive console application to administer amazon s3 using c#.net.Sample VariableSizedWrapGrid Windows Phone: This example gives an idea of ​​how to make the size of a list item dynamically. In this case a user control that has been created by VariableSizedWrapGrid kinnSchool CMS: School CMS is a school content management system, written in PHP and run from a single MySQL Database. Anyone can use school cms.Sys: This is library of collection of most used .NET components implemented in C#VariousPublicExperiments: This project contains various experiments, publish for educational purposes. These projects may later on be branched to their own repositories.VBDownloader: *VBDownloader* _The open source solution for downloading_wechat demo: wechat demoZing State Explorer: Zing is a state explorer for models of concurrent and asynchronous programs.

    Read the article

  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
What is ADDS?  Every Microsoft-oriented infrastructure in today's enterprises depends largely on Active Directory. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well-designed and well-implemented Active Directory makes life for IT personnel and users alike a lot easier. Centralised management and the abilities opened up by having it in place are ample.  But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all objects your infrastructure runs on in one way or another. Since it is a Microsoft product you'll obviously not be seeing Linux or Mac clients listed in here (exceptions exist) but in general we can say it contains everything your company has in place in one form or another.  The Domain Name Service. The Domain Name Service (or DNS for short) is a service which translates readable, easy-to-understand names into IP addresses (the identifiers for each computer in your domain) and back again. This service is a prerequisite for ADDS to work, and having wrong records in a DNS server will make any ADDS service fail. Generally speaking a DNS service will be run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box… Where to start? If the aim is to put in place a first-time implementation of ADDS in your enterprise there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as having it set up wrong will cause a headache down the line. It is for that reason that I like to start building from the bottom up and start with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services which can hook into the existing environment. Adapting existing environments is out of scope for this document (and series) although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things as one small slip of the hand can give you a forest-wide failure… Whenever starting with an ADDS deployment I ask the client the following questions:  What are your long-term plans and goals?  How flexible do you want it? Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design? 
Those three questions should give some sort of indication of what direction can be taken and whether the client has thought about some things themselves :).  The technical side of things  What is next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services.  Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows:  Network (WAN/LAN links and physical sites) DNS Namespacing All in one domain or split up in different domains/forests? Security (both for ADDS and physical sites) The network side of things  Looking at how the network is currently set up can potentially teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc… Depending on this information we will design our site links (which control replication) in future stages. DNS Namespacing Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single name (aka contoso instead of contoso.com) but it is highly recommended to never do this. If you do end up with a domain like that for some reason there will be a lot of services that are going to give you good grief in the future (Exchange being one of them). So one of the best practises would be always to use a double name (contoso.com or contoso.lan for example). Internal namespace An internal namespace is just what it sounds like. You have a DNS domain which is different internally from what the client has as an external namespace, e.g. contoso.com as an external name (out on the internet) and contoso.lan on the internal network. This setup has its advantages in that you have more obscurity from the internet on the DNS side, but it will require additional work to publish services to the web. External namespace Quite like the internal namespace, only here you do not differentiate the internal namespace of the company from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network. Or in other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking this setup is a bad idea from the security side of things. Split DNS Whilst using an external namespace design is fairly easy, it involves a lot of security risks. Opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. And that is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would be using different DNS servers for lookups on the external network which have no records of your internal resources unless you explicitly publish them. All in one or not? In determining your Active Directory design you can look at the following possibilities:  Single forest, single domain Single forest, multiple domains Multiple forests, multiple domains I've listed the possibilities for design in increasing order of administrative magnitude. Microsoft recommends trying to use a single forest, single domain in as many situations as possible. 
It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single forest design to avoid complexity, unless there are strict requirements to have multiple forests. Security What kind of security is required on the domain, and does this reflect the physical security on the sites? Not every client can afford to have a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality: in essence it will only cache the data you explicitly tell it to cache, and in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected. Th- Th- Th- That's all folks! Well at least for now! In future editions of this series we'll be walking through the different tasks that need to be done and the thought which needs to be put into them. But for all editions we'll be working from the concept of running a single forest, single domain with a split DNS setup… See you next time!

    Read the article

  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year.  Microsoft literally buys computers by the truckload.  From what I understand, it’s a typical practice amongst large software vendors.  You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number).  Microsoft has been trying to plug away at their cloud services (named Azure).  Which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative.  I find it to be a great marketing idea with actual substance and real work behind it.  From an academic programmer’s perspective, in college we dreamed about this type of processing power.  This has decades of computer science theory behind it. A copy of the email received.  (note that I almost deleted this email, thinking it was spam due to its length) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. 
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed-reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • C#/.NET Little Wonders: Tuples and Tuple Factory Methods

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can really help improve your code by making it easier to write and maintain.  This week, we look at the System.Tuple class and the handy factory methods for creating a Tuple by inferring the types. What is a Tuple? The System.Tuple is a class that tends to inspire a reaction in one of two ways: love or hate.  Simply put, a Tuple is a data structure that holds a specific number of items of a specific type in a specific order.  That is, a Tuple<int, string, int> is a tuple that contains exactly three items: an int, followed by a string, followed by an int.  The sequence is important not only to distinguish between two members of the tuple with the same type, but also for comparisons between tuples.  Some people tend to love tuples because they give you a quick way to combine multiple values into one result.  This can be handy for returning more than one value from a method (without using out or ref parameters), or for creating a compound key to a Dictionary, or any other purpose you can think of.  They can be especially handy when passing a series of items into a call that only takes one object parameter, such as passing an argument to a thread's startup routine.  In these cases, you do not need to define a class; simply create a tuple containing the types you wish to return, and you are ready to go. On the other hand, there are some people who see tuples as a crutch in object-oriented design.  They may view the tuple as a very watered-down class with very little inherent semantic meaning.  As an example, what if you saw this in a piece of code: 1: var x = new Tuple<int, int>(2, 5); What are the contents of this tuple?  If the tuple isn't named appropriately, and if the contents of each member are not self-evident from the type, this can be a confusing question.  The people who tend to be against tuples would rather you explicitly code a class to contain the values, such as: 1: public sealed class RetrySettings 2: { 3: public int TimeoutSeconds { get; set; } 4: public int MaxRetries { get; set; } 5: } Here, the meaning of each int in the class is much clearer, but it's a bit more work to create the class and can clutter a solution with extra classes. So, what's the correct way to go?  That's a tough call.  You will have people who will argue quite well for one or the other.  For me, I consider the Tuple to be a tool that makes it easy to collect values together.  There are times when I just need to combine items for a key or a result, in which case the tuple is short-lived, the meaning isn't easily lost, and I feel this is a good compromise.  If the scope of the collection of items, though, is more application-wide, I tend to favor creating a full class. Finally, it should be noted that tuples are immutable.  That means they are assigned a value at construction, and that value cannot be changed.  Now, of course, if the tuple contains an item of a reference type, this means that the reference is immutable and not the item referred to. Tuples from 1 to N Tuples come in all sizes: you can have as few as one element in your tuple, or as many as you like.  However, since C# generics can't have an infinite generic type parameter list, any items after 7 have to be collapsed into another tuple, as we'll show shortly. 
So when you declare your tuple from sizes 1 (a 1-tuple or singleton) to 7 (a 7-tuple or septuple), simply include the appropriate number of type arguments: 1: // a singleton tuple of integer 2: Tuple<int> x; 3:  4: // or more 5: Tuple<int, double> y; 6:  7: // up to seven 8: Tuple<int, double, char, double, int, string, uint> z; Anything eight and above, and we have to nest tuples inside of tuples.  The last element of the 8-tuple is the generic type parameter Rest; this is special in that the Tuple checks at runtime to make sure the type is a Tuple.  This means that a simple 8-tuple must nest a singleton tuple (one of the good uses for a singleton tuple, by the way) for the Rest property. 1: // an 8-tuple 2: Tuple<int, int, int, int, int, double, char, Tuple<string>> t8; 3:  4: // a 9-tuple 5: Tuple<int, int, int, int, double, int, char, Tuple<string, DateTime>> t9; 6:  7: // a 16-tuple 8: Tuple<int, int, int, int, int, int, int, Tuple<int, int, int, int, int, int, int, Tuple<int,int>>> t16; Notice that on the 16-tuple we had to have a nested tuple in the nested tuple.  Since the tuple can only support up to seven items and then a rest element, that means that if the nested tuple needs more than seven items you must nest in it as well.  Constructing tuples Constructing tuples is just as straightforward as declaring them.  That said, you have two distinct ways to do it.  The first is to construct the tuple explicitly yourself: 1: var t3 = new Tuple<int, string, double>(1, "Hello", 3.1415927); This creates a triple that has an int, string, and double and assigns the values 1, "Hello", and 3.1415927 respectively.  Make sure the order of the arguments supplied matches the order of the types!  Also notice that we can't half-assign a tuple or create a default tuple.  Tuples are immutable (you can't change the values once constructed), so you must provide all values at construction time. Another way to easily create tuples is to do it implicitly using the System.Tuple static class's Create() factory methods.  These methods (much like C++'s std::make_pair method) will infer the types from the method call so you don't have to type them in.  This can dramatically reduce the amount of typing required, especially for complex tuples! 1: // this 4-tuple is typed Tuple<int, double, string, char> 2: var t4 = Tuple.Create(42, 3.1415927, "Love", 'X'); Notice how much easier it is to use the factory methods and infer the types?  This can cut down on typing quite a bit when constructing tuples.  The Create() factory method can construct from a 1-tuple (singleton) to an 8-tuple (octuple), which of course will be an octuple where the last item is a singleton as we described before in nested tuples. Accessing tuple members Accessing a tuple's members is simplicity itself… mostly.  The properties for accessing up to the first seven items are Item1, Item2, …, Item7.  If you have an octuple or beyond, the final property is Rest, which will give you the nested tuple, which you can then access in a similar manner.  Once again, keep in mind that these are read-only properties and cannot be changed. 
1: // for septuples and below, use the Item properties 2: var t1 = Tuple.Create(42, 3.14); 3:  4: Console.WriteLine("First item is {0} and second is {1}", 5: t1.Item1, t1.Item2); 6:  7: // for octuples and above, use Rest to retrieve nested tuple 8: var t9 = new Tuple<int, int, int, int, int, int, int, 9: Tuple<int, int>>(1,2,3,4,5,6,7,Tuple.Create(8,9)); 10:  11: Console.WriteLine("The 8th item is {0}", t9.Rest.Item1); Tuples are IStructuralComparable and IStructuralEquatable Most of you know about IComparable and IEquatable; what you may not know is that there are two sister interfaces to these that were added in .NET 4.0 to help support tuples.  These, IStructuralComparable and IStructuralEquatable, make it easy to compare two tuples for equality and ordering.  This is invaluable for sorting, and makes it easy to use tuples as a compound key to a dictionary (one of my favorite uses)! Why is this so important?  Remember when we said that some folks think tuples are too generic and you should define a custom class?  This is all well and good, but if you want to design a custom class that can automatically order itself based on its members and build a hash code for itself based on its members, it is no longer a trivial task!  Thankfully the tuple does this all for you through the explicit implementations of these interfaces. For equality, two tuples are equal if all elements are equal between the two tuples, that is if t1.Item1 == t2.Item1 and t1.Item2 == t2.Item2, and so on.  For ordering, it's a little more complex in that it compares the two tuples one item at a time starting at Item1, and sees which one has a smaller Item1.  If one has a smaller Item1, it is the smaller tuple.  However if both Item1 are the same, it compares Item2 and so on. For example: 1: var t1 = Tuple.Create(1, 3.14, "Hi"); 2: var t2 = Tuple.Create(1, 3.14, "Hi"); 3: var t3 = Tuple.Create(2, 2.72, "Bye"); 4:  5: // true, t1 == t2 because all items are == 6: Console.WriteLine("t1 == t2 : " + t1.Equals(t2)); 7:  8: // false, t2 != t3 because at least one item is different 9: Console.WriteLine("t2 == t3 : " + t2.Equals(t3)); The actual implementation of IComparable, IEquatable, IStructuralComparable, and IStructuralEquatable is explicit, so if you want to invoke the methods defined there you'll have to manually cast to the appropriate interface: 1: // true because t1.Item1 < t3.Item1; if they had been the same it would check Item2 and so on 2: Console.WriteLine("t1 < t3 : " + (((IComparable)t1).CompareTo(t3) < 0)); So, as I mentioned, the fact that tuples are automatically equatable and comparable (provided the types you use define equality and comparability as needed) means that we can use tuples for compound keys in hashing and ordering containers like Dictionary and SortedList: 1: var tupleDict = new Dictionary<Tuple<int, double, string>, string>(); 2:  3: tupleDict.Add(t1, "First tuple"); 4: tupleDict.Add(t3, "Second tuple"); 5:  6: // note: adding t2 here would throw, since t2 is structurally equal to t1 and dictionary keys must be unique Because Tuple's IStructuralEquatable implementation builds the hash code by combining the hash codes of the members, this makes using the tuple as a complex key quite easy!  
For example, let's say you are creating account charts for a financial application, and you want to cache those charts in a Dictionary based on the account number and the number of days of chart data (for example, a 1 day chart, 1 week chart, etc): 1: // the account number (string) and number of days (int) are key to get cached chart 2: var chartCache = new Dictionary<Tuple<string, int>, IChart>(); Summary The System.Tuple, like any tool, is best used where it will achieve a greater benefit.  I wouldn't advise overusing them on objects with a large scope, or it can become difficult to maintain.  However, when used properly in a well-defined scope they can make your code cleaner and easier to maintain by removing the need for extraneous POCOs and custom property hashing and ordering. They are especially useful in defining compound keys to IDictionary implementations and for returning multiple values from methods, or passing multiple values to a single object parameter. Technorati Tags: C#,.NET,Tuple,Little Wonders
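As a quick, hedged illustration of the "returning multiple values from methods" use mentioned in the summary, here is a minimal C# sketch. The ParseSetting helper and its "name=value" input format are invented purely for this example; the point is simply that Tuple.Create infers a Tuple<string, string> and the caller reads both results through Item1 and Item2 without out or ref parameters.

using System;

class TupleReturnExample
{
    // Hypothetical helper: splits a "name=value" string and hands back both pieces at once.
    static Tuple<string, string> ParseSetting(string setting)
    {
        var parts = setting.Split('=');
        return Tuple.Create(parts[0], parts[1]);
    }

    static void Main()
    {
        var result = ParseSetting("timeout=30");

        // Item1 and Item2 are read-only, reflecting the tuple's immutability.
        Console.WriteLine("Key: {0}, Value: {1}", result.Item1, result.Item2);
    }
}

As with the chart cache above, if a pair like this started travelling across the whole application it would probably deserve its own small class instead.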

    Read the article

  • SATA errors reported during boot: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0

    - by digby280
    I have noticed some error during the Linux boot. They seem to continue to occur after the boot adding lines to the log every few seconds. Once booted this normally does not appear to be causing any problems. However, around 1 in 10 boots results in a kernel panic and the computer has on two or three occasions suddenly rebooted after being powered on for a number of hours. I presume the cause of the reboot is a kernel panic as well. I am running Ubuntu 11.10 and I have had Ubuntu installed on the computer for around a year. I have googled around and not found anything useful. I have provided the kernel log lines and the output of smartctl. Can anyone explain exactly what these errors mean, or better still how to resolve them? Apr 2 16:51:27 dell580 kernel: [ 19.831140] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:27 dell580 kernel: [ 19.934194] tg3 0000:03:00.0: eth0: Link is down Apr 2 16:51:28 dell580 kernel: [ 20.929468] tg3 0000:03:00.0: eth0: Link is up at 100 Mbps, full duplex Apr 2 16:51:28 dell580 kernel: [ 20.929471] tg3 0000:03:00.0: eth0: Flow control is on for TX and on for RX Apr 2 16:51:28 dell580 kernel: [ 20.929727] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 2 16:51:29 dell580 kernel: [ 21.609381] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:29 dell580 kernel: [ 21.616515] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:29 dell580 kernel: [ 21.616519] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:29 dell580 kernel: [ 21.616525] ata2.00: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 21.934036] ata2.01: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 22.408890] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.408907] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.440934] ata2.00: configured for UDMA/100 Apr 2 16:51:29 dell580 kernel: [ 22.449040] ata2.01: configured for UDMA/133 Apr 2 16:51:29 dell580 kernel: [ 22.449818] ata2: EH complete Apr 2 16:51:33 dell580 kernel: [ 26.122664] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:33 dell580 kernel: [ 26.122670] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:33 dell580 kernel: [ 26.122677] ata2.00: hard resetting link Apr 2 16:51:33 dell580 kernel: [ 26.442684] ata2.01: hard resetting link Apr 2 16:51:34 dell580 kernel: [ 26.925545] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.925561] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.961542] ata2.00: configured for UDMA/100 Apr 2 16:51:34 dell580 kernel: [ 26.969616] ata2.01: configured for UDMA/133 Apr 2 16:51:34 dell580 kernel: [ 26.970400] ata2: EH complete Apr 2 16:51:35 dell580 kernel: [ 28.111180] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:35 dell580 kernel: [ 28.111184] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:35 dell580 kernel: [ 28.111191] ata2.00: hard resetting link Apr 2 16:51:35 dell580 kernel: [ 28.429674] ata2.01: hard resetting link Apr 2 16:51:36 dell580 kernel: [ 28.904557] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.904572] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.936609] ata2.00: configured for UDMA/100 Apr 2 16:51:36 dell580 kernel: [ 28.944692] ata2.01: configured for UDMA/133 
Apr 2 16:51:36 dell580 kernel: [ 28.945464] ata2: EH complete Apr 2 16:51:38 dell580 kernel: [ 31.581756] eth0: no IPv6 routers present Apr 2 16:51:38 dell580 kernel: [ 32.103066] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:38 dell580 kernel: [ 32.103074] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:38 dell580 kernel: [ 32.103085] ata2.00: hard resetting link Apr 2 16:51:38 dell580 kernel: [ 32.419669] ata2.01: hard resetting link Apr 2 16:51:39 dell580 kernel: [ 32.894518] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.894533] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.926536] ata2.00: configured for UDMA/100 Apr 2 16:51:39 dell580 kernel: [ 32.934715] ata2.01: configured for UDMA/133 Apr 2 16:51:39 dell580 kernel: [ 32.935578] ata2: EH complete Here's the output of smartctl for the drive. smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-17-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: SAMSUNG SpinPoint F1 DT Device Model: SAMSUNG HD103UJ Serial Number: S13PJ90QC19706 LU WWN Device Id: 5 0000f0 00b1c7960 Firmware Version: 1AA01113 User Capacity: 1,000,204,886,016 bytes [1.00 TB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 3b Local Time is: Mon Apr 2 17:13:48 2012 BST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled. Self-test execution status: ( 41) The self-test routine was interrupted by the host with a hard or soft reset. Total time to complete Offline data collection: (11772) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 197) minutes. Conveyance self-test routine recommended polling time: ( 21) minutes. SCT capabilities: (0x003f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 100 100 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0007 076 076 011 Pre-fail Always - 7940 4 Start_Stop_Count 0x0032 099 099 000 Old_age Always - 521 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 253 253 051 Pre-fail Always - 0 8 Seek_Time_Performance 0x0025 100 100 015 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 642 10 Spin_Retry_Count 0x0033 100 100 051 Pre-fail Always - 0 11 Calibration_Retry_Count 0x0012 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 482 13 Read_Soft_Error_Rate 0x000e 100 100 000 Old_age Always - 0 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 759 184 End-to-End_Error 0x0033 100 100 000 Pre-fail Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 073 069 000 Old_age Always - 27 (Min/Max 16/27) 194 Temperature_Celsius 0x0022 073 067 000 Old_age Always - 27 (Min/Max 16/28) 195 Hardware_ECC_Recovered 0x001a 100 100 000 Old_age Always - 320028 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 099 099 000 Old_age Always - 1494 200 Multi_Zone_Error_Rate 0x000a 100 100 000 Old_age Always - 0 201 Soft_Read_Error_Rate 0x000a 253 253 000 Old_age Always - 0 SMART Error Log Version: 1 ATA Error Count: 211 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 211 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 0f 31 63 8f e1 Error: ICRC, ABRT 15 sectors at LBA = 0x018f6331 = 26174257 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 40 62 8f e1 08 00:01:00.460 READ DMA c8 00 20 00 7c 30 e0 08 00:01:00.450 READ DMA c8 00 00 10 49 8f e1 08 00:01:00.440 READ DMA c8 00 e0 20 d0 30 e0 08 00:01:00.420 READ DMA c8 00 00 c0 59 90 e1 08 00:01:00.400 READ DMA Error 210 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 cf e9 cf 66 e0 Error: ICRC, ABRT 207 sectors at LBA = 0x0066cfe9 = 6737897 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 b8 cf 66 e0 08 00:08:29.780 READ DMA c8 00 60 60 c9 18 e0 08 00:08:29.770 READ DMA c8 00 40 20 c9 18 e0 08 00:08:29.770 READ DMA c8 00 20 00 c9 18 e0 08 00:08:29.760 READ DMA c8 00 20 98 cf 66 e0 08 00:08:29.750 READ DMA Error 209 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 2f d1 74 e0 e0 Error: ICRC, ABRT 47 sectors at LBA = 0x00e074d1 = 14709969 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 00 74 e0 e0 08 00:00:30.940 READ DMA c8 00 20 18 36 de e0 08 00:00:30.930 READ DMA c8 00 08 48 f1 dd e0 08 00:00:30.930 READ DMA c8 00 08 a8 f0 dd e0 08 00:00:30.930 READ DMA c8 00 08 90 f0 dd e0 08 00:00:30.930 READ DMA Error 208 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 7f 21 88 9d e0 Error: ICRC, ABRT 127 sectors at LBA = 0x009d8821 = 10324001 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 a0 00 88 9d e0 08 00:00:27.610 READ DMA c8 00 58 a8 e7 9c e0 08 00:00:27.610 READ DMA c8 00 00 28 e6 9c e0 08 00:00:27.610 READ DMA c8 00 00 e0 e4 9c e0 08 00:00:27.610 READ DMA c8 00 00 90 e0 9c e0 08 00:00:27.600 READ DMA Error 207 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 04 51 26 6a 6a c3 e0 Error: ABRT at LBA = 0x00c36a6a = 12806762 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- ca 00 00 90 69 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 68 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 50 65 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 d0 64 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 63 c3 e0 08 00:29:39.350 WRITE DMA SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Interrupted (host reset) 90% 638 - # 2 Short offline Interrupted (host reset) 90% 638 - # 3 Extended offline Interrupted (host reset) 90% 638 - # 4 Short offline Interrupted (host reset) 90% 638 - # 5 Extended offline Interrupted (host reset) 90% 638 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. 
If Selective self-test is pending on power-up, resume after 0 minute delay.

    Read the article

  • CodePlex Daily Summary for Monday, August 27, 2012

    CodePlex Daily Summary for Monday, August 27, 2012Popular ReleasesHome Access Plus+: v8.0: v8.0827.1800 RELEASE CHANGED TO BETA Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is full release, NO upgrade ZIP will be provided as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install Documentation is supplied in the Web Zip The Quota Services require executing a script to register the service, this can be found in there install di...Math.NET Numerics: Math.NET Numerics v2.2.0: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Also available as NuGet packages: PM> Install-Package MathNet.Numerics PM> Install-Package MathNet.Numerics.FSharp New: instead of the special Silverlight build we now provide a portable version supporting .Net 4, Silverlight 5 and .Net Core (WinRT) 4.5: PM> Install-Package MathNet.Numerics.PortableMetodología General Ajustada - MGA: 03.00.08: Cambios Aury: Cambios realizados en el reporte de la MGA: Errores en la generación cuando se seleccionan todas las opciones y Que sólo se imprima la información de la alternativa seleccionada para el proyecto. Cambios John: Integración de código con cambios enviados por Aury Niño. Generación de instaladores. Soporte técnico por correo electrónico y telefónico.Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3391 (September 2012): New features: Extended ReflectionClass libxml error handling, constants TreatWarningsAsErrors MSBuild option OnlyPrecompiledCode configuration option; allows to use only compiled code Fixes: ArgsAware exception fix accessing .NET properties bug fix ASP.NET session handler fix for OutOfProc mode Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012 New debugger engine, PHP-like debugging Lot of fixes of project files, formatting, smart indent, colorization etc. Improved ...WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.06: Whats New Added CKEditor 3.6.4 oEmbed Plugin can now handle short urls changes The Template File can now parsed from an xml file instead of js (More Info...) Style Sets can now parsed from an xml file instead of js (More Info...) Fixed Showing wrong Pages in Child Portal in the Link Dialog Fixed Urls in dnnpages Plugin Fixed Issue #6969 WordCount Plugin Fixed Issue #6973 File-Browser: Fixed Deleting of Files File-Browser: Improved loading time File-Browser: Improved the loa...MabiCommerce: MabiCommerce 1.0.1: What's NewSetup now creates shortcuts Fix spelling errors Minor enhancement to the Map window.ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....TouchInjector: TouchInjector 1.1: Version 1.1: fixed a bug with the autorun optionWinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. 
You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 (1)?????·?????LING?????????????? (2)SMTP???(????)????、?????\?????????????????????Christoc's DotNetNuke Module Development Template: 00.00.09 for DNN6: Probably the final VS2010 Release as I have some new VS2012 templates coming. BEFORE USE YOU need to install the MSBuild Community Tasks available from https://github.com/loresoft/msbuildtasks/downloads For best results you should configure your development environment as described in this blog post Then read this latest blog post about customizing and using these custom templates. Installation is simple To use this template place the ZIP (not extracted) file in your My Documents\Visual ...Visual Studio Team Foundation Server Branching and Merging Guide: v2 - Visual Studio 2012: Welcome to the Branching and Merging Guide Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has been through an independent technical review Documentation has been reviewed by the quality and recording team All critical bugs have been resolved Known Issues / Bugs Spelling, grammar and content revisions are in progress. Hotfix will be published.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.62: Fix for issue #18525 - escaped characters in CSS identifiers get double-escaped if the character immediately after the backslash is not normally allowed in an identifier. fixed symbol problem with nuget package. 4.62 should have nuget symbols available again. Also want to highlight again the breaking change introduced in 4.61 regarding the renaming of the DLL from AjaxMin.dll to AjaxMinLibrary.dll to fix strong-name collisions between the DLL and the EXE. Please be aware of this change and...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.65: As some of you may know we were planning to release version 2.70 much later (the end of September). But today we have to release this intermediate version (2.65). It fixes a critical issue caused by a third-party assembly when running nopCommerce on a server with .NET 4.5 installed. No major features have been introduced with this release as our development efforts were focused on further enhancements and fixing bugs. To see the full list of fixes and changes please visit the release notes p...MyRouter (Virtual WiFi Router): MyRouter 1.2.9: . Fix: Some missing changes for fixing the window subclassing crash. · Fix: fixed bug when Run MyRouter at the first Time. · Fix: Log File · Fix: improve performance speed application · fix: solve some Exception.TFS Project Test Migrator: TestPlanMigration v1.0.0: Release 1.0.0 This first version do not create the test cases in the target project because the goal was to restore a Test Plan + Test Suite hierarchy after a manual user deletion without restoring all the Project Collection Database. 
As I discovered, deleting a Test Plan will do the following : - Delete all TestSuiteEntry (the link between a Test Suite node and a Test Case) - Delete all TestSuite (the nodes in the test hierarchy), including root TestSuite - Delete the TestPlan Test c...ERPStore eCommerce FrontOffice: ERPStore.Core V4.0.0.2 MVC4 RTM: ERPStore.Core V4.0.0.2 MVC4 RTM (Code Source)ZXing.Net: ZXing.Net 0.8.0.0: sync with rev. 2393 of the java version improved API, direct support for multiple barcode decoding, wrapper for barcode generating many other improvements and fixes encoder and decoder command line clients demo client for emguCV dev documentation startedMFCMAPI: August 2012 Release: Build: 15.0.0.1035 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgePulse: Pulse Beta 5: Whats new in this release? Well to start with we now have Wallbase.cc Authentication! so you can access favorites or NSFW. This version requires .NET 4.0, you probably already have it, but if you don't it's a free and easy download from Microsoft. Pulse can bet set to start on Windows startup now too. The Wallpaper setter has settings now, so you can change the background color of the desktop and the Picture Position (Tile/Center/Fill/etc...) I've switched to Windows Forms instead of WPF...New ProjectsAStar Sample WPF Application by Ben Scharbach: The A* Pathfinding component contains an A* Manager, with three A* path finding engines, allowing for 3 simultaneous path searches on PC and XBOX. - By BenContactor: Programs and Schematics for sending a Short Message Service from a computer.JCI Page: gfdsgfdsgfdsL-Calc: Przykladowy kalkulator liczacy indukcyjnosc poprzez odczyt czestotliwosci obwodu rezonansowego.LibXmlSocket: XmlSocket LibraryNavegar: WPF Navigation pour MVVM Light et SimpleIocNUnit Comparisons: A set of libraries which extend the NUnit constraint framework to support deeper comparisons and report detailed differences useful to test debugging. ORMAC: ORMAC is a micro .NET ORM PersonalDataCenter: PDCNet1Project WarRoom: An experimental chatroom jQuery widget for instant messaging functionality on web applications.Quantum.Net: Quantum.Net is a free computation library developp in C#. Contains some mathematics functions and physical element.Specification Framework: A small framework to get started with a type safe variation of the specification pattern.Sphere Community: Sphere Community Pack 2.0TouchInjector: Generate Windows 8 Touch from TUIO messages.

    Read the article

  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API.  We thought it was resolved on day 2 but it resurfaced the next day.  We definitely resolved it today though.  I believe I do not fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue. References We referenced many sources during our trial-and-error troubleshooting.  These are the links we referenced, in order of applicability to the solution: Zoiner Tejada JavaScript and other material from -> http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3 WebDAV Where I learned about “Accept” –>  http://www-jo.se/f.pfleger/cors-and-iis? IT Hit Tells about NOT using ‘*’ –> http://www.webdavsystem.com/ajax/programming/cross_origin_requests Carlos Figueira Sample back-end code (newer) –> http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version) –> http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a   Background As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee “knows” about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed by showing an “Access-Control-Allow-Origin” error indicating the requester is not allowed to make requests. Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes.  Chrome and Firefox do not.  In fact, Chrome is quite restrictive.  Notice the images below. IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox).  Further, the Chrome developer console shows an XmlHttpRequest (XHR) error. Screen captures from IE (left) and Chrome (right). Note that Chrome does not display data and the console shows an XHR error. Why does this happen? The Web browser submits these requests and processes the responses, and each browser is different. Okay, so, IE is probably the only one that’s truly different.  However, Chrome has a specific process of performing a “pre-flight” check to make sure the service can respond to an “Access-Control-Allow-Origin” or Cross-Origin Resource Sharing (CORS) request.  So basically, the sequence is, if I understand correctly:  1)Page Loads –> 2)JavaScript Request Processed by Browser –> 3)Browser Prepares to Submit Request –> 4)[Chrome] Browser Submits Pre-Flight Request –> 5)Server Responds with HTTP 200 –> 6)Browser Submits Request –> 7)Server Responds with Data –> 8)Page Shows Data This situation occurs for both GET and POST methods.  Typically, GET methods are called with query string parameters so there is no data posted.  Instead, the requesting domain needs to be permitted to request data but generally nothing more is required.  POSTs on the other hand send form data.  Therefore, more configuration is required (you’ll see the configuration below).  AJAX requests are not friendly with this (POSTs) either because they don’t post in a form. How to fix it. 
The team went through many iterations of self-hair removal and we think we finally have a working solution.  The trial-and-error approach eventually worked and we referenced many sources for the information.  I indicate those references above.  There are basically three (3) tasks needed to make this work. Assumptions: You are using Visual Studio, Web API, JavaScript, and have Cross-Origin Resource Sharing, and several browsers. 1. Configure the client Joel Cochran centralized our “cors-oriented” JavaScript (from here). There are two calls including one for GET and one for POST function(url, data, callback) {             console.log(data);             $.support.cors = true;             var jqxhr = $.post(url, data, callback, "json")                 .error(function(jqXhHR, status, errorThrown) {                     if ($.browser.msie && window.XDomainRequest) {                         var xdr = new XDomainRequest();                         xdr.open("post", url);                         xdr.onload = function () {                             if (callback) {                                 callback(JSON.parse(this.responseText), 'success');                             }                         };                         xdr.send(data);                     } else {                         console.log(">" + jqXhHR.status);                         alert("corsAjax.post error: " + status + ", " + errorThrown);                     }                 });         }; The GET CORS JavaScript function (credit to Zoiner Tejada) function(url, callback) {             $.support.cors = true;             var jqxhr = $.get(url, null, callback, "json")                 .error(function(jqXhHR, status, errorThrown) {                     if ($.browser.msie && window.XDomainRequest) {                         var xdr = new XDomainRequest();                         xdr.open("get", url);                         xdr.onload = function () {                             if (callback) {                                 callback(JSON.parse(this.responseText), 'success');                             }                         };                         xdr.send();                     } else {                         alert("CORS is not supported in this browser or from this origin.");                     }                 });         }; The POST CORS JavaScript function (credit to Zoiner Tejada) Now you need to call these functions to get and post your data (instead of, say, using $.Ajax). Here is a GET example: corsAjax.get(url, function(data) { if (data !== null && data.length !== undefined) { // do something with data } }); And here is a POST example: corsAjax.post(url, item); Simple…except…you’re not done yet. 2. Change Web API Controllers to Allow CORS There are actually two steps here.  Do you remember above when we mentioned the “pre-flight” check?  Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access.  So you need to let the server know it’s okay.  This is a two-part activity.  a) Add the appropriate response header Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS.  OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions.  Here is an example of a Web API controller thus decorated: NOTE: You’ll see a lot of references to using “*” in the header value.  For security reasons, Chrome does NOT recognize this is valid. 
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article

  • How to Audit and Monitor BI Publisher Reports Access?

    - by kanichiro.nishida
    Do you know who is accessing to which report at what time at your reporting environment ? As you delivered the BI Publisher reports to the production environment and your users start using them as part of their daily business operations you might wonder such questions. With compliance becoming an integral part of any business requirement, auditing your reporting environment is also becoming one of the most critical and hot agenda in today’s enterprise reporting deployments. Also, I believe that auditing the reporting environment is not just for the compliance, but also the way to understand how your users are using the reports and be able to improve the user reporting experience. BI Publisher have introduced Enterprise Level Auditing feature with its 11G release, with an integration of Oracle Fusion Middleware Audit Framework, which comes out of the box with the installation. Yes, this is another great example of the benefit of its tight integration with Fusion Middleware introduced with BI Publisher 11g release. What Information Can I Know about our Reporting Environment? With this new Auditing feature you can now gain the following insights. When a particular user login or logout What report is accessed by who and when and how How long does it take to process a particular report Yes, it’s all there. This is a great news for 10G users, right ? I used to be one of them working with many different IT organizations and were craving for this, but it’s here now with 11G! How Can I Access to the Auditing Information? With the Fusion Middleware Auditing Framework, BI Publisher feed such information either to a log file or to a database. If you decided to get the data into the database then, of course you know, you can use BI Publisher to report and publish, or visualize the data to gain more insights. One thing though, in order to feed the data it requires a few extra steps, which I’ll cover it later.  Regardless of whether it’s the log file or the database to store the Auditing data, first, you need to enable the Auditing feature, which is not enabled as default. So, let’s take a look at how to enable it. How to Enable Auditing Feature? Here is a quick list of the steps: Enable Auditing related properties in BI Publisher configuration file Copy component_events.xml file to Fusion Middleware Audit Framework’s location Enable Auditing Policy with Fusion Middleware Control (Enterprise Manager) Restart WebLogic Server Enable Auditing related properties in BI Publisher configuration file Open xmlp-server-config.xml file, which is located under $BI_HOME/ user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Configuration directory. Set the following three properties values to ‘true’. AUDIT_ENABLED MONITORING_ENABLED AUDIT_JPS_INTEGRATION The ‘AUDIT_JPS_INTEGRATION’ is not in the file as default, so you need to add this. Here is an example of how it looks for the xmlp-server-config.xml file after the modification. 
<?xml version="1.0" encoding="UTF-8" standalone="no"?><xmlpConfigxmlns="http://xmlns.oracle.com/oxp/xmlp"> <property name="SAW_SERVER" value="adc6160510"/> <property name="SAW_SESSION_TIMEOUT" value="90"/> <property name="DEBUG_LEVEL" value="exception"/> <property name="SAW_PORT" value="7001"/> <property name="SAW_PASSWORD" value=""/> <property name="SAW_PROTOCOL" value="http"/> <property name="SAW_VERSION" value="v6"/> <property name="SAW_USERNAME" value=""/> <property name="SAW_URL_SUFFIX" value="analytics/saw.dll"/> <property name="MONITORING_ENABLED" value="true"/> <property name="MONITORING_DEFAULT_HISTORY_SIZE" value="30"/> <property name="AUDIT_ENABLED" value="true"/> <property name="JSESSION_RESET_DISABLED" value="true"/> <property name="SECURITY_MODEL" value="ORACLE_AS_JPS"/> <property name="AUDIT_JPS_INTEGRATION" value="true"/> </xmlpConfig>   Copy component_events.xml file to Audit Framework’s location There is a Audit related configuration file provided by BI Publisher that needs to be copied to the Audit Framework location. 1. Go to the following directory. $BI_HOME /oracle_common/modules/oracle.iau_11.1.1/components 2. Create a directory called ‘xmlpserver’ 3. Copy component_events.xml file from /user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Audit To the newly created ‘xmlpserver’ directory. Enable Auditing Policy with Fusion Middleware Control (EM) Now you can set a level of the auditing for each BI Publisher’s auditing type by using Fusion Middleware Control (a.k.a. Enterprise Manager). 1. Login to Fusion Middleware Control UI http://hostname:port/em (e.g. reporting.oracle.com:7001/em) 2. Access to Audit Policy configuration UI from the menu Under WebLogic Domain, right-click bifoundation_domain, select Security and then click Audit Policy.   3. Set Audit Level for BI Publisher. While you can select ‘Custom’ to set a customized level of Auditing for each component, I’m selecting ‘Medium’ for this exercise.   Restart WebLogic Server After all the above settings, now you need to restart the WebLogic Server instance in order to take those changes in effect. If you’re on Windows you can simply do this by selecting ‘Stop BI Servers’ and ‘Start BI Servers’ from the Start menu. If you’re on Linux then you can run ‘stopWebLogic.sh’ and ‘startWebLogic.sh’, which can be found under $BI_HOME/user_projects/domains/bifoundation_domain/bin Start Auditing! 
Now assuming that you have completed the above steps successfully, then from this point on any reporting activity should be audited and stored in the auditing log file, which can be found at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/auditlogs/xmlpserver/audit.log And here is a sample of the log file: 2011-02-18 02:25:49.928 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "200" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 86608512 486989824 24517 169 - - - 2011-02-18 02:25:49.929 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "200" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - - 2011-02-18 03:25:49.554 "" "ReportDataProcess" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "260" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" - - - - - - - - - - - - - - - - - 34980200 554033152 - 134 - - - 2011-02-18 03:25:50.282 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "263" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 16158944 554033152 24517 503 - - - 2011-02-18 03:25:50.282 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "263" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - - 2011-02-18 03:30:00.448 "barack.obama" "UserLogin" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000406,0" - - - - "bipublisher(11.1.1)" "UserSession" "26" "" - - - - - - - - - - - - - - - - - - - - - - - - - From the above log file you can tell a user ‘steve.jobs’ was running some reports like ‘Balance Letter’ around afternoon on 2/18 and another user ‘barack.obama’ logged into the system at 3:30 on the same day. Yes, every login and log out will be recorded, and every report access will be recorded in this log file. Now, looking at this text file to understand what’s going on is pretty overwhelming. And accessing to this log file, which is located at the server’s file system where the BI Publisher/WebLogic Server are running, is another challenge in typical deployment scenarios. And that’s where the database storage option for the Auditing data  comes into a picture. I’ll talk about this tomorrow, so stay tuned!  

    Read the article

  • JQGrid PDF Export

    - by thanigai
Originally posted on: http://geekswithblogs.net/thanigai/archive/2013/06/17/jqgrdi-pdf-export.aspx

JQGrid PDF Export

The aim of this article is to address PDF export from client-side grid frameworks. The solution is built with ASP.NET MVC 4 and Visual Studio 2012. The article assumes the developer has a fair amount of knowledge of ASP.NET MVC and C#. Tools Used: Visual Studio 2012, ASP.NET MVC 4, NuGet Package Manager. JQGrid is one of the client grid frameworks built on top of the jQuery framework. It helps in building a beautiful grid with paging, sorting and editing options. There are also other features available as extension plugins, and developers can write their own if needed. You can download JQGrid from the JQGrid homepage or as a NuGet package. I have given below the command to download JQGrid through the package manager console. From the Tools menu select “Library Package Manager” and then select “Package Manager Console”. I have given the screenshot below. This command will pull down the latest JQGrid package and add it to the Scripts folder. Once the script is downloaded and referenced in the project, update the bundle config file to add the script reference in the pages. BundleConfig can be found in the App_Start folder in the project structure.

bundles.Add(new StyleBundle("~/Content/jqgrid").Include("~/Content/ui.jqgrid.css"));
bundles.Add(new ScriptBundle("~/bundles/jquerygrid").Include("~/Scripts/jqGrid/jquery.jqGrid*"));

Once the configs are added, refer to the bundles in Views/Shared/LayoutPage.cshtml. Add the following line to the head section of the page.

@Styles.Render("~/Content/jqgrid")

Add the following lines to the end of the page, before the closing html tags.

@Scripts.Render("~/bundles/jquery")
@Scripts.Render("~/bundles/jqueryui")
@Scripts.Render("~/bundles/jquerygrid")

That’s all to be done from the view perspective. Once these steps are done the developer can start coding for the JQGrid. In this example we will modify the HomeController for the demo. The Index action will be the default action. We will add an argument to this Index action; let it be a nullable bool, just to mark the PDF request. In Index.cshtml we will add a table tag with an id of “gridTable”. We will use this table for making the grid. Since JQGrid is an extension of jQuery, we initialize the grid settings in the script section of the page. This script section is placed at the end of the page to improve performance, just below the bundle references for jQuery and jQuery UI. This is one of the improvement factors from Yahoo’s “why slow” guidance.

<table id="gridTable" class="scroll"></table>
<input type="button" value="Export PDF" onclick="exportPDF();" />

@section scripts {
<script type="text/javascript">
    $(document).ready(function () {
        $("#gridTable").jqGrid({
            datatype: "json",
            url: '@Url.Action("GetCustomerDetails")',
            mtype: 'GET',
            colNames: ["CustomerID", "CustomerName", "Location", "PrimaryBusiness"],
            colModel: [
                { name: "CustomerID", width: 40, index: "CustomerID", align: "center" },
                { name: "CustomerName", width: 40, index: "CustomerName", align: "center" },
                { name: "Location", width: 40, index: "Location", align: "center" },
                { name: "PrimaryBusiness", width: 40, index: "PrimaryBusiness", align: "center" }
            ],
            height: 250,
            autowidth: true,
            sortorder: "asc",
            rowNum: 10,
            rowList: [5, 10, 15, 20],
            sortname: "CustomerID",
            viewrecords: true
        });
    });

    function exportPDF() {
        document.location = '@Url.Action("Index")?pdf=true';
    }
</script>
}

The exportPDF method just sets the document location to the Index action method with the pdf Boolean set to true, to mark the request as a PDF download. An in-memory list collection is used for demo purposes. The GetCustomerDetails method is the server-side action method that will provide the data as a JSON list. We will see the method explanation below.

[HttpGet]
public JsonResult GetCustomerDetails()
{
    var result = new
    {
        total = 1,
        page = 1,
        records = customerList.Count(),
        rows = (customerList.Select(e => new
        {
            id = e.CustomerID,
            cell = new string[] { e.CustomerID.ToString(), e.CustomerName, e.Location, e.PrimaryBusiness }
        })).ToArray()
    };
    return Json(result, JsonRequestBehavior.AllowGet);
}

JQGrid expects the response data from the server in a certain format, and the server method shown above takes care of formatting the response so that JQGrid understands the data properly. The response data should contain the total pages, the current page, the full record count, and the rows of data, each with an id and the remaining columns as a string array. The response is built using an anonymous object and is sent as an MVC JsonResult. Since we are using HTTP GET, it’s better to mark the method with the HttpGet attribute and also set the JSON request behavior to AllowGet. The in-memory list is initialized in the HomeController constructor for reference.

public class HomeController : Controller
{
    private readonly IList<CustomerViewModel> customerList;

    public HomeController()
    {
        customerList = new List<CustomerViewModel>()
        {
            new CustomerViewModel { CustomerID = 100, CustomerName = "Sundar", Location = "Chennai", PrimaryBusiness = "Teaching" },
            new CustomerViewModel { CustomerID = 101, CustomerName = "Sudhagar", Location = "Chennai", PrimaryBusiness = "Software" },
            new CustomerViewModel { CustomerID = 102, CustomerName = "Thivagar", Location = "China", PrimaryBusiness = "SAP" },
        };
    }

    public ActionResult Index(bool? pdf)
    {
        if (!pdf.HasValue)
        {
            return View(customerList);
        }
        else
        {
            string filePath = Server.MapPath("Content") + "\\Sample.pdf";
            ExportPDF(customerList, new string[] { "CustomerID", "CustomerName", "Location", "PrimaryBusiness" }, filePath);
            return File(filePath, "application/pdf", "list.pdf");
        }
    }
}

The Index action method has a nullable Boolean argument named “pdf”. It is used to indicate a PDF download. When the application starts, this method is hit first for the initial page request. For the PDF operation a file path is generated and then passed to the ExportPDF method, which takes care of generating the PDF from the data source. The ExportPDF method is listed below.

private static void ExportPDF<TSource>(IList<TSource> customerList, string[] columns, string filePath)
{
    Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
    Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);
    Document document = new Document(PageSize.A4);
    PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
    document.Open();

    PdfPTable table = new PdfPTable(columns.Length);
    foreach (var column in columns)
    {
        PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
        cell.BackgroundColor = Color.BLACK;
        table.AddCell(cell);
    }

    foreach (var item in customerList)
    {
        foreach (var column in columns)
        {
            string value = item.GetType().GetProperty(column).GetValue(item).ToString();
            PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
            table.AddCell(cell5);
        }
    }

    document.Add(table);
    document.Close();
}

iTextSharp is one of the pioneers in PDF export. It’s an open-source library readily available as a NuGet package; installing it from the Package Manager Console pulls down the latest available library. I am using version 4.1.2.0 — the latest version may have changed. There are three main things in this library.

Document: the document class takes care of creating the document sheet with a particular size. We have used A4; there is also an option to define a rectangle size. This document instance is used in the following methods for reference.

PdfWriter: PdfWriter takes the file stream and the document as references. This class enables the document class to generate the PDF content and save it to a file.

Font: using the Font class the developer can control the font features. Since I need a nice-looking font I am using Verdana.

Following this, PdfPTable and PdfPCell are used for generating the normal table layout. We have created two sets of fonts, one for the header and one for the rows.

Font headerFont = FontFactory.GetFont("Verdana", 10, Color.WHITE);
Font rowfont = FontFactory.GetFont("Verdana", 10, Color.BLUE);

We receive the header columns as a string array. The columns argument array is looped over and the header is generated, using headerFont for this purpose.

PdfWriter writer = PdfWriter.GetInstance(document, new FileStream(filePath, FileMode.OpenOrCreate));
document.Open();
PdfPTable table = new PdfPTable(columns.Length);
foreach (var column in columns)
{
    PdfPCell cell = new PdfPCell(new Phrase(column, headerFont));
    cell.BackgroundColor = Color.BLACK;
    table.AddCell(cell);
}

Then reflection is used to generate the row-wise details and form the grid.

foreach (var item in customerList)
{
    foreach (var column in columns)
    {
        string value = item.GetType().GetProperty(column).GetValue(item).ToString();
        PdfPCell cell5 = new PdfPCell(new Phrase(value, rowfont));
        table.AddCell(cell5);
    }
}
document.Add(table);
document.Close();

Once the process is done, the table is added to the document and the document is closed to write all the changes to the given file path. Control then moves back to the controller, which takes care of sending the response as a file result with a file name. If the file name is not given, the PDF will open in the same page; otherwise a popup will open asking whether to save or open the file.

return File(filePath, "application/pdf", "list.pdf");

The final result screen is shown below, with the PDF file opened to show the output. Conclusion: This is how PDF export is done for JQGrid. The problem addressed here is that client-side grid frameworks do not support PDF export themselves; in that case it’s better to have fine-grained control over the data and the generated PDF. iTextSharp has helped us achieve our goal.
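One refinement worth noting: the GetCustomerDetails action above always returns total = 1 and page = 1, which is fine for a three-row demo list but breaks jqGrid's pager on larger data sets. Below is a rough, hedged sketch of a paging-aware version. It assumes jqGrid's default request parameter names (page, rows, sidx, sord) and, for brevity, only sorts by CustomerID; a real implementation would apply sidx dynamically and usually query a database rather than the in-memory list.

[HttpGet]
public JsonResult GetCustomerDetails(int page = 1, int rows = 10, string sidx = "CustomerID", string sord = "asc")
{
    int totalRecords = customerList.Count;
    int totalPages = (int)Math.Ceiling(totalRecords / (double)rows);

    // Sort (by CustomerID only, for brevity), then take the requested page.
    var ordered = (sord == "desc")
        ? customerList.OrderByDescending(c => c.CustomerID)
        : customerList.OrderBy(c => c.CustomerID);

    var pageRows = ordered
        .Skip((page - 1) * rows)
        .Take(rows)
        .Select(e => new
        {
            id = e.CustomerID,
            cell = new string[] { e.CustomerID.ToString(), e.CustomerName, e.Location, e.PrimaryBusiness }
        })
        .ToArray();

    var result = new { total = totalPages, page = page, records = totalRecords, rows = pageRows };
    return Json(result, JsonRequestBehavior.AllowGet);
}

With this in place, the rowNum and rowList settings in the grid definition drive real paging instead of always showing the whole list on one page.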

    Read the article

  • New Big Data Appliance Security Features

    - by mgubar
    The Oracle Big Data Appliance (BDA) is an engineered system for big data processing.  It greatly simplifies the deployment of an optimized Hadoop Cluster – whether that cluster is used for batch or real-time processing.  The vast majority of BDA customers are integrating the appliance with their Oracle Databases and they have certain expectations – especially around security.  Oracle Database customers have benefited from a rich set of security features:  encryption, redaction, data masking, database firewall, label based access control – and much, much more.  They want similar capabilities with their Hadoop cluster.    Unfortunately, Hadoop wasn’t developed with security in mind.  By default, a Hadoop cluster is insecure – the antithesis of an Oracle Database.  Some critical security features have been implemented – but even those capabilities are arduous to setup and configure.  Oracle believes that a key element of an optimized appliance is that its data should be secure.  Therefore, by default the BDA delivers the “AAA of security”: authentication, authorization and auditing. Security Starts at Authentication A successful security strategy is predicated on strong authentication – for both users and software services.  Consider the default configuration for a newly installed Oracle Database; it’s been a long time since you had a legitimate chance at accessing the database using the credentials “system/manager” or “scott/tiger”.  The default Oracle Database policy is to lock accounts thereby restricting access; administrators must consciously grant access to users. Default Authentication in Hadoop By default, a Hadoop cluster fails the authentication test. For example, it is easy for a malicious user to masquerade as any other user on the system.  Consider the following scenario that illustrates how a user can access any data on a Hadoop cluster by masquerading as a more privileged user.  In our scenario, the Hadoop cluster contains sensitive salary information in the file /user/hrdata/salaries.txt.  When logged in as the hr user, you can see the following files.  Notice, we’re using the Hadoop command line utilities for accessing the data: $ hadoop fs -ls /user/hrdataFound 1 items-rw-r--r--   1 oracle supergroup         70 2013-10-31 10:38 /user/hrdata/salaries.txt$ hadoop fs -cat /user/hrdata/salaries.txtTom Brady,11000000Tom Hanks,5000000Bob Smith,250000Oprah,300000000 User DrEvil has access to the cluster – and can see that there is an interesting folder called “hrdata”.  $ hadoop fs -ls /user Found 1 items drwx------   - hr supergroup          0 2013-10-31 10:38 /user/hrdata However, DrEvil cannot view the contents of the folder due to lack of access privileges: $ hadoop fs -ls /user/hrdata ls: Permission denied: user=drevil, access=READ_EXECUTE, inode="/user/hrdata":oracle:supergroup:drwx------ Accessing this data will not be a problem for DrEvil. He knows that the hr user owns the data by looking at the folder’s ACLs. To overcome this challenge, he will simply masquerade as the hr user. On his local machine, he adds the hr user, assigns that user a password, and then accesses the data on the Hadoop cluster: $ sudo useradd hr $ sudo passwd $ su hr $ hadoop fs -cat /user/hrdata/salaries.txt Tom Brady,11000000 Tom Hanks,5000000 Bob Smith,250000 Oprah,300000000 Hadoop has not authenticated the user; it trusts that the identity that has been presented is indeed the hr user. Therefore, sensitive data has been easily compromised. 
Clearly, the default security policy is inappropriate and dangerous to many organizations storing critical data in HDFS. Big Data Appliance Provides Secure Authentication The BDA provides secure authentication to the Hadoop cluster by default – preventing the type of masquerading described above. It accomplishes this thru Kerberos integration. Figure 1: Kerberos Integration The Key Distribution Center (KDC) is a server that has two components: an authentication server and a ticket granting service. The authentication server validates the identity of the user and service. Once authenticated, a client must request a ticket from the ticket granting service – allowing it to access the BDA’s NameNode, JobTracker, etc. At installation, you simply point the BDA to an external KDC or automatically install a highly available KDC on the BDA itself. Kerberos will then provide strong authentication for not just the end user – but also for important Hadoop services running on the appliance. You can now guarantee that users are who they claim to be – and rogue services (like fake data nodes) are not added to the system. It is common for organizations to want to leverage existing LDAP servers for common user and group management. Kerberos integrates with LDAP servers – allowing the principals and encryption keys to be stored in the common repository. This simplifies the deployment and administration of the secure environment. Authorize Access to Sensitive Data Kerberos-based authentication ensures secure access to the system and the establishment of a trusted identity – a prerequisite for any authorization scheme. Once this identity is established, you need to authorize access to the data. HDFS will authorize access to files using ACLs with the authorization specification applied using classic Linux-style commands like chmod and chown (e.g. hadoop fs -chown oracle:oracle /user/hrdata changes the ownership of the /user/hrdata folder to oracle). Authorization is applied at the user or group level – utilizing group membership found in the Linux environment (i.e. /etc/group) or in the LDAP server. For SQL-based data stores – like Hive and Impala – finer grained access control is required. Access to databases, tables, columns, etc. must be controlled. And, you want to leverage roles to facilitate administration. Apache Sentry is a new project that delivers fine grained access control; both Cloudera and Oracle are the project’s founding members. Sentry satisfies the following three authorization requirements: Secure Authorization:  the ability to control access to data and/or privileges on data for authenticated users. Fine-Grained Authorization:  the ability to give users access to a subset of the data (e.g. column) in a database Role-Based Authorization:  the ability to create/apply template-based privileges based on functional roles. With Sentry, “all”, “select” or “insert” privileges are granted to an object. The descendants of that object automatically inherit that privilege. A collection of privileges across many objects may be aggregated into a role – and users/groups are then assigned that role. This leads to simplified administration of security across the system. Figure 2: Object Hierarchy – granting a privilege on the database object will be inherited by its tables and views. Sentry is currently used by both Hive and Impala – but it is a framework that other data sources can leverage when offering fine-grained authorization. 
For example, one can expect Sentry to deliver authorization capabilities to Cloudera Search in the near future. Audit Hadoop Cluster Activity Auditing is a critical component to a secure system and is oftentimes required for SOX, PCI and other regulations. The BDA integrates with Oracle Audit Vault and Database Firewall – tracking different types of activity taking place on the cluster: Figure 3: Monitored Hadoop services. At the lowest level, every operation that accesses data in HDFS is captured. The HDFS audit log identifies the user who accessed the file, the time that file was accessed, the type of access (read, write, delete, list, etc.) and whether or not that file access was successful. The other auditing features include: MapReduce:  correlate the MapReduce job that accessed the file Oozie:  describes who ran what as part of a workflow Hive:  captures changes were made to the Hive metadata The audit data is captured in the Audit Vault Server – which integrates audit activity from a variety of sources, adding databases (Oracle, DB2, SQL Server) and operating systems to activity from the BDA. Figure 4: Consolidated audit data across the enterprise.  Once the data is in the Audit Vault server, you can leverage a rich set of prebuilt and custom reports to monitor all the activity in the enterprise. In addition, alerts may be defined to trigger violations of audit policies. Conclusion Security cannot be considered an afterthought in big data deployments. Across most organizations, Hadoop is managing sensitive data that must be protected; it is not simply crunching publicly available information used for search applications. The BDA provides a strong security foundation – ensuring users are only allowed to view authorized data and that data access is audited in a consolidated framework.

    Read the article

  • FMw Diagnostic Framework : Automatic Capture of Diagnostic Data Upon First Failure!

    - by Daniel Mortimer
    Introduction There is nothing more frustrating than a problem that "cannot be reproduced". Logs, configuration files have been analysed but there just isn't enough information to establish the root cause. The issue maybe closed, but you are left with the feeling that the problem will raise its ugly head again in the future. Trouble is, to resolve such issues you need to capture diagnostic data at the exact time the incident occurs. Step forward Fusion Middleware Diagnostic Framework!  Diagnostic Framework monitors WebLogic Managed Servers and delivers "Automatic capture of diagnostic data upon first failure". To quote fromOracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1)Chapter 13 Diagnosing Problems "When a critical error occurs ... the Diagnostic Framework automatically collects diagnostics, such as thread dumps, DMS metric dumps, and WebLogic Diagnostics Framework (WLDF) server image dumps ... The data is stored in a file-based repository and is accessible with command-line utilities." In other words the data collected upon first failure - especially the thread and image dumps - provides a snapshot of the system as or immediately after the problem occurs. The table below shows the type of WebLogic Server issues which fall into the scope of Diagnostic Framework How to Configure Diagnostic Framework? Depending on your Fusion Middleware product choice you may not need to do anything! Diagnostic Framework is automatically installed, configured and initiated for any WebLogic Domain which has the Oracle Java Required Files (JRF) template applied. This template is applied by default whenever you configure WebLogic Managed Servers for products such as Portal / Forms / Reports / Discoverer Identity Management ( OID , OAM , OIM etc) WebCenter SOA Check your WebLogic Domain directory structure. If you have an "adr" sub directory under DOMAIN_HOME/servers/<servername>/ then JRF template has been applied and Diagnostic Framework will be in play. Should the "adr" sub directory not exist, review the advice given in My Oracle Support article How to Apply FMW ( EM ) Control and JRF to a WebLogic Domain and Managed Servers [ID 947043.1] If you are working with a standalone WebLogic Server solution and applying Oracle JRF is not acceptable, consider using WLDF - WebLogic Diagnostic Framework. (Fusion Middleware Diagnostic Framework makes use of WLDF under the covers.) Couple of useful links about WLDF are listed below Configuring and Using the Diagnostics Framework for Oracle WebLogic Server 11g WebLogic Diagnostics Framework-A Very Useful Tool [A nice blog which describes a WLDF use case] How to Get Started With Diagnostic Framework To be frank, the Fusion Middleware Administrator's Guide is the best place to start your learning Oracle Fusion Middleware Administrator's Guide 11g Release 1 (11.1.1)Chapter 13 Diagnosing Problems A lot of reading here,  but if you are in hurry and just want to get the right information to Oracle Support to help resolve your issue, check out the next section below. How to Upload Diagnostic Framework Incident Data to Oracle Support Some Background Information There are three interfaces to the Repository: Enterprise Manager Cloud Control (Support Workbench) WLST (Command Line) ADRCI (Command Line) The Enterprise Manager Cloud Control does provide a nice GUI interface to search, view and package diagnostic framework incidents. However, this software is not to be confused with Fusion Middleware (EM) Control. 
Cloud Control (formerly known as Grid Control) is part of the Enterprise Manager media package. EM Cloud Control has it's own install and configuration story. Therefore, for the benefit of those yet to install and play with Cloud Control, I am going to describe how to use the command line tools. Ideally, you would only need to one command line interface, but currently I suggest using both - mainly due to the fact that ADRCI SHOW INCIDENTS does not reveal the description behind the Diagnostic Framework error code. Instructions: Note: WLST and ADRCI are case sensitive when it comes to handling parameter values. If you make a mistake, expect an unfriendly syntax error message. 1) Find the incident Note: The managed server which you are troubleshooting must be up and running. If the managed server is down, ensure the domain's Admin Server is accessible. If you cannot connect to the Admin Server or the Managed Server the example WLST commands will not work. a) Launch WLST  Note: Use the WLST which resides in the "oracle_common" directory (not WL_HOME/common/bin) otherwise you will get a syntax error like the one below Traceback (innermost last):  File "<console>", line 1, in ?NameError: listIncidents MW_HOME/oracle_common/common/bin/wlst.sh b) Connect to the managed server or the admin server e.g. wls:/offline> connect('weblogic','welcome1','t3://localhost:7020') c) Run the command wls:/MyDomain/serverConfig> listIncidents() This will list the incidents for the server to which you have connected. If you have connected to the Admin Server and want to list the incidents for a managed server within the domain, use the command wls:/MyDomain/serverConfig> listIncidents(adrHome='diag\ofm\MyDomain\MyManagedServer' ,server='MyManagedServer') Example output Incident Id     Problem Key              Incident Time         1       DFW-99998 [java.lang.NullPointerException] [oracle.error.simulator.ErrorSimulator.createNullPointerException][errorWebApp_1-0-0-0]        Fri Nov 02 10:38:46 GMT 2012  The piece highlighted in bold is the description you do not see when using the ADRCI 'SHOW INCIDENT' command. Make a note of the incident id. You are ready to move to step 2 2. Package the incident a) Set up the environment - example commands below are for Unix cd <DOMAIN_HOME>/bin . ./setDomainEnv.sh If you want ADRCI to run a Remote Diagnostic Agent collection (recommended) at generate package time, point ORACLE_HOME at oracle_common ORACLE_HOME=$MW_HOME/oracle_common; export ORACLE_HOME To prevent ADRCI from running RDA at generate package time, point ORACLE_HOME at WL_HOME/server/adr directory.  ORACLE_HOME=$WL_HOME/server/adr; export ORACLE_HOME b) Launch adrci $WL_HOME/server/adr/adrci c) Set BASE and HOMEPATH adrci> SET BASE /oracle/middleware/user_projects/domains/ mydomain/servers/mymanagedserver/adr adrci> SET HOMEPATH diag/ofm/mydomain/mymanagedserver d)  Optionally run SHOW INCIDENTS e.g. 
adrci> SHOW INCIDENTS -MODE DETAIL ADR Home = /oracle/middleware/user_projects/domains/mydomain/ servers/mymanagedserver/adr/diag/ofm/mydomain/mymanagedserver:***********************************************************************************************************************************INCIDENT INFO RECORD 1**********************************************************   INCIDENT_ID                   1   STATUS                        ready   CREATE_TIME                   2012-11-02 10:38:46.468000 +00:00   PROBLEM_ID                    1   CLOSE_TIME                    <NULL>   FLOOD_CONTROLLED              none   ERROR_FACILITY                DFW   ERROR_NUMBER                  99998   ERROR_ARG1                    <NULL>   ERROR_ARG2                    <NULL>   ERROR_ARG3                    <NULL>   ERROR_ARG4                    <NULL>   ERROR_ARG5                    <NULL>   ERROR_ARG6                    <NULL>   ERROR_ARG7                    <NULL>   ERROR_ARG8                    <NULL>   ERROR_ARG9                    <NULL>   ERROR_ARG10                   <NULL>   ERROR_ARG11                   <NULL>   ERROR_ARG12                   <NULL>   SIGNALLING_COMPONENT          <NULL>   SIGNALLING_SUBCOMPONENT       <NULL>   SUSPECT_COMPONENT             <NULL>   SUSPECT_SUBCOMPONENT          <NULL>   ECID                          5162744c6a2eea5e:155ff445:13ac0aae7cb:-8000-0000000000000325   IMPACTS                       01 rows fetched e)  Create a logical package IPS CREATE PACKAGE INCIDENT incident_number e.g. adrci> IPS CREATE PACKAGE INCIDENT 1Created package 1 based on incident id 1, correlation level typical f) Generate the package IPS GENERATE PACKAGE package_number IN path e.g. adrci> IPS GENERATE PACKAGE 1 IN /tmp Generated package 1 in file /tmp/DFW99998j_20121102113633_COM_1.zip, mode complete Note: If the generate package command hangs, ADRCI may be experiencing an issue when running RDA. To avoid such trouble, exit ADRCI and point the ORACLE_HOME environment variable at WL_HOME/server/adr 3) Upload the package zip to Oracle Support via your Service Request a) Log into My Oracle Support and locate your Service Request b) Click on "Add Attachments c) And upload the zip file

    Read the article

  • Ubuntu 12.04 does not see windows already install on my computer (dual installation)

    - by jacinta
    I was trying to install the ubuntu 12.4 along side windows 7 on my new HP Pavilion 64k desktop with windows 7 computer but Ubuntu said that ( This computer has no detected operating system) and some one said (I suggest you chkdsk your Windows partition. I also suggest you resize the NTFS in WIndows then install Ubuntu to the free space.) Therefore I did (To shrink a simple or spanned volume using the Windows interface In Disk Management, right-click the simple or spanned volume you want to shrink. Click Shrink Volume…. Follow the instructions on your screen.) Then When I try to install ubuntu 12.4 after doing this, I received the same error. I was going to undo what I did but I see that I lose 1g when I do that so now what do I do? it says I can do a new simple volume and maybe then the space will no longer be unallocated. Please help me. I think I have a bad cd (ubuntu 12.4) cause from my research I see that I am not suppose to get a screen saying that (The computer has no detected operating system) I think this is a bad cd and I hope I did not mess up my computer. Please help. .................................................................................... O k I think I am following what you said about how to edit my question irrational john. I did chkdsk as you and actionparsnip (andrew-woodhead666) told me to AND ALSO did a lot of other things before I found out how to chkdsk. No problems. Thank you. Then I put back the space (extended) I took from system. I still was only able to put back 15 and not 16 so it is up to 99mb not back to 100mb. Then I shrank HP (C) as you told me, to 10 13,240 mb which is (12.93gb Unallocated). I did not change it into NTSF by doing the (New Simple Volume Action) I just left it. Then I tried to install UBUNTU 12.04 live CD amd64 and it gave me the results it was sometimes giving me before which is result (THAT Ubuntu) does not tell me weather I have or have not an already installed windows7. It just goes to a window that would have showed me information on what I have and on the bottom (DEVICE FOR BOOT LOADER INSTALLATION /dev/sda ) and the option to go BACK, QUIT, or INSTALL. (I think it is the INSTALLATION TYPE window). Therefore I do what I have been doing and I QUIT. What do I do now? Sorry that it seems like I cannot do anything on my own. On the Youtube video how to install ubuntu dual-boot alongside windows UBUNTU is installed so easy. The installation option page gives 3 options including dual instillation and the disk even lets you use a slider to slide to the size of the partition size you want. Yet my UBUNTU live cd is a mess and I checked it as one of you guys told me and got back information that it is good. Oh well this guy says you should press a control key to tell which device you are using to install ubuntu before the screen comes up. I guess cause it is old. This page also shows you easy stuff that do not show up on my cd. how to dual-boot UBUNTU and windows 7 P.S.. I saw this on the windows 7 website windows.microsoft.com/en-US/windows7/Formatting-disks-and-drives-frequently-asked-questions CREATE A BOOT PARTITION I HAD TO LEAVE OUT THE HTTP STUFF CAUSE I AM ONLY ALLOWED 2 ON A PAGE IT SAID To create a boot partition Warning Warning If you are installing different versions of Windows, you must install the earliest version first. If you don't do this, your computer may become inoperable. 
Open Computer Management by clicking the Start button Picture of the Start button, clicking Control Panel, clicking System and Security, clicking Administrative Tools, and then double-clicking Computer Management.? Administrator permission required If you're prompted for an administrator password or confirmation, type the password or provide confirmation. In the left pane, under Storage, click Disk Management. Right-click an unallocated region on your hard disk, and then click New Simple Volume. In the New Simple Volume Wizard, click Next. Type the size of the volume you want to create in megabytes (MB) or accept the maximum default size, and then click Next. Accept the default drive letter or choose a different drive letter to identify the volume, and then click Next. In the Format Partition dialog box, do one of the following: If you don't want to format the volume right now, click Do not format this volume, and then click Next. To format the volume with the default settings, click Next. For more information about formatting, see Formatting disks and drives: frequently asked questions. Review your choices, and then click Finish. AND THIS ON ANOTHER PAGE. Formatting disks and drives: frequently asked questions Hard disks, the primary storage devices on your computer, need to be formatted before you can use them. When you format a disk, you configure it with a file system so that Windows can store information on the disk. Hard disks in new computers running Windows are already formatted. If you buy an additional hard disk to expand the storage of your computer, you might need to format it. Storage devices such as USB flash drives and flash memory cards usually come preformatted by the manufacturer, so you probably won't need to format them. CDs and DVDs, on the other hand, use different formats from hard disks and removable storage devices. For information about formatting CDs and DVDs, see Which CD or DVD format should I use? Warning Warning Formatting erases any existing files on a hard disk. If you format a hard disk that has files on it, the files will be deleted. WHAT I DID WAS I GOT TO COMPUTER MANAGEMENT SECTION THEN I CLICKED ON DRIVE HP(C) (it put stripes on to show it is selected) Then I click on ACTION selected ALL TASKS AND THEN selected SHRINK VOLUME and then chose how much space from what it was giving me that I wanted. (12.93gb) AND THAT WAS ALL I DID. THEN I TRIED TO INSTALL UBUNTU i NEVER GOT THE 3RD SCREEN THAT IS IN THE VIDEO I INCLUDED (THE YOUTUBE WITH THE ENGLISH GUY) INSTALLATION TYPE I ALSO DID NOT GET THE 4TH SCREEN THAT ALLOWS YOU TO SELECT PARTITION SIZE what i got next was the 2nd INSTILLATION TYPE window shown on the (LINUX BS DOS.COM) PAGE THAT I INCLUDED and it showed no information about any drives (no drives /partition or stuff was shown) only the Boot Loader statement and the dev/sda bar and that's why i did not press install but chose to QUIT. SORRY I JUST NOW SAW YOUR ANSWER IRRATIONAL JOHN. I SHRANK HP(C) BY 12.93GB MY UNALLOCATED SPACE IS NOW 12.93GB HP(C) = 907.17gb NTSF...YOU ARE CORRECT WITH EVERYTHING YOU SAID This is what i read on (http://)windows.microsoft.com/en-US/windows7/Create-a-boot-partition I am only allowed 2 links Create a boot partition You must be logged on as an administrator to perform these steps. A boot partition is a partition that contains the files for the Windows operating system. 
If you want to install a second operating system on your computer (called a dual-boot or multiboot configuration), you need to create another partition on the hard disk, and then install the additional operating system on the new partition. Your hard disk would then have one system partition and two boot partitions. (A system partition is the partition that contains the hardware-related files. These tell the computer where to look to start Windows.) To create a partition on a basic disk, there must be unallocated disk space on your hard disk. With Disk Management, you can create a maximum of three primary partitions on a hard disk. You can create extended partitions, which include logical drives within them, if you need more partitions on the disk. Picture of disk space in Computer ManagementUnallocated disk space If there is no unallocated space, you will either need to create space by shrinking or deleting an existing partition or by using a third-party partitioning tool to repartition your hard disk. For more information, see Can I repartition my hard disk? To create a boot partition Warning Warning If you are installing different versions of Windows, you must install the earliest version first. If you don't do this, your computer may become inoperable. Open Computer Management by clicking the Start button Picture of the Start button, clicking Control Panel, clicking System and Security, clicking Administrative Tools, and then double-clicking Computer Management.? Administrator permission required If you're prompted for an administrator password or confirmation, type the password or provide confirmation. In the left pane, under Storage, click Disk Management. Right-click an unallocated region on your hard disk, and then click New Simple Volume. In the New Simple Volume Wizard, click Next. Type the size of the volume you want to create in megabytes (MB) or accept the maximum default size, and then click Next. Accept the default drive letter or choose a different drive letter to identify the volume, and then click Next. In the Format Partition dialog box, do one of the following: If you don't want to format the volume right now, click Do not format this volume, and then click Next. To format the volume with the default settings, click Next. For more information about formatting, see Formatting disks and drives: frequently asked questions. Review your choices, and then click Finish. I did what you told me @irrational john and this is the screen shot. I ENTERED ubuntu@ubuntu:~$ sudo os-prober computer did not respond so I entered ubuntu@ubuntu:~$ sudo apt-get -y remove dmraid computer responded with Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: dmraid 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. After this operation, 141 kB disk space will be freed. (Reading database ... 147515 files and directories currently installed.) Removing dmraid ... update-initramfs is disabled since running on read-only media Processing triggers for man-db ... I entered ubuntu@ubuntu:~$ sudo os-prober Computer Responded with /dev/sda1:Windows 7 (loader):Windows:chain /dev/sda3:Windows Recovery Environment (loader):Windows1:chain ubuntu@ubuntu:~$ ............... @obsessiveFOSS I don't know what is a Grub menu and I do not know what is the Ubuntu boot option The answer you gave to me was correct. This one {This apparently removes the dmraid metadata. 
After doing that, you can use the desktop icon Install Ubuntu 12.04 LTS to start the Ubuntu installer. This time the Installation Type window should contain the option to Install Ubuntu alongside Windows 7.} This is what I decided to do. I did not see the rest of your help 'till now. Never the less. I think the best thing for me to do now is to get a cheap used laptop and either do a dual installation or just install Ubuntu on to it. This way if I have any issues that I cannot solve like the one I had here, at least I will still have a usable computer to work on and to use to get answers with because I am not an expert like the people on this forum. Thanks a lot I will try to keep learning and do research enough to some day help someone else.

    Read the article

  • CodePlex Daily Summary for Tuesday, July 23, 2013

    CodePlex Daily Summary for Tuesday, July 23, 2013Popular ReleasesHome Access Plus+: v9.3: UPDATED RELEASE Fixed: Issue with enabling LDAP encryption Added: Encrypted/Signed LDAP Queries Fixed: Wrong Modernizr reference on the setup page Changed: Code on the AD Tree Picker Upgraded: Dynatree Fixed: Removed Dynatree Debug Data Added: Time bar for the current day Added: Code to block booking/removing of bookings after the lesson time has past Fixed: New sidebar position issue Added: Decode for the NotesMagelia WebStore Open-source Ecommerce software: Magelia WebStore 2.4: Magelia WebStore version 2.4 introduces new and improved features: Basket and order calculation have been redesigned with a more modular approach geographic zone algorithms for tax and shipping calculations have been re-developed. The Store service has been split in three services (store, basket, order). Product start and end dates have been added. For variant products a unique code has been introduced for the top (variable) product, product attributes can now be defined at the top ...GoAgent GUI: GoAgent GUI 1.3.0 Beta: ??????????: Windows XP SP3 Windows Vista SP1 Windows 7 Windows 8 Windows 8.1 ??Windows 8.1???????????? ????: .Net Framework 4.0 http://59.111.20.19/download/17718/33240761/4/exe/69/54/17718_1/dotNetFx40_Full_x86_x64.exe Microsoft Visual C++ 2010 Redistributable Package http://59.111.20.23/download/4894298/8555584/1/exe/28/176/1348305610524_944/vcredist_x86.exe ???????????????。 ?????????????,??????????,??????????????(????) ?????Windows XP?Windows 7????,???????????,?issue??????。uygw@outlook.comLINQ to Twitter: LINQ to Twitter v2.1.08: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.AndroidCopyAndPaste: Capood: Erstes Release für Evaluation --> Changeset 26675fastJSON: v2.0.17: 2.0.17 - added serialization of static fields and properties - added dynamic object support and testKartris E-commerce: Kartris v2.5002: This is the release version of Kartris with the responsive interface based on Zurb's Foundation framework. It will work on any screen size, down to and including mobile devices such as the iphone. We've also fixed issues with Apple iOS products being treated by ASP.NET as 'downscale' devices, which broke some functionality in Kartris. This release also includes native Bitcoin support (no third party services required - it communicates directly with the standard Bitcoin client via a JSON API)...VG-Ripper & PG-Ripper: PG-Ripper 1.4.15: changes NEW: Added Support for "ImgChili.net" links NEW: Added Support for "ImgBabes.com" linksAcDown?????: AcDown????? v4.4.3: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.4.3 ?? ??Bilibili????????????? ???????????? ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2.10?...Magick.NET: Magick.NET 6.8.6.601: Magick.NET linked with ImageMagick 6.8.6.6. 
These zip files are also available as a NuGet package: https://nuget.org/profiles/dlemstra/C# Intellisense for Notepad++: Initial release: Members auto-complete Integration with native Notepad++ Auto-Completion Auto "open bracket" for methods Right-arrow to accept suggestions51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.19.4: One Click Install from NuGet This release introduces the 51Degrees.mobi IIS Vary Header Fix. When Compression and Caching is used in IIS, the Vary header is overwritten, making intelligent caching with dynamic content impossible. Find out more about installing the Vary Header fix. Changes to Version 2.1.19.4Handlers now have a ‘Count’ property. This is an integer value that shows how many devices in the dataset that use that handler. Provider.cs -> GetDeviceInfoByID to address a problem w...Centrify DirectControl PowerShell Module: 1.5.709: -fix- Computer password is correctly set when preparing Computer account for Self-Service join and hostname is longer than 14 charsDLog: DLog v1.0: Log v1.0SalarDbCodeGenerator: SalarDbCodeGenerator v2.1.2013.0719: Version 2.1.2013.0719 2013/7/19 Pattern Changes: * DapperContext pattern is added. * All patterns are updated to work with one-to-one relations. Changes: * One-to-one relation is supported. * Minor bug fixes.Player Framework by Microsoft: Player Framework for Windows and WP (v1.3 beta 2): Includes all changes in v1.3 beta 1 Additional support for Windows 8.1 Preview New API (JS): addTextTrack New API (JS): msKeys New API (JS): msPlayToPreferredSourceUri New API (JS): msSetMediaKeys New API (JS): onmsneedkey New API (Xaml): SetMediaStreamSource method New API (Xaml): Stretch property New API (Xaml): StretchChanged event New API (Xaml): AreTransportControlsEnabled property New API (Xaml): IsFullWindow property New API (Xaml): PlayToPreferredSourceUri proper...Outlook 2013 Add-In: Multiple Calendars: As per popular request, this new version includes: - Support for multiple calendars. This can be enabled in the configuration by choosing which ones to show/hide appointments from. In some cases (public folders) it may time out and crash, and so far it only supports "My Calendars", so not shared ones yet. Also they're currently shown in the same font/color so there are no confusions with color categories, but please drop me a line on any suggestions you'd like to see implemented. - Added fri...Circuit Diagram: Circuit Diagram 2.0 Beta 2: New in this release: Show grid in editor Cut/copy/paste support Bug fixesOrchard Project: Orchard 1.7 RC: Planning releasedTerminals: Version 3.1 - Release: Changes since version 3.0:15992 Unified usage of icons in user interface Added context menu in Organize favorites grid Fixed:34219 34210 34223 33981 34209 Install notes:No changes in database (use database from release 3.0) No upgrade of configuration, passwords, credentials or favorites See also upgrade notes for release 3.0New ProjectsAerCloud.net FaceTools Client - Python, Linux & Windows: This project source code provides a step by step guide for using AerCloud.net Framework as a Service API. For more information please visit http://www.aercloudCode Snippet Manager: It is a simple Code Snippet Manager.. that I use when presenting tutorials. Features: - Insert a code snippet from the clipboard into program - Delete code sniDeutschWortSammler: A tiny English-German dictionary.DNN Testimonials: Virtual Essentials is a custom software development company. 
We strive to put joy back into the lives of people by removing the pain points through technologyDXWTEC: legsGoldenDream TeamWork: CourseWork for DB Course in Telerik Academy. Kpaint: Our work with Kpaint has proven to have provide highly accurate results mostly with the exception of occasion jittery of the paint brush. The voice recognition LAura: LAura checks the presence status of the currently signed in User and colorizes the Windows Window Color.MAPIFolders: MAPIFolders is intended as a successor to PFDAVAdmin and ExFolders.Microsoft Lab of Things Analytics: This is a project by students at University College London. We are creating analytics that will add value to home automation leveraging Microsoft Lab of Things.MySitelongdream: My siteMyTestSolution: this is a solution used just for laboratory purposeOrchestrator EventLog Integration Pack: Orchestrator Integration Pack to write spezific events in any Event Log.Planter's Supermarket: sdfResize images before upload: It will resize all images in folder recursively and store them in a store accountsampletranscodeffmpeg: a project for transcode for joyseeSCSM Perf Test Harness: Source Code for Travis Wright's LoadGen Tool for Service Manager. Simple Compression: Simple Compression is a .Net library that provides compression methods for use in an application.The Memory Game: This was my first WPF application.What day of the week is your birthday?: A program that can find the day of the week you were born!!!!!Windows Recovery Software: Recover Data Software for windows helps you to recover all type of data which has been lost due to several reasons.Xjs: Javascript framework to allow faster web development on the browser side

    Read the article

  • Building Publishing Pages in Code

    - by David Jacobus
Originally posted on: http://geekswithblogs.net/djacobus/archive/2013/10/27/154478.aspx

One of the mantras we developers try to follow: ensure that the solution package we deliver to the client is complete.  We build Web Parts, Master Pages, Images, CSS files and other artifacts that we push to the client with a WSP (Solution Package), and then we have them finish the solution by building their site pages and adding the web parts to those pages.       I am a proponent that we, the developers, should minimize this time-consuming work and build these site pages in code.  I found a few blogs and some MSDN documentation, but not really a complete solution that has all these artifacts working together.   What I will discuss, and provide a solution for, is a package that has: 1.  Master Page 2.  Page Layout 3.  Page Web Parts 4.  Site Pages   Almost all of it done in code, without the development team or the developers having to finish up the site-building process by spending a few hours or days completing the site!  I am not implying that we skip this during development. In fact, we build these pages incrementally, testing our web parts, etc. I am saying that the final action in our solution is to take all these artifacts and add them to the site pages in code; the client then only needs to activate a few features and VOILA, their site appears!  I had a project that had me build 8 pages like this as part of the solution.   In this blog post, I am taking a master page solution that I have called DJGreenMaster.  On my Office 365 development site it looks like this:     It is a generic master page for a SharePoint 2010 site, along with a three-column layout, centered, with a footer that uses a SharePoint list and web part for the footer links.  I use this master page a lot in my site development!  It is easy to change the color and site logo with a little CSS.   I am going to add a few web parts for discussion purposes and then add these web parts to a site page in code.    Let's look at the solution package for DJGreenMaster as that will be the basis project for building the site pages:   What you are seeing is a complete solution to add a Master Page to a site collection, which contains: 1.  A Master Page Module which contains the Master Page and Page Layout 2.  The Footer Module to add the Footer Web Part 3.  Miscellaneous modules to add images, jQuery, CSS and a subsite page 4.  Three features and two feature event receivers: a.  DJGreenCSS, used to add the master page CSS file to the Style Sheet Library, with an Event Receiver to check it in. b.  DJGreenMaster, used to add the Master Page and Page Layout.  Its Event Receiver changes the master page to DJGreenMaster, creates the footer list and checks the files in. c.  DJGreenMasterWebParts, which adds the Footer Web Part to the site collection. I won't go over the code for this as I will give it to you at the end of this blog post. I have discussed creating a list in code in a previous post.  So what we have is the basis to begin what is germane to this discussion.  I have the first two requirements completed.  I now need to add the page web parts and build the pages in code.  For the page web parts, I will use one downloaded from CodePlex which, for simplicity, does not use a SharePoint custom list:   the Weather Web Part; and another downloaded from MSDN which is a SharePoint Custom Calendar Web Part, to which I had to add some functionality (using jQuery) to color-code events beyond the built-in 10 calendar overlays!    
[Screenshots in the original post: the solution with the two added web part projects, the Weather Web Part deployed, and the Site Calendar with jQuery.]

Okay, now we get to the final item: creating the publishing pages. We add a feature to the DJGreenMaster project, which I will name DJSitePages, along with an event receiver. We will build the pages at the site collection level, and all of the necessary code is contained in the event receiver. Add a reference to Microsoft.SharePoint.Publishing.dll, found in the ISAPI folder of the 14 hive.

First we add two static helper methods that we will call from the event receiver:

    private static void checkOut(string pagename, PublishingPage p)
    {
        if (p.Name.Equals(pagename, StringComparison.InvariantCultureIgnoreCase))
        {
            if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.None)
            {
                p.CheckOut();
            }

            if (p.ListItem.File.CheckOutType == SPFile.SPCheckOutType.Online)
            {
                p.CheckIn("initial");
                p.CheckOut();
            }
        }
    }

    private static void checkin(PublishingPage p, PublishingWeb pw)
    {
        SPFile publishFile = p.ListItem.File;

        if (publishFile.CheckOutType != SPFile.SPCheckOutType.None)
        {
            publishFile.CheckIn("CheckedIn");
            publishFile.Publish("published");
        }

        // If content approval is enabled on the Pages library of the
        // publishing site, the file also needs to be approved.
        if (pw.PagesList.EnableModeration)
        {
            publishFile.Approve("Initial");
        }
        publishFile.Update();
    }

In a publishing site, check-out and check-in are required when dealing with pages, which is what these helpers handle. Now let's look at the FeatureActivated event receiver:

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        object oParent = properties.Feature.Parent;

        if (properties.Feature.Parent is SPWeb)
        {
            currentWeb = (SPWeb)oParent;
            currentSite = currentWeb.Site;
        }
        else
        {
            currentSite = (SPSite)oParent;
            currentWeb = currentSite.RootWeb;
        }

        // Create the publishing pages.
        CreatePublishingPage(currentWeb, "Home.aspx", "ThreeColumnLayout.aspx", "Home");
        //CreatePublishingPage(currentWeb, "Dummy.aspx", "ThreeColumnLayout.aspx", "Dummy");
    }

Basically, we call the CreatePublishingPage method with four parameters: the current web, the name of the page, the page layout and the title of the page.
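The FeatureActivated override above stores the feature's parent objects in two instance fields and calls methods defined alongside it. As a point of reference, a minimal skeleton for the DJSitePages receiver class might look like the following; the class and field names are assumptions, since the post only shows the method bodies:

    using System;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Publishing;

    public class DJSitePagesEventReceiver : SPFeatureReceiver
    {
        // Fields used by FeatureActivated and the methods it calls.
        private SPSite currentSite;
        private SPWeb currentWeb;

        // FeatureActivated, CreatePublishingPage, BuildHomePage,
        // checkOut and checkin (shown in this post) all live here.
    }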
Let's look at the CreatePublishingPage method:

    private void CreatePublishingPage(SPWeb site, string pageName, string pageLayoutName, string title)
    {
        PublishingSite pubSiteCollection = new PublishingSite(site.Site);
        PublishingWeb pubSite = null;

        // Assign an object to the pubSite variable only if this is a publishing web.
        if (PublishingWeb.IsPublishingWeb(site))
        {
            pubSite = PublishingWeb.GetPublishingWeb(site);
        }
        if (pubSite == null)
        {
            return;
        }

        // Search for the page layout on which the new page will be based.
        PageLayout currentPageLayout = FindPageLayout(pubSiteCollection, pageLayoutName);
        // If the page layout cannot be found, return, because the page
        // has to be based on an existing page layout.
        if (currentPageLayout == null)
        {
            return;
        }

        // If the page already exists, there is nothing to do.
        PublishingPageCollection pages = pubSite.GetPublishingPages();
        foreach (PublishingPage p in pages)
        {
            if (p.Name == pageName) return;
        }

        PublishingPage newPage = pages.Add(pageName, currentPageLayout);
        newPage.Description = pageName.Replace(".aspx", "");
        // Here you can set some properties like:
        newPage.IncludeInCurrentNavigation = true;
        newPage.IncludeInGlobalNavigation = true;
        newPage.Title = title;

        // Build the page.
        switch (pageName)
        {
            case "Home.aspx":
                checkOut(pageName, newPage);
                BuildHomePage(site, newPage);
                break;
            default:
                break;
        }
        // newPage.Update();
        // Now we can check the newly created page into the Pages library.
        checkin(newPage, pubSite);
    }

Here is the narrative of what is going on:

1.  Find out whether we are dealing with a publishing web.
2.  Get the page layout.
3.  Create the page in the Pages library.
4.  Build the page based on its name. (This is where methods for building additional pages can be added.)

In the switch we call BuildHomePage, where all the work of adding the web parts is done. Before adding the web parts, we need to add references to the two web part projects in the solution:

    using WeatherWebPart.WeatherWebPart;
    using CSSharePointCustomCalendar.CustomCalendarWebPart;

We can then reference them in the BuildHomePage method.
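CreatePublishingPage relies on a FindPageLayout helper that is not shown in the post. A minimal sketch of such a helper, assuming the layout is matched by file name against the site collection's available page layouts (the author's actual implementation may differ), could look like this:

    // Lives in the same receiver class; uses Microsoft.SharePoint.Publishing and System.
    private static PageLayout FindPageLayout(PublishingSite pubSiteCollection, string pageLayoutName)
    {
        // Walk the page layouts in the site collection and match on file name.
        foreach (PageLayout layout in pubSiteCollection.GetPageLayouts(true))
        {
            if (layout.Name.Equals(pageLayoutName, StringComparison.InvariantCultureIgnoreCase))
            {
                return layout;
            }
        }
        return null; // the caller treats null as "layout not found"
    }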
Let's look at BuildHomePage:

    private static void BuildHomePage(SPWeb web, PublishingPage pubPage)
    {
        // Get the web part manager for the page. (For other pages, get that
        // page's manager and add its web parts the same way.)
        SPLimitedWebPartManager mgr = web.GetLimitedWebPartManager(
            web.Url + "/Pages/Home.aspx",
            System.Web.UI.WebControls.WebParts.PersonalizationScope.Shared);

        WeatherWebPart.WeatherWebPart.WeatherWebPart wwp =
            new WeatherWebPart.WeatherWebPart.WeatherWebPart()
            { ChromeType = PartChromeType.None, Title = "Todays Weather", AreaCode = "2504627" };
        //Dictionary<string, string> wwpDic = new Dictionary<string, string>();
        //wwpDic.Add("AreaCode", "2504627");
        //setWebPartProperties(wwp, "WeatherWebPart", wwpDic);

        // Add the web part to a page layout web part zone, identified by its zone ID.
        mgr.AddWebPart(wwp, "g_685594D193AA4BBFABEF2FB0C8A6C1DD", 1);

        CSSharePointCustomCalendar.CustomCalendarWebPart.CustomCalendarWebPart cwp =
            new CustomCalendarWebPart()
            { ChromeType = PartChromeType.None, Title = "Corporate Calendar", listName = "CorporateCalendar" };

        mgr.AddWebPart(cwp, "g_20CBAA1DF45949CDA5D351350462E4C6", 1);

        pubPage.Update();
    }

Here is what we are doing:

1.  Get a reference to the SPLimitedWebPartManager for Home.aspx.
2.  Instantiate a new Weather Web Part and use the manager to add it to the page in a web part zone identified by ID; this is why we need a page layout whose zone IDs the developer knows.
3.  Instantiate the Calendar Web Part and use the manager to add it to the page.
4.  Call the publishing page's Update method.
5.  Lastly, the CreatePublishingPage method checks in the page just created.

[Screenshot in the original post: the home page right after a deploy.]

Okay! I know we could make a home page look much better. However, I built this whole integrated solution in less than a day, with the caveat that the Green Master was already built. So what am I saying? Build your web parts, master pages and other artifacts, and at the very end of the engagement build the pages in code. The client will be very happy! Here is the code for this solution: Code
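The commented-out lines in BuildHomePage refer to a setWebPartProperties helper that is not shown in the post; the object-initializer syntax used above makes it unnecessary when the web part types are referenced at compile time. If you do want to set properties from a name/value dictionary (for example, when property values come from configuration), a reflection-based sketch along these lines would do it; the signature is taken from the commented-out call, and the implementation is an assumption:

    // Hypothetical implementation of the setWebPartProperties helper referenced
    // in the commented-out code: sets public writable properties on a web part
    // by name, converting string values to each property's type.
    // Uses System, System.Collections.Generic and System.Reflection.
    private static void setWebPartProperties(
        System.Web.UI.WebControls.WebParts.WebPart webPart,
        string webPartName,                       // kept only to match the original call; unused here
        Dictionary<string, string> properties)
    {
        foreach (KeyValuePair<string, string> pair in properties)
        {
            PropertyInfo prop = webPart.GetType().GetProperty(pair.Key);
            if (prop == null || !prop.CanWrite)
            {
                continue; // skip unknown or read-only properties
            }
            object value = Convert.ChangeType(pair.Value, prop.PropertyType);
            prop.SetValue(webPart, value, null);
        }
    }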

