Search Results

Search found 10494 results on 420 pages for 'beyond the documentation'.


  • Redaction in AutoVue

    - by [email protected]
    As the trend to digitize all paper assets continues, so does the push to digitize all the processes around these assets. One such process is redaction - removing sensitive or classified information from documents. While for some this may conjure up thoughts of old CIA documents filled with nothing but blacked-out pages, there are actually many uses for redaction today beyond military and government. Many companies need to remove names, phone numbers, social security numbers, credit card numbers, etc. from documents that are being scanned in and/or released to the public or to less privileged users - insurance companies, banks and legal firms are a few examples.

    The process of digital redaction actually isn't that far from the old paper method:

    Step 1. Find a folder with a big red stamp on it labeled "TOP SECRET"
    Step 2. Make a copy of that document, since some folks still need to access the original contents
    Step 3. Black out the text or pages you want to hide
    Step 4. Release or distribute this new 'redacted' copy

    So where does a solution like AutoVue come in? Well, we've really been doing all of these things for years!

    1. With AutoVue's VueLink integration and iSDK, we can integrate to virtually any content management system and view documents of almost any format with a single click. Finding the document and opening it in AutoVue: CHECK!
    2. With AutoVue's markup capabilities, adding filled boxes (or other shapes) around certain text is a no-brainer. You can even leverage AutoVue's powerful APIs to automate the addition of markups over certain text or pre-defined regions. Black out the text you want to hide: CHECK!
    3. With AutoVue's conversion capabilities, you can 'burn in' the comments into a new file, either as a TIFF, JPEG or PDF document. Burning in the redactions avoids slip-ups like the recent (well-publicized) TSA one. Through our tight integrations, the newly created copies can be directly checked into the content management system with no manual intervention. Make a copy of that document: CHECK!
    4. Again, leveraging AutoVue's integrations, we can now define rules in the system based on a user's privileges. An 'authorized' user wishing to view the document from the repository will get exactly that - no redactions. An 'unauthorized' user, when requesting to view that same document, can be redirected to open the redacted copy of the same document. Release or distribute the new 'redacted' copy: CHECK!

    See this movie (WMV format, 2 mins 20 secs, no audio) for a quick illustration of AutoVue's redaction capabilities. It shows how redactions can be added based on text searches, manual input or pre-defined templates/regions. Let us know what you think in the comments. And remember - this is all in our flagship AutoVue product - no additional software required!
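    As a rough illustration of the text-search style of redaction described above (and not AutoVue's actual API, which isn't shown in this post), here is a minimal sketch that locates sensitive patterns in extracted document text so that markup boxes could be drawn over them. The patterns and helper names are hypothetical choices for the example.

        import re

        # Hypothetical patterns for the kinds of data the post mentions
        # (SSNs and credit card numbers); real deployments would tune these.
        SENSITIVE_PATTERNS = {
            "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        }

        def find_redaction_targets(text):
            """Return (label, start, end) spans that a redaction markup
            (e.g. a filled box) should cover in the extracted text."""
            targets = []
            for label, pattern in SENSITIVE_PATTERNS.items():
                for match in pattern.finditer(text):
                    targets.append((label, match.start(), match.end()))
            return sorted(targets, key=lambda t: t[1])

        def redact(text, targets, mask="#"):
            """Produce the 'burned-in' copy: replace each span with a mask."""
            chars = list(text)
            for _, start, end in targets:
                chars[start:end] = mask * (end - start)
            return "".join(chars)

        if __name__ == "__main__":
            sample = "Claimant SSN 123-45-6789, card 4111 1111 1111 1111, policy A-17."
            spans = find_redaction_targets(sample)
            print(spans)
            print(redact(sample, spans))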

    Read the article

  • Oracle Functional Testing Suite Advanced Pack for Oracle EBS Now Available

    - by Anne Carlson (Oracle Development)
    There’s news about automated testing of E-Business Suite using the Oracle Application Testing Suite, a.k.a. “OATS”. E-Business Suite Development is pleased to announce the availability of the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite. The new pack, available with the latest release of Oracle Application Testing Suite (12.4.0.2), provides pre-built test components and flows to automate the in-depth testing of Oracle E-Business Suite applications. Designed for use with the Oracle Application Testing Suite and its Oracle Flow Builder capability, these pre-built components and flows can help Oracle E-Business Suite customers significantly reduce the time and effort needed to create and maintain automated test scripts. The Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite is available now for EBS 12.1.3, and availability for EBS 12.2 is planned.

    Some Background on Automating Testing with Oracle Application Testing Suite and Oracle Flow Builder

    Testing complex packaged applications like Oracle E-Business Suite can be time-consuming and challenging for organizations, hampering their ability to upgrade to the latest releases or apply the latest patches. Oracle Application Testing Suite offers organizations a unique and powerful testing platform for Oracle E-Business Suite and other Oracle applications. With the 12.3.0.1 release of Oracle Application Testing Suite, we introduced the Oracle Flow Builder testing framework and an accompanying starter pack of pre-built test components and flows. The starter pack, which contains over 2000 components and 200 flows, provides broad coverage of commonly used base functionality and is designed to jump-start the test automation effort. Using Oracle Flow Builder, even non-technical testers can create working test scripts using the pre-built components that Oracle provides. Each component represents an atomic test operation such as “create an invoice batch” or “apply an invoice hold.” Testers can assemble the pre-built components into test flows, and combine test flows with spreadsheet data to drive the testing of multiple data conditions. The Oracle Flow Builder framework allows customers to add, modify and extend the pre-built components to address new functionality and customizations of the Oracle E-Business Suite. Using Oracle Flow Builder’s component-based test generation framework instead of a traditional record/playback approach has allowed the EBS Quality Assurance team to reduce their test automation effort by 60%. E-Business Suite customers can significantly reduce their test automation effort using Oracle Application Testing Suite with Oracle Flow Builder and the pre-built test components and flows that Oracle provides.

    Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite Improves Test Coverage

    With the Oracle Application Testing Suite 12.4.0.2 and the new Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite, we are now delivering a significant number of additional test components and flows beyond those contained in the Oracle Flow Builder starter pack.
    These additional test components and flows provide 70-80% test coverage and enable the automation of detailed and complex test flows across the following Oracle E-Business Suite products:

    - Oracle Asset Lifecycle Management
    - Oracle Channel Revenue Management
    - Oracle Discrete Manufacturing
    - Oracle Incentive Compensation
    - Oracle Lease and Finance Management
    - Oracle Process Manufacturing
    - Oracle Procurement
    - Oracle Project Management
    - Oracle Property Manager
    - Oracle Service

    Downloads

    You can download the Oracle Functional Testing Suite Advanced Pack for Oracle E-Business Suite from the Oracle Technology Network.

    References

    - Oracle Applications Testing Suite
    - YouTube: Oracle Flow Builder Training
    - YouTube: Oracle Applications Testing Suite and Flow Builder Demonstration
    - Oracle Functional Testing Suite Advanced Pack Readme for E-Business Suite, Note 1905989.1

    Related Articles

    - Automate Testing Using Oracle Application Testing Suite with Flow Builder for E-Business Suite
    - EBS 12.1.1 Test Starter Kit Now Available for Oracle Applications Testing Suite
    - Oracle Application Testing Suite 9.0 Supported with Oracle E-Business Suite
    - Using the Oracle Application Testing Suite with EBS: Interim Update #1

    Read the article

  • Application Demos in UPK

    - by [email protected]
    Over the years, User Productivity Kit has expanded to include solutions to many project challenges. As of UPK 3.6.1, solutions are provided for pre- and post-application go-live learning, application testing, system documentation, presentation output, and more. New in UPK 3.6.1 are additional features that can be used effectively for application demo purposes. This can come in handy when you need to do a demo but don't want to show, or can't show, the live application. Maybe you're doing a presentation for a group of project stakeholders and want to focus on the business workflow implemented by the application rather than the mechanics of using it. Or possibly, you need to show the application but you're disconnected from any network, preventing you from running the live application. In any of these cases, a presentation aid that represents the live application is what's needed.

    Previous versions of the UPK topic player would allow you to do this but would always show those UPK user interface elements that help a user learn the application. When you're presenting the narrative live, the UPK bubbles can be a distraction. UPK 3.6.1 provides some new features that allow you to control whether the bubbles display.

    There are two ways to hide bubbles in a topic. The first is a topic property that allows you to control bubbles across the entire topic. There are three settings for the Show Bubbles topic property. The default setting is Use frame settings, which allows you to control whether bubbles display on a frame-by-frame basis. When you choose Always, the bubbles will always display regardless of the frame setting. The final choice is Never. Choosing Never will hide every bubble in your topic with one setting change. As with Always, choosing Never will ignore the frame setting.

    The second way to control the bubbles is at the frame level. First ensure that the topic's Show Bubbles property is set to Use frame settings. Navigate to the frame on which you want to turn off the bubble and click the Display bubble for this frame button to turn off the bubble. When you play the topic, the bubble will no longer be displayed. Depending on your needs, you might also use another longstanding UPK feature that allows you to control whether the action area displays on a frame. Just click the Action area on/off button to toggle its display.

    I've found the frame properties to be useful beyond creating presentation aids. When creating "See It!"-only topics for more advanced users, I may hide the bubbles on some of the more straightforward frames. For example, if I have a form where one needs to fill out an address, I may display the first bubble in the sequence and explain what the subsequent steps are doing. I then hide bubbles on the remaining frames, which are the more mechanical steps of entering the address.

    We'd like to hear your thoughts on this new UPK feature. Use the comments below to tell us how you've used it.

    John Zaums
    Senior Director, Product Development, Oracle User Productivity Kit

    Read the article

  • Something for the weekend - What's the most complex query?

    - by simonsabin
    Whenever I teach about SQL Server performance tuning I try to get across the message that there is no such thing as a table. Does that sound odd? Well it isn't, trust me. Rather than tables you need to consider structures. You have:

    1. Heaps
    2. Indexes (b-trees)

    Some people split indexes in two, clustered and non-clustered. This I feel confuses the situation, as people associate clustered indexes with sorting but don't associate non-clustered indexes with sorting; this is wrong. Clustered and non-clustered indexes are the same b-tree structure (and even more so with SQL 2005), with the leaf pages sorted in a linked list according to the keys of the index. The difference is that non-clustered indexes include in their structure either the clustered key(s) or the row identifier for the row in the table (see http://sqlblog.com/blogs/kalen_delaney/archive/2008/03/16/nonclustered-index-keys.aspx for more details). Beyond that they are the same: they have key columns which are stored on the root and intermediary pages, and included columns which are on the leaf level.

    The reason this is important is that this is how the optimiser sees the world, which means it can use any of these structures to resolve your query. Even if your query only accesses one table, the optimiser can access multiple structures to get your results. One commonly sees this with a non-clustered index scan and then a key lookup (clustered index seek), but importantly it's not restricted to just using one non-clustered index and the clustered index or heap, and that's the challenge for the weekend.

    So the challenge for the weekend is to produce the most complex single table query. For those clever bods amongst you that are thinking "great, I will just use lots of XQuery functions", sorry, these are the rules:

    1. You have to use a table from AdventureWorks (2005 or 2008)
    2. You can add whatever indexes you like, but you must document these
    3. You cannot use XQuery, Spatial, HierarchyId, Full Text or any open rowset function
    4. You can only reference your table once, i.e. a FROM clause with ONE table and no JOINs
    5. No sub queries

    The aim of this is to show how the optimiser can use multiple structures to build the results of a query and to also highlight why the optimiser is doing that. How many structures can you get the optimiser to use? As an example, create these two indexes on AdventureWorks2008:

        create index IX_Person_Person on Person.Person (LastName, FirstName, NameStyle, PersonType)
        create index IX_Person_Person_2 on Person.Person (BusinessEntityId, ModifiedDate)

    and then run:

        select lastName, ModifiedDate
          from Person.Person
         where LastName = 'Smith'

    You will see that the optimiser has decided not to access the underlying clustered index of the table but to use the two indexes above to resolve the query. This highlights how the optimiser considers all storage structures - clustered indexes, non-clustered indexes and heaps - when trying to resolve a query. So are you up to the challenge for the weekend to produce the most complex single table query? The prize is a pdf version of a popular SQL Server book, or a physical book if you live in the UK.

    Read the article

  • GPGPU

    What

    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn’t have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

    Why

    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal").

    The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, perform the costly operation and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.

    However, GPGPUs have different characteristics to CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and vice versa the results back to the CPU), so the computation itself has to be long enough to justify the overhead transfer costs. If your problem space fits the criteria then you probably want to check out this technology.

    How

    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors ATI (owned by AMD) and NVIDIA are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.

    If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.

    On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.

    Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.
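    To make the kernel/host split concrete, here is a minimal sketch of the same idea in Python using the PyCUDA bindings (not the DirectCompute/HLSL route discussed above). It assumes an NVIDIA GPU with the CUDA toolkit plus the pycuda and numpy packages; the kernel itself is plain CUDA C embedded as a string, and the host code pays the CPU-to-GPU transfer cost around the launch.

        import numpy as np
        import pycuda.autoinit              # creates a CUDA context on the default GPU
        import pycuda.driver as drv
        from pycuda.compiler import SourceModule

        # The "kernel": a data-parallel function compiled for and run on the GPU.
        mod = SourceModule("""
        __global__ void scale(float *out, const float *in, float factor, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
            if (i < n)
                out[i] = in[i] * factor;
        }
        """)
        scale = mod.get_function("scale")

        # Host side: prepare input, transfer to the GPU, launch, copy results back.
        n = 1 << 20
        data = np.random.randn(n).astype(np.float32)
        result = np.empty_like(data)

        threads_per_block = 256
        blocks = (n + threads_per_block - 1) // threads_per_block
        scale(drv.Out(result), drv.In(data), np.float32(2.0), np.int32(n),
              block=(threads_per_block, 1, 1), grid=(blocks, 1))

        assert np.allclose(result, data * 2.0)

    The work here is trivially data parallel (no branching across elements), which is exactly the class of problem the post says suits a GPGPU; for such a small computation the transfer cost would normally outweigh the speedup.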

    Read the article

  • Visual Studio & TFS – List of addins, extensions, patches and hotfixes – Latest and Greatest

    - by terje
    This post is a list of the addins and extensions we (I) recommend for use in Inmeta. It's coming up all the time – what to install, where are the download sites, etc. etc. – and thus I thought it better to post it here and keep it updated. The basics are Visual Studio 2010 connected to a Team Foundation Server 2010. The edition of Visual Studio I use is the Ultimate Edition, but as many stay with the Premium Edition I've marked the extensions which only work with the Ultimate. I've also split the group into Recommended (which means Required), Optional (which means Recommended) and Nice to Have (which means Optional). The focus is to get a setup which can be used for a complete coding experience for the whole ALM process. The Code Gallery is found either through the Tools/Extension Manager menu in Visual Studio or through this link. The ones to really download are those in the Recommended category. Then consider the Optional based on your needs. The list of course reflects what I use for my work, so it is by no means complete, and for some of the tools there are equally useful alternatives. The components directly associated with Visual Studio from Microsoft should be common, see the Microsoft column.

    Product | Available on Code Gallery | Latest Version | License | Rec/Opt/N2H | Applicable to | Microsoft
    TFS Power Tools Sept 2010 | Complete setup msi on link, split into parts on CG | Sept 2010 | Free | Recommended | TFS integration | Yes
    Productivity Power Tools | Yes | 10.0.11019.3 | Free | Recommended | Coding | Yes
    Code Contracts | No | 1.4.30903 | Free | Recommended | Coding & Quality | Yes
    Code Contracts Editor Extensions | Yes | 1.4.30903 | Free | Recommended | Coding & Quality | Yes
    VSCommands | Yes | 3.6.4.1 | Lite version Free (Good enough) | Nice to have | Coding | No
    Power Commands | Yes | 1.0.2.3 | Free | Recommended | Coding | Yes
    FeaturePack 2 | No. MSDN Subscriber download under Visual Studio 2010 FP2 | | Part of MSDN Subscription | Recommended | Modeling & Testing | Yes
    ReSharper | No (Trial only) | 5.1.1 | Licensed | Recommended | Coding & Quality | No
    dotTrace | No | 4.0.1 | Licensed | Optional | Quality | No
    NDepends | No (Trial only) | | Licensed | Optional | Quality | No
    tangible T4 editor | Yes | 1.950 | Lite version Free (Good enough) | Optional | Coding (T4 templates) | No
    Reflector | No (Trial of Pro version only) | 6.5 | Lite version Free (Good enough) | Recommended | Coding/Investigation | No
    LinqPad | No | 4.26.2 | Licensed | Nice to have | Coding | No
    Beyond Compare | No | 3.1.11 | Licensed | Recommended | Coding/Investigation | No
    Pex and Moles | No (Moles available alone on CG). Complete on MSDN Subscriber download under Visual Studio 2010 | 0.94.51023 | Part of MSDN Subscription | Optional | Coding & Unit Testing | Yes
    ApexSQL | No | | Licensed | Nice to have | SQL | No

    Some important Patches, upgrades and fixes

    Product | Date | Information | Rec/Opt | Applicable to
    Scrolling context menu KB2345133 and KB2413613 | October 2010 | Here | Recommended | Visual Studio
    MTM Patch | October 2010 | Here and here, KB2387011 | Recommended (if you use MTM) |
    MTM Data warehouse fix | June 2010 | Iteration dates fail with SQL 2008 R2. KB2222312. Affects Burndown chart in Agile workbook | Only for SQL 2008 R2 | Server
    Upgrade 2008 to 2010 issue and hotfix | August 2010 | Fixes problems with labels and branches which are lost during upgrade. Apply before upgrade. Note: This has been fixed in the latest re-release of the TFS Server dated Aug 5th 2010. See here. Recommends downloading the latest bits. | Only if upgrading from 2008 from earlier bits | Server

    Read the article

  • Text Expansion Awareness for UX Designers: Points to Consider

    - by ultan o'broin
    Awareness of translated text expansion dynamics is important for enterprise applications UX designers (I am assuming all source text for translation is in English, though apps development can take place in other natural languages too). This consideration goes beyond the standard 'character multiplication' rule and must take into account the avoidance of other layout tricks that a designer might be tempted to try. Follow these guidelines.

    For general text expansion, remember the simple rule that the shorter the word is in English, the longer it will need to be in the translated language. See the examples provided by Richard Ishida of the W3C and you'll get the idea. So, forget the 30 percent or one inch minimum expansion rule of the old Forms days. Unfortunately, remembering convoluted text expansion rules, based as a percentage of the US English character count, can be tough going. Try these:

    Up to 10 characters: 100 to 200%
    11 to 20 characters: 80 to 100%
    21 to 30 characters: 60 to 80%
    31 to 50 characters: 40 to 60%
    51 to 70 characters: 31 to 40%
    Over 70 characters: 30%
    (Source: IBM)

    So it might be easier to remember a rule that if your English text is less than 20 characters then allow it to double in length (200 percent), and after that assume an increase by half the length of the text (50%). (Bear in mind that ADF can apply truncation rules on some components in English too.)

    (If your text is stored in a database, developers must make sure the table column widths can accommodate the expansion of your text when translated, based on byte size for the translated characters and not numbers of characters. Use Unicode. One character does not equal one byte in the multilingual enterprise apps world.)

    Rely on a graceful transformation of translated text. Let all pages resize dynamically so the text wraps and flows naturally. ADF pages support this already. Think websites.

    Don't hard-code alignments. Use Start and End properties on components and not Left or Right. Don't force alignments of components on the page by using texts of a certain length as spacers. Use proper label positioning and anchoring in ADF components or other technologies.

    Remember that an increase in text length means an increase in vertical space too when pages are resized. So don't hard-code vertical heights for any text areas. Don't be tempted to manually create text or printed reports this way either. They cannot be translated successfully, and are very difficult to maintain in English. Use XML, HTML, RTF and so on. Check out what Oracle BI Publisher offers.

    Don't force wrapping by using tricks such as \n or \t characters or HTML BR tags or forced page breaks. Once the text is translated the alignment will be destroyed. The position of the breaking character or tag would need to be moved anyway, or even removed.

    When creating tables, use table components. Don't use manually created tables that rely on word length to maintain column and row alignment. For example, don't use codeblock elements in HTML; use the proper table elements instead. Once translated, the alignment of manually formatted tabular data is destroyed.

    Finally, if there is a space restriction, then don't use made-up acronyms, abbreviations or some form of daft text speak to save space. Besides being incomprehensible in English, they may need full translations of the shortened words, even if they can be figured out. Use approved or industry standard acronyms according to the UX style rules, not as a space-saving device.
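    As a rough worked example of the expansion table above, here is a small sketch that turns an English string into a suggested character budget for its translation. The helper names and the sample strings are made up for illustration; the percentages are the ones quoted above, treated as growth on top of the English length.

        # Expansion bands quoted above (source: IBM): (max_chars, upper_expansion_percent)
        IBM_EXPANSION_BANDS = [
            (10, 200),
            (20, 100),
            (30, 80),
            (50, 60),
            (70, 40),
        ]
        OVER_70_PERCENT = 30

        def ibm_expansion_percent(english_text: str) -> int:
            """Upper bound of the IBM expansion band for this string's length."""
            length = len(english_text)
            for max_chars, percent in IBM_EXPANSION_BANDS:
                if length <= max_chars:
                    return percent
            return OVER_70_PERCENT

        def suggested_max_length(english_text: str) -> int:
            """Characters to budget in the layout for the translated string."""
            percent = ibm_expansion_percent(english_text)
            return len(english_text) + (len(english_text) * percent) // 100

        if __name__ == "__main__":
            for label in ("Save", "Purchase Order Number",
                          "Review and submit the expense report"):
                print(f"{label!r}: budget ~{suggested_max_length(label)} characters")

    Running this shows why short labels are the risky ones: a four-character label like "Save" needs room to triple, while a 36-character instruction only needs roughly 60% headroom.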
    Restricted Real Estate on Mobile Devices

    On mobile devices real estate is limited. Using shortened text is fine once it is comprehensible. Users in the mobile space prefer brevity too, as they are on the go, performing three-minute tasks, with no time to read lengthy texts. Using fragments, lightening up on unnecessary articles, and getting straight to the point with imperative forms of verbs makes sense on both real estate and user experience grounds.

    Read the article

  • SQL SERVER – What is a Technology Evangelist?

    - by pinaldave
    When you hear that someone is an “evangelist” the first thing that might pop into your mind is the Christian church. In fact, the term did come from Christianity, and basically means someone who spreads the news about their faith. In the technology world, the same definition is true. Technology evangelists are individuals who, professionally or in their spare time, spread the news about the latest new products. Sounds like a salesperson, right? No, they are absolutely different. Salespeople also keep up to date with a large number of people, and like to convince others to buy their product – and some will go to any lengths to sell! An evangelist, on the other hand, is brutally honest about the product, even if sometimes it means not making a sale. An evangelist is out there to tell the TRUTH. A salesperson needs to make sales. An Evangelist offers a Solution independent of the Technology used – a Salesperson offers a Particular Technology.

    With this definition in mind, you can probably think of a few technology evangelists you already know. Maybe it’s a relative or a neighbor, someone who loves keeping up with the latest trends and is always willing to tell you about them if you ask even the simplest question. And, in fact, they probably are evangelists and don’t even know it. For a long time, the work of technology evangelism was in the hands of community and community technology leaders. Luckily, various organizations have now understood the importance of the community and of helping the community reach its goals. This has led them to create the role of “Technology Evangelist”.

    Let me talk about one of the most famous Evangelists of SQL Server technology. A Technology Evangelist belongs only to the technology, above any country, race, location or anything else. They are dedicated to the technology. Vinod Kumar is such a man, who has given a lot to the community. For years he was a Technology Evangelist for Microsoft, and maintained a blog that was dedicated to spreading his enthusiasm for his favorite products. He is one of the most respected Evangelists in the field, and has done a lot of work to define the job for other professionals. Vinod’s career has since progressed to the Microsoft Technology Center (read his post), but he is continuing to be a strong presence in the evangelism community. I have a lot of respect for Vinod. He has done a lot for the community and technology evangelism. Everybody dreams of serving the community the way he does, and he is a great role model for evangelists everywhere.

    On his blog, Vinod created one of the best descriptions of a Technology Evangelist. It defined the position and also made the distinction between evangelist and salesperson extremely clear. I will include the highlights of that list here, because no one can say it better than Vinod:

    Bundle of energy – Passion is their middle name
    Wonderful Story tellers
    Empathy, Trust, Loyalty, Openness, Accessibility and Warmth
    Technology Enthusiast – Doers
    Love people, people and more people – Community oriented
    Unique Style and Leadership qualities !!!
    Self-Confident, Self-Motivated but a student

    (To read the full list, see: Evangelism Beyond Borders with Evangelists)

    His blog is a must-read for anyone interested in technology evangelism as a career or simply a hobby. His advice about how to gain an audience and become a trusted advisor is the best in the business. I think there is an evangelist in everyone. I, too, consider myself a technology evangelist. Regular readers of this blog will recognize that I am dedicated to bringing information to the masses, and that I pride myself on being brutally honest and giving every product fair consideration. I think there is no better way to say it: “Once an Evangelist – Always an Evangelist!”

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Database, MVP, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Evangelist

    Read the article

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a “Program Execution Recorder” that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefit it brings to both developers and testers alike, so I see no value in just repeating this information. In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more information), as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application. I will also be providing some helpful links that cover IntelliTrace in more detail at the end of my article for reference.

    IntelliTrace is set up by default to only capture execution events. Microsoft did such a great job on optimizing its recording process that I haven’t even felt the slightest performance hit with IntelliTrace running as I was debugging my solutions and projects. For my purposes here however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events as shown in Figures 1 and 2. Changing capture options will require us to stop our debugging session and start over for the new settings to take place.

    Figure 1 – Access IntelliTrace options via the Tools->Options menu items
    Figure 2 – Change IntelliTrace Options to capture call information as well as events

    Notice the warning with regards to potentially degrading performance when selecting to capture call information in addition to the default events-only setting. I have found this warning to be true. My subsequent tests showed slowness in page load times compared to rendering those same exact pages with the “event-only” option selected.

    Execution recording is auto-started along with the new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to “playback” the code we have executed so far. For code replay, the first step is to “break” the current execution as shown in Figure 3.

    Figure 3 – Break to replay recording

    A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First off, we start with the event view as shown in Figure 4 until we find an interesting event that needs further studying.

    Figure 4 – Going through IntelliTrace’s events and picking a specific entry of interest

    We now can, for instance, study how the highlighted HTTP GET request is being handled, by clicking on the “Calls View” for that particular event. Notice that IntelliTrace shows us all calls that took place in servicing that GET request. Double clicking on any call takes us to a more granular view of the call stack within that clicked call, up until getting to a specific line of code where we can do a line-by-line replay of the execution from that point onwards using F10 or F11, just like our typical good old VS2008 debugging helped us accomplish.

    Figure 5 – Switching to call view on an event of interest
    Figure 6 – Double clicking on a call shows a more granular view of the call stack

    In conclusion, the introduction of IntelliTrace as a new addition to the VS developer’s tool arsenal enhances the development and debugging experience and effectively tackles the “no-repro” problem. It will also hopefully enhance my audience’s experience listening to me speaking about an MVC2 page lifecycle, which I can now easily visually demonstrate, thereby improving the probability of keeping everybody awake a little longer.

    IntelliTrace References:
    http://msdn.microsoft.com/en-us/magazine/ee336126.aspx
    http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx

    Read the article

  • Things to install on a new machine – revisited

    - by RoyOsherove
    As I prepare to get a new dev machine at work, I write down the things I am going to install on it, before writing the first line of code on that machine:

    Control Freak Tools:
    Everything Search Engine – a free and amazingly fast search engine for files all over your machine (just file names, not inside files). This is so fast I use it almost as a replacement for my start menu, but it’s also great for finding those files that get hidden and tucked away in dark places on my system. Ever had a situation where you needed to see exactly how many copies of X.dll were hiding on your machine and where? This tool is perfect for that.
    Google Chrome – it’s just fast. Very fast. And Firefox has become the IE of alternative browsers in terms of speed and memory. Don’t even get me started on IE.
    TweetDeck – get a complete view of what’s up on twitter
    Total Commander – still my favorite file manager, over five years now.
    KatMouse – will scroll any window you’re hovering on, even if it’s not an active window, when you use the scroll wheel on it.
    PowerIso or Daemon Tools – for loading up ISO images of discs
    LogMeIn Ignition – quick access to your LogMeIn computers

    For online backup: JungleDisk or BackBlaze
    KeePass – save important passwords
    MS Security Essentials – free anti virus that’s quiet and doesn’t make a mess of your system.

    For home:
    uTorrent – a torrent client that can read rss feeds (like the ones from ezrss.it)
    Camtasia Studio and SnagIt – for recording and capturing the screen, and then adding cool effects on top.
    Foxit PDF Reader – much faster than Adobe Reader.
    Toddler Keys (for home) – for when your baby wants to play with your keyboard.
    Live Writer – for writing blog posts

    For Lenovo ThinkPads:
    Lenovo System Update – if you have a “custom” system instead of the one that came built in, this will keep all your Lenovo drivers up to date.

    FileZilla – for FTP stuff
    All the utils from Sysinternals (or try the live links), especially AutoRuns for deciding what’s really going to load at startup, and procmon to see what’s really going on with processes in your system

    Developer stuff:
    Reflector – pure magic. Time saver. See source code of any compiled assembly.
    Resharper – great for productivity and navigation across your source code
    FinalBuilder – a commercial build automation tool. Love it. Much better than any xml based time hog out there.
    TeamCity – a great visual and friendly server to manage continuous integration. Powerful features.
    Test Lint – a free addin for VS 2010 I helped create, that checks your unit tests for possible problems and hints you about it.
    TestDriven.NET – a great test runner for VS 2008 and 2010 with some powerful features.
    VisualSVN – a commercial tool if you use subversion. Very reliable addin for VS 2008 and 2010.
    Beyond Compare – a powerful file and directory comparison tool. I love the fact that you can right click in Windows Explorer on any file and select “select left side to compare”, then right click on another file and select “compare with left side”. Great usability thought!
    PostSharp 2.0 – for adding system wide concepts into your code (tracing, exception management). Goes great hand in hand with..
    SmartInspect – a powerful framework and viewer for tracing for your application. Lots of hidden features.
    Crypto Obfuscator – a relatively new obfuscation tool for .NET that seems to do the job very well.
    Crypto Licensing – from the same company – finally a licensing solution that seems to really fit what I needed. And it works.
    Fiddler 2 – great for debugging and tracing http traffic to and from your app.
    Debugging Tools for Windows and DebugDiag – great for debugging scenarios.

    Still wanting more? I think this should keep you busy for a while.

    Regulator and Regulazy – for testing and generating regular expressions
    Notepad 2 – for quick editing and viewing with syntax highlighting

    Read the article

  • Oracle’s New Memory-Optimized x86 Servers: Getting the Most Out of Oracle Database In-Memory

    - by Josh Rosen, x86 Product Manager-Oracle
    With the launch of Oracle Database In-Memory, it is now possible to perform real-time analytics operations on your business data as it exists at that moment – in the DRAM of the server – and immediately return completely current and consistent data. The Oracle Database In-Memory option dramatically accelerates the performance of analytics queries by storing data in a highly optimized columnar in-memory format. This is a truly exciting advance in database technology.

    As Larry Ellison mentioned in his recent webcast about Oracle Database In-Memory, queries run 100 times faster simply by throwing a switch. But in order to get the most from the Oracle Database In-Memory option, the underlying server must also be memory-optimized. This week Oracle announced new 4-socket and 8-socket x86 servers, the Sun Server X4-4 and Sun Server X4-8, both of which have been designed specifically for Oracle Database In-Memory. These new servers use the fastest Intel® Xeon® E7 v2 processors and each subsystem has been designed to be the best for Oracle Database, from the memory, I/O and flash technologies right down to the system firmware.

    Amongst these subsystems, one of the most important aspects we have optimized with the Sun Server X4-4 and Sun Server X4-8 are their memory subsystems. The new In-Memory option makes it possible to select which parts of the database should be memory optimized. You can choose to put a single column or table in memory or, if you can, put the whole database in memory. The more, the better. With 3 TB and 6 TB total memory capacity on the Sun Server X4-4 and Sun Server X4-8, respectively, you can memory-optimize more, if not your entire database.

    [Figure: Sun Server X4-8 CMOD with 24 DIMM slots per socket (up to 192 DIMM slots per server)]

    But memory capacity is not the only important factor in selecting the best server platform for Oracle Database In-Memory. As you put more of your database in memory, a critical performance metric known as memory bandwidth comes into play. The total memory bandwidth for the server will dictate the rate at which data can be stored and retrieved from memory. In order to achieve real-time analysis of your data using Oracle Database In-Memory, even under heavy load, the server must be able to handle extreme memory workloads. With that in mind, the Sun Server X4-8 was designed with the maximum possible memory bandwidth, providing over a terabyte per second of total memory bandwidth. Likewise, the Sun Server X4-4 also provides extreme memory bandwidth in an even more compact form factor with over half a terabyte per second, providing customers with scalability and choice depending on the size of the database.

    Beyond the memory subsystem, Oracle’s Sun Server X4-4 and Sun Server X4-8 systems provide other key technologies that enable Oracle Database to run at its best. The Sun Server X4-4 allows for up to 4.8 TB of internal, write-optimized PCIe flash while the Sun Server X4-8 allows for up to 6.4 TB of PCIe flash. This enables dramatic acceleration of data inserts and updates to Oracle Database. And with the new elastic computing capability of Oracle’s new x86 servers, server performance can be adapted to your specific Oracle Database workload to ensure that every last bit of processing power is utilized.

    Because Oracle designs and tests its x86 servers specifically for Oracle workloads, we provide the highest possible performance and reliability when running Oracle Database.
To learn more about Sun Server X4-4 and Sun Server X4-8, you can find more details including data sheets and white papers here. Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software.  He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers. 

    Read the article

  • Oracle WebCenter - Well Connected

    - by Brian Dirking
    A good post from Dan Elam on the state of the ECM industry (http://www.aiim.org/community/blogs/community/ECM-Vendors-go-to-War). For those of you who don’t know Dan, he is one of the major forces in the content management industry. He founded eVisory and IMERGE Consulting, he is an AIIM Fellow and a former US Technical Expert to the International Standards Organization (ISO), and has been a driving force behind EmTag, AIIM’s Emerging Technologies Group. His post is interesting – it starts out talking about our Moveoff Documentum campaign, but then it becomes a much deeper insight into the ECM industry.

    Dan points out that Oracle has been making quiet strides in the ECM industry. In fact, analysts share this view of Oracle, pointing out that Oracle is growing at greater than 20% annually while many of the big vendors are shrinking. And as Dan points out, this cements Oracle as one of the big five in the ECM space – the same week that Autonomy was removed from the Gartner Magic Quadrant for ECM.

    One of the key things Dan points out is that Oracle WebCenter is well connected. WebCenter has out-of-the-box connections to key enterprise applications such as E-Business Suite, PeopleSoft, Siebel and JD Edwards. Those out-of-the-box integrations make it easy for organizations to drive content right into the places where it is needed, in the midst of business processes. At the same time, WebCenter provides composite interface capabilities to bring together two or more of these enterprise applications onto the same screen. Combine that with the capabilities of Oracle Social Network, and you start to see how Oracle is providing a full platform for user engagement.

    But beyond those connections, WebCenter can also connect to other content management systems. It can index and search those systems from a single point of search, bringing back results in a single combined hitlist. WebCenter can also extend records management capabilities into Documentum, SharePoint, and email archiving systems. From a single console, records managers can define a series, set a retention schedule, and place holds – without having to go to each system to make these updates.

    Dan points out that there are some new competitive dynamics – to be sure. And it is interesting when a system can interact with another system, enforce dispositions and holds, and enable users to search and retrieve content. Oracle WebCenter is providing the infrastructure to build on, and the interfaces to drive user engagement. It’s an interesting time.

    Read the article

  • Today's Links (6/29/2011)

    - by Bob Rhubart
    Event-Driven SOA: Events meet Services | Guido Schmutz
    Oracle ACE Director Guido Schmutz shows you how to achieve extreme loose coupling within a Service-Oriented Architecture by using event-driven interactions.

    Misconceptions About Software Architecture | Sanjeev Kumar
    A concise, to-the-point, and informative article by Sanjeev Kumar.

    Good Leaders Acknowledge What Can't Be Done - Jeffrey Pfeffer - Harvard Business Review
    "None of us likes to admit to bad decisions," says Jeffrey Pfeffer. "But imagine how much harder that is for someone who has been chosen to lead a large organization precisely because he or she is thought to have the power to see the future more clearly and chart a wise course."

    Suboptimal Thinking within Enterprise Architecture | James McGovern
    McGovern says: "We need to remember that enterprises live and thrive beyond just the current person at the helm."

    Boundaryless Information Flow | Richard Veryard
    "If all the boundaries are removed or porous, then the (extended) enterprise or ecosystem becomes like a giant sponge, in which all information permeates the whole," Veryard says. "Some people may think that's a good idea, but it's not what I'd call loose coupling."

    Coming to a City Near You: Oracle Business Analytics Summits | Rob Reynolds
    This series of events includes a Technology and Architecture track.

    New Date for Implementation of Sun Hands-On Course Requirement (Oracle Certification)
    As announced on the Oracle Certification website, Java Architect, Java Developer, Solaris System Administrator and Solaris Security Administrator certification tracks will include a new mandatory course attendance requirement.

    VirtualBox 4.0.10 is now available for download | Bob Netherton
    Netherton shares information on the new release.

    Updated Technical Best Practices whitepaper | Anthony Shorten
    The Technical Best Practices whitepaper has been updated with the latest advice. "New advice includes new installation advice, advanced settings, new security settings and advice for both Oracle WebLogic and IBM WebSphere installations," says Shorten.

    Kscope 11 ADF, AIA and Business Rules | Peter Paul van de Beek
    Whitehorses Solution Architect Peter Paul van de Beek shares his impressions of KScope11 presentations by Markus Eisele, Sten Vesterli, and Edwin Biemond.

    Amazon AWS for the learning experience | Andrej Koelewijn
    "Using AWS changes your expectations how your internal data center should operate," says Koelewijn.

    BPMN is dead, long live BPEL! (SOA Partner Community Blog)
    Jürgen Kress shares information -- including a long list of speakers -- for the SOA & BPM Integration Days 2011 conference, October 12th & 13th 2011 in Düsseldorf.

    InfoQ: HTML5 and the Dawn of Rich Mobile Web Applications
    James Pearce introduces cross-platform web apps development using HTML5 and web frameworks, such as jQTouch, jQuery Mobile, Sencha Touch, PhoneGap, outlining what makes a good framework.

    InfoQ: Interview and Book Excerpt: CMMI for Development
    "Frameworks like TOGAF are used to define an architecture that aligns IT assets and resources to support key business needs and processes of key stakeholders," says SEI's Mike Konrad. "But the individual application systems, capabilities, services, networks, and other IT assets and infrastructure still need to be acquired, developed, or sustained."

    InfoQ: Architecting a Cloud-Scale Identity Fabric | Eric Olden
    "The most cited reason for not moving to the cloud is concern about security," says Olden. "In particular, managing user identity and access in the cloud is a tough problem to solve and a big security concern for organizations."

    Read the article

  • Innovative SPARC: Lighting a Fire Under Oracle's New Hardware Business

    - by Paulo Folgado
    "There's a certain level of things you can do with commercially available parts," says Oracle Executive Vice President Mike Splain. But, he notes, you can do so much more if you design the parts yourself. Mike Splain,EVP, OracleYou can, for example, design cryptographic accelerators into your microprocessors so customers can run their networks fully encrypted if they choose.Of course, it helps if you've already built multiple processing "cores" into those chips so they can handle all that encrypting and decrypting while still getting their other work done.System on a ChipAs the leader of Oracle Microelectronics, Mike knows how implementing clever innovations in silicon can give systems a real competitive advantage.The SPARC microprocessors that his team designed at Sun pioneered the concept of multiple cores several years ago, and the UltraSPARC T2 processor--the industry's first "system on a chip"--packs up to eight cores per chip, each running as many as eight threads at once. That's the most cores and threads of any general-purpose processor. Looking back, Mike points out that the real value of large enterprise-class servers was their ability to run a lot of very large applications in parallel."The beauty of our CMT [chip multi-threading] machines is you can get that same kind of parallel-processing capability at a much lower cost and in a much smaller footprint," he says.The Whole StackWhat has Mike excited these days is that suddenly the opportunity to innovate is much bigger as part of Oracle."In my group, we used to look up the software stack and say, 'We can do any innovation we want, provided the only thing we have to change is what's in the Solaris operating system'--or maybe Java," he says. "If we wanted to change things beyond that, we'd have to go outside the walls of Sun and we'd have to convince the vendors: 'You have to align with us, you have to test with us, you have to build for us, and then you'll reap the benefits.' Now we get access to the entire stack. We can look all the way through the stack and say, 'Okay, what would make the database go faster? What would make the middleware go faster?'"Changing the WorldMike and his microelectronics team also like the fact that Oracle is not just any software company. We're #1 in database, middleware, business intelligence, and more."We're like all the other engineers from Sun; we believe we can change the world, if we can just figure out how to get people to pay attention to us," he says. "Now there's a mechanism at Oracle--much more so than we ever had at Sun."He notes, too, that every innovation in SPARC has involved some combination of hardware and softwareoptimization."Take our cryptography framework, for example. Sure, we can accelerate rapidly, but the Solaris OS has to provide the right set of interfaces that applications can tap into," Mike says. "Same thing with our multicore architecture. We have to have software that can utilize all those threads and run in parallel." His engineers, he points out, have never been interested in producing chips that sell as mere components."Our chips are always designed to go into systems and be combined with various pieces of software," he says. "Our job is to enable the creation of systems."

    Read the article

  • Regular Expressions Cookbook Is in The Money—Win a Copy

    - by Jan Goyvaerts
    You may have heard some people say that most book authors never get any royalties. That's not true, because most authors get an advance royalty that is paid before the book is published. That's the author's main incentive for writing the book, at least as far as money is concerned. (If money is your main concern, don't write books.) What is true is that most authors never see any money beyond the advance royalty. Royalty rates are very low. A 10% royalty of the publisher's price is considered normal. The publisher's price is usually 45% of the retail price. So if you pay full price in a bookstore, the author gets 4.5% of your money. If there's more than one author, they split the royalty. It doesn't take a math degree to figure out that a book needs to sell quite a few copies for the royalty to add up to a meaningful amount of money.

    But Steven and I must have done something right. Regular Expressions Cookbook is in the money. My royalty statement for the 3rd quarter of 2009, which is the 2nd quarter that the book was on the market, came with a check. I actually received it last month but didn't get around to blogging about it. The amount of the check is insignificant. The point is that the balance is no longer negative. I'm taking this opportunity to pat myself and my co-author on the back.

    To celebrate the occasion O'Reilly has offered to sponsor a give-away of five (5) copies of Regular Expressions Cookbook. These are the rules of the game:

    You must post a comment to this blog article including your actual name and actual email address. Names are published, email addresses are not.
    Comments are moderated by myself (Jan Goyvaerts). If I consider a comment to be offensive or spam it will not be published and not be eligible for any prize.
    If you don't know what to say in the comment, just wish me a happy 100000nd birthday, so I don't have to feel so bad about entering the 6-bit era.
    Each person commenting has only one chance to win, regardless of the number of comments posted.
    O'Reilly will be provided with the names and email addresses of the winners (and those email addresses only) in order to arrange delivery.
    Each winner can choose to receive a printed copy or ebook (DRM-free PDF). If you choose the printed book, O'Reilly pays for shipping to anywhere in the world but not for any duties or taxes your country may impose on books imported from the USA. If you choose the ebook, you'll need to create an O'Reilly account that is then granted access to the PDF download. You can make your choice after you've won, so it doesn't influence your chance of winning.
    Contest ends 28 February 2010, GMT+7 (Thai time).

    Chosen by five calls to Random(78)+1 in Delphi 2010, the winners are:

    48: Xiaozu
    45: David Chisholm
    19: Miquel Burns
    33: Aaron Rice
    17: David Laing

    Thanks to everybody who participated. The winners have been notified by email on how to collect their prize.
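    For a quick worked example of the royalty arithmetic above (10% of a publisher's price that is 45% of retail, split between two co-authors), here is a small sketch; the retail price and copy count are made-up figures purely for illustration.

        def author_earnings(retail_price, copies_sold, royalty_rate=0.10,
                            publisher_share=0.45, num_authors=2):
            """Royalty per the post: 10% of the publisher's price, which is
            45% of retail (about 4.5% of retail), split between co-authors."""
            per_copy = retail_price * publisher_share * royalty_rate
            return per_copy * copies_sold / num_authors

        # Hypothetical figures: a $39.99 retail price and 5,000 copies sold.
        print(f"Each co-author earns about ${author_earnings(39.99, 5000):,.2f}")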

    Read the article

  • Dude, what’s up with POP Forums vNext?

    - by Jeff
    Yeah, it has been a while. I posted v9.2 back in January, about five months ago. That's a real change from the release pace I had there for a while. Let me explain what's going on.

    First off, in the interim, I re-launched CoasterBuzz, which required a lot of my attention for about two of those months. That's a good thing though, because that site is just about the best test bed I could ask for. The other thing is that I committed to make the next version use ASP.NET MVC 4, which is now at the RC stage. I didn't think much about when they'd hit their RTW point, but RC is good enough for me. To that end, there is enough change in the next version that I recently decided to make it a major version upgrade, and finish up the loose ends and science projects to make it whole. Here's what's in store…

    Mobile views: I sat on this for a long time. Originally, I was going to use jQuery Mobile, and waited and waited for a new release, but in the end, decided against using it. Sometimes buttons would unexplainably not work, I felt like I was fighting it at times, and the CSS just felt too heavy. I rolled my own mobile sugar at a fraction of the size, and I think you'll find it easy to modify. And it's Metro-y, of course!

    Re-do of background services: A number of things run in the background, and I did quite a bit of "reimagining" of that code. It's the weirdness of running services in a Web site context, because so many folks can't run a bona fide service on their host's box. The biggest change here is that these services no longer start up by default. You'll need to call a new method from global.asax called PopForumsActivation.StartServices(). This is also a precursor to running the app in a Web farm (new data layer and caching is the second part of that). I learned about this the hard way when I had three apps using the forum library code but only one was actually the forum. The services were all running three times as often, with race conditions and hits on the same data. That was particularly bad for e-mail.

    CSS clean up: It's still not ideal, but it's getting better. That's one of those things that comes with integrating to a real site… you discover all of the dumb things you did. The mobile CSS is particularly easier to live with.

    Bug fixes: There are a whole lot of them. Most were minor, but it's feeling pretty solid now.

    So that's where I am. I'm going to call it v10.0, and I'm going to really put forth some effort toward finishing the mobile experience and getting through the remaining bugs. The roadmap beyond that will likely not be feature oriented, but rather work on some other things, like making it run in Azure, perhaps using SQL CE, a better install experience, etc. As usual, I'll post the latest here. Stay tuned!

    Read the article

  • Guest (and occasional co-host) on Jesse Liberty's Yet Another Podcast

    - by Jon Galloway
    I was a recent guest on Jesse Liberty's Yet Another Podcast talking about the latest Visual Studio, ASP.NET and Azure releases. Download / Listen: Yet Another Podcast #75–Jon Galloway on ASP.NET/ MVC/ Azure Co-hosted shows: Jesse's been inviting me to co-host shows and I told him I'd show up when I was available. It's a nice change to be a drive-by co-host on a show (compared with the work that goes into organizing / editing / typing show notes for Herding Code shows). My main focus is on Herding Code, but it's nice to pop in and talk to Jesse's excellent guests when it works out. Some shows I've co-hosted over the past year: Yet Another Podcast #76–Glenn Block on Node.js & Technology in China Yet Another Podcast  #73 - Adam Kinney on developing for Windows 8 with HTML5 Yet Another Podcast #64 - John Papa & Javascript Yet Another Podcast #60 - Steve Sanderson and John Papa on Knockout.js Yet Another Podcast #54–Damian Edwards on ASP.NET Yet Another Podcast #53–Scott Hanselman on Blogging Yet Another Podcast #52–Peter Torr on Windows Phone Multitasking Yet Another Podcast #51–Shawn Wildermuth: //build, Xaml Programming & Beyond And some more on the way that haven't been released yet. Some of these I'm pretty quiet, on others I get wacky and hassle the guests because, hey, not my podcast so not my problem. Show notes from the ASP.NET / MVC / Azure show: What was just released Visual Studio 2012 Web Developer features ASP.NET 4.5 Web Forms Strongly Typed data controls Data access via command methods Similar Binding syntax to ASP.NET MVC Some context: Damian Edwards and WebFormsMVP Two questions from Jesse: Q: Are you making this harder or more complicated for Web Forms developers? Short answer: Nothing's removed, it's just a new option History of SqlDataSource, ObjectDataSource Q: If I'm using some MVC patterns, why not just move to MVC? Short answer: This works really well in hybrid applications, doesn't require a rewrite Allows sharing models, validation, other code between Web Forms and MVC ASP.NET MVC Adaptive Rendering (oh, also, this is in Web Forms 4.5 as well) Display Modes Mobile project template using jQuery Mobile OAuth login to allow Twitter, Google, Facebook, etc. login Jon (and friends') MVC 4 book on the way: Professional ASP.NET MVC 4 Windows 8 development Jesse and Jon announce they're working on a new book: Pro Windows 8 Development with XAML and C# Jon and Jesse agree that it's nice to be able to write Windows 8 applications using the same skills they picked up for Silverlight, WPF, and Windows Phone development. Compare / contrast ASP.NET MVC and Windows 8 development Q: Does ASP.NET and HTML5 development overlap? Jon thinks they overlap in the MVC world because you're writing HTML views without controls Jon describes how his web development career moved from a preoccupation with server code to a focus on user interaction, which occurs in the browser Jon mentions his NDC Oslo presentation on Learning To Love HTML as Beautiful Code Q: How do you apply C# / XAML or HTML5 skills to Windows 8 development? Q: If I'm a XAML programmer, what's the learning curve on getting up to speed on ASP.NET MVC? Jon describes the difference in application lifecycle and state management Jon says it's nice that web development is really interactive compared to application development Q: Can you learn MVC by reading a book? Or is it a lot bigger than that? What is Azure, and why would I use it? 
Jon describes the traditional Azure platform model and how Azure Web Sites fits in Q: Why wouldn't Jesse host his blog on Azure Web Sites? Domain names on Azure Web Sites File hosting options Q: Is Azure just another host? How is it different from any of the other shared hosting options? A: Azure gives you the ability to scale up or down whenever you want A: Other services are available if or when you want them

    Read the article

  • A debugging experience with "highly compatible" ASP.NET 4.5

    - by Jeff
    I have to admit that I will pretty much upgrade software for no reason other than being on the latest version. I won't do it if it's super expensive (Adobe gets money from me about once every three or four years at best), but particularly with frameworks and stuff generally available as part of my MSDN subscription, I'll be bleeding edge. CoasterBuzz was running on the MVC 4 framework pretty much as soon as they did a "go live" license for it. I didn't really jump in head-first with Windows 8 and Visual Studio 2012, in part because I just wasn't interested in doing the reinstalls for each new version. Turns out there weren't that many revisions anyway. But when the final versions were released a week and a half ago, I jumped in. I saw on one of the Microsoft sites that .Net 4.5 was a "highly compatible in-place update" to the framework. Good enough for me. I was obviously running it by default in Windows 8, and installed it on my production server. I suppose it's "highly compatible," except when it isn't. Three of my sites are running with various flavors of the MVC version of POP Forums. All of them stopped working under ASP.NET 4.5. It was not immediately obvious what the problem might be beyond an exception indicating that there were no repository classes registered with Ninject, which I use for dependency injection in the forums. This was made all the more weird by the fact that it ran fine locally in the dev Web host. My first instinct was to spin up a Windows Server VM on my local box and put the remote debugger on it. (Side note: running multiple VM's on a Retina MacBook Pro with 16 gigs of RAM is pretty much the most awesome thing ever. I can't believe this computer is for real, and not a 50-pound tower under my desk.) What might have been going on in IIS that doesn't happen in Visual Studio? In the debugging process, I realized that I might be looking in the wrong place. POP Forums creates a Ninject container using a method called from a PreApplicationStartMethod attribute, and at that time registers a module (what Ninject uses to map interfaces to implementations) that maps all of the core dependencies. It also creates an instance of an HttpModule that originally hosted the "services" (search indexing, mailer, etc.), but now just records errors. That's all well and good, but the actual repository mapping, where data is actually read or persisted, happens in Application_Start() in global.asax. The idea there is that you can swap out the SqlSingleWebServer repos for something tuned for multiple servers, Oracle or something else. Of course, if I used something like StructureMap, which does convention-based mapping for dependency injection (a class implementing ISettingsRepository called SettingsRepository is automagically mapped), I wouldn't have to worry about it. In any case, the HttpModule, being instantiated before Application_Start() gets to run, would throw because there was no repo mapped where it could get settings from the database. This makes total sense. The fix is sort of a hack, where I don't setup the innards of the HttpModule until a call to its BeginRequest is made. I say it's a hack, because its primary function, logging exceptions, won't work until the app has warmed up. Still, this brings up an interesting question about the race condition, and what changed in 4.5 when it's running in IIS. 
In ASP.NET 4, it would appear that the code called via the PreApplicationStartMethod attribute was either failing silently and running again later, or it was only getting to that code after Application_Start was called. In any case, weird thing. The real pain point I'm experiencing now is a bug in MVC 4 that is extremely serious because it leaves the mobile/alternate view functionality badly broken.
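For anyone curious what that hack looks like in practice, here is a rough sketch of the deferred-initialization pattern described above: the HttpModule wires up its events immediately but does not resolve anything from the container until the first BeginRequest, by which point Application_Start has registered the repositories. The names below (ErrorLogModule, IErrorLog, ServiceLocator) are placeholders invented for the sketch, not the actual POP Forums types.

    // Hypothetical sketch: defer container access until the first request.
    // Type and member names are illustrative placeholders, not the real POP Forums code.
    public interface IErrorLog { void Log(System.Exception exception); }

    public static class ServiceLocator
    {
        // Stand-in for however the app exposes its Ninject kernel.
        public static T Resolve<T>() { throw new System.NotImplementedException(); }
    }

    public class ErrorLogModule : System.Web.IHttpModule
    {
        private IErrorLog _errorLog; // resolved lazily instead of in the constructor

        public void Init(System.Web.HttpApplication context)
        {
            context.BeginRequest += (sender, e) =>
            {
                // Safe to touch the container now: Application_Start has already
                // mapped the repositories by the time a request arrives.
                if (_errorLog == null)
                    _errorLog = ServiceLocator.Resolve<IErrorLog>();
            };

            context.Error += (sender, e) =>
            {
                // Until the first request has warmed things up this stays null,
                // so very early errors go unlogged -- the trade-off noted in the post.
                if (_errorLog != null)
                    _errorLog.Log(((System.Web.HttpApplication)sender).Server.GetLastError());
            };
        }

        public void Dispose() { }
    }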

    Read the article

  • SQLSaturday 33 Observations

    - by Geoff N. Hiten
    Along with a lot of my colleagues, I went to SQLSaturday #33 in Charlotte this last weekend.  Overall a really good event, especially for a first-time organizer.  There is some controversy over certain events where my name got mentioned so I thought I would clear the air. Before I get to the core controversy, let's get the details out of the way.  The Microsoft Offices in Charlotte were an excellent venue for this event.  I really appreciated the Microsoft employees that helped out by letting us in and out of normally secure areas.  This is definitely above and beyond on their part. Thanks to the organizers (especially Greg and Peter) for the great hospitality they showed to the speakers.  Now for the specifics.  Like most events of this type, there was a raffle at the end for some cool swag.  As a speaker I got raffle tickets just like any other attendee.  The raffle was clearly promoted as "must be present to win".  The problem is that for various reasons, the raffle kicked off immediately after the last speaker finished in the largest room.  That room was across the parking lot from all the other rooms for the event.  I happened to have one of the last sessions of the day, and not in the main room.  I also ran long since the audience was very interactive and there were a lot of follow-up questions.  (BTW, thanks to everyone who came and stayed for my session.  Sorry it cost you the chance to win too.)  My name was drawn for a very nice piece of swag (iPod Touch if you insist).  Since I wasn't there, I didn't win. Several folks mentioned I was still speaking and was "here" (as in at the event) just not "here in the room". Yes, I was mad when I found out about it. I think that was handled poorly.  I personally lost out as did my audience (dunno if anyone specific lost anything, but it is the idea that counts).  It was a mistake. Mistakes happen.  Nobody acted maliciously.  Heck, the guys running the event who made the decision are my friends and remain so.  I got over my mad.  We talked about this privately and we are all OK with what happened.  I am not going to let a gadget get in the way of a couple of good friendships. I think the mistake was mostly due to a lack of unity between the venue buildings.  Pam Shaw had a similar challenge in Tampa a few weeks ago, including a speaker who ran long on the last session (not me that time).  She had a couple of teenage volunteers to act as gofers/runners.  They counted heads in sessions, pointed people to last-minute room and session changes, and generally helped connect the organizers to what was actually happening.  Note that this was not Pam's first SQLSaturday event.  She knew, but the knowledge had not been institutionalized.  We (the SQL community in general and SQLSaturday organizers in particular) now know how essential gofers are to success. I know I spent most of this post focusing on the controversy, but I wanted to clear everything up.  I don't want to let a minor mistake, made in good faith, overshadow what was a tremendously good event for the community. As for the iPod Touch, someone in the SQL community is enjoying it, so it is not a total loss.  And if losing out on it is the price I pay so we can learn this, then that is what a community leader does.  Consider it a gift.  Besides, I really wanted a Zune 120 :)

    Read the article

  • concurrency::accelerator_view

    - by Daniel Moth
    Overview We saw previously that accelerator represents a target for our C++ AMP computation or memory allocation and that there is a notion of a default accelerator. We ended that post by introducing how one can obtain accelerator_view objects from an accelerator object through the accelerator class's default_view property and the create_view method. The accelerator_view objects can be thought of as handles to an accelerator. You can also construct an accelerator_view given another accelerator_view (through the copy constructor or the assignment operator overload). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator_view objects between them to determine if they refer to the same underlying accelerator. We'll see later that when we use concurrency::array objects, the allocation of data takes place on an accelerator at array construction time, so there is a constructor overload that accepts an accelerator_view object. We'll also see later that a new concurrency::parallel_for_each function overload can take an accelerator_view object, so it knows on what target to execute the computation (represented by a lambda that the parallel_for_each also accepts). Beyond normal usage, accelerator_view is a quality of service concept that offers isolation to multiple "consumers" of an accelerator. If in your code you are accessing the accelerator from multiple threads (or, in general, from different parts of your app), then you'll want to create separate accelerator_view objects for each thread. flush, wait, and queuing_mode When you create an accelerator_view via the create_view method of the accelerator, you pass in an option of immediate or deferred, which are the two members of the queuing_mode enum. At any point you can access this value from the queuing_mode property of the accelerator_view. When the queuing_mode value is immediate (which is the default), any commands sent to the device such as kernel invocations and data transfers (e.g. parallel_for_each and copy, as we'll see in future posts), will get submitted as soon as the runtime sees fit (that is the definition of immediate). When the value of queuing_mode is deferred, the commands will be batched up. To send all buffered commands to the device for execution, there is a non-blocking flush method that you can call. If you wish to block until all the commands have been sent, there is a wait method you can call. Deferring is a more advanced scenario aimed at performance gains when you are submitting many device commands and you want to avoid the tiny overhead of flushing/submitting each command separately. Querying information Just like accelerator, accelerator_view exposes the is_debug and version properties. In fact, you can always access the accelerator object from the accelerator property on the accelerator_view class to access the accelerator interface we looked at previously. Interop with D3D (aka DX) In a later post I'll show an example of an app that uses C++ AMP to compute data that is used in pixel shaders. In those scenarios, you can benefit by integrating C++ AMP into your graphics pipeline and one of the building blocks for that is being able to use the same device context from both the compute kernel and the other shaders. You can do that by going from accelerator_view to device context (and vice versa), through part of our interop API in amp.h: *get_device, create_accelerator_view. More on those in a later post. 
Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Java Developer Days India Trip Report

    - by reza_rahman
    You are probably aware of Oracle's decision to discontinue the relatively resource intensive regional JavaOnes in favor of more Java Developer Days, virtual events and deeper involvement with independent conferences. In comparison to the regional JavaOnes, Java Developer Days are smaller, shorter (typically one full day), more focused (mostly Oracle speakers/topics) and more local (targeting cities). For those who have been around the Java ecosystem for a few years, they are basically the current incarnation of the highly popular and developer centric Sun Tech Days. October 21st through October 25th I spoke at Java Developer Days India. This was basically three separate but identical events in the cities of Pune (October 21st), Chennai (October 24th) and Bangalore (October 25th). For those with some familiarity with India, other than Hyderabad these cities are India's IT powerhouses. The events were basically focused on Java EE. I delivered five of the sessions (yes, you read that right), while my friend NetBeans Group Product Manager Ashwin Rao delivered three talks. Jagadish Ramu from the GlassFish team India helped me out in Bangalore by delivering two sessions. It was also a pleasure to introduce my co-contributor to the Cargo Tracker Java EE Blue Prints project Vijay Nair at Bangalore during the opening talk. I thought it was a great dynamic between Ashwin and I flipping between talking about the new features and demoing live code in NetBeans. The following were my sessions (source PDF and abstracts posted as usual on my SlideShare account): JavaEE.Next(): Java EE 7, 8, and Beyond Building Java HTML5/WebSocket Applications with JSR 356 What’s New in Java Message Service 2 JAX-RS 2: New and Noteworthy in the RESTful Web Services API Using NoSQL with JPA, EclipseLink and Java EE The event went well and was packed in all three cities. The Q&A was great and Indian developers were particularly generous with kind words :-). It seemed the event and our presence was appreciated in the truest sense which I must say is a rarity. The events were exhausting but very rewarding at the same time. As hectic as the three city trip was I tried to see at least some of the major sights (mostly at night) since this was my very first time to India. I think the slideshow below is a good representation of the riddle wrapped up in an enigma that is India (and the rest of the Indian sub-continent for that matter): Ironically enough what struck me the most during this trip is the woman pictured below - Shushma. My chauffeur, tour guide and friend for a day, she fluidly navigated the madness that is Mumbai traffic with skills that would make Evel Knievel blush while simultaneously pointing out sights and prompting me to take pictures (Mumbai was my stopover and gateway to/from India). In some ways she is probably the most potent symbol of the new India. When we parted ways I told her she should take solace in the fact she has won mostly without a fight a potentially hazardous battle her sisters across the Arabian sea are still fighting. I'm not sure she entirely understood the significance of what I told her. I hope that she did. I also had occasion to take a pretty cool local bus ride from Chennai to Bangalore instead of yet another boring flight. All in all I really enjoyed the trip to India and hope to return again soon. Jai Hind :-)!

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop, Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers is not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). 
Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers. Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays.
I found such an occasion in code 10 where I split the loop do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do into two disjoint loops do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) end do end do do i = 1, n do j = 1, m C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do Conclusions Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in questions have not studied the books or manual available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. 
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding. About the Author John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • How to Set Up Your Enterprise Social Organization

    - by Mike Stiles
    The rush for business organizations to establish, grow, and adopt social was driven out of necessity and inevitability. The result, however, was a sudden, booming social presence creating touch points with customers, partners and influencers, but without any corporate social organization or structure in place to effectively manage it. Even today, many business leaders remain uncertain as to how to corral this social media thing so that it makes sense for their enterprise. Imagine their panic when they hear one of the most beneficial approaches to corporate use of social involves giving up at least some hierarchical control and empowering employees to publicly engage customers. And beyond that, they should also be empowered, regardless of their corporate status, to engage and collaborate internally, spurring “off the grid” innovation. An HBR blog points out that traditionally, enterprise organizations function from the top down, and employees work end-to-end, structured around business processes. But the social enterprise opens up structures that up to now have not exactly been embraced by turf-protecting executives and managers. The blog asks, “What if leaders could create a future where customers, associates and suppliers are no longer seen as objects in the system but as valued sources of innovation, ideas and energy?” What if indeed? The social enterprise activates internal resources without the usual obsession with position. It is the dawn of mass collaboration. That does not, however, mean this mass collaboration has to lead to uncontrolled chaos. In an extended interview with Oracle, Altimeter Group analyst Jeremiah Owyang and Oracle SVP Reggie Bradford paint a complete picture of today’s social enterprise, including internal organizational structures Altimeter Group has seen emerge. One sign of a mature social enterprise is the establishing of a social Center of Excellence (CoE), which serves as a hub for high-level social strategy, training and education, research, measurement and accountability, and vendor selection. This CoE is led by a corporate Social Strategist, most likely from a Marketing or Corporate Communications background. Reporting to them are the Community Managers, the front lines of customer interaction and engagement; business unit liaisons that coordinate the enterprise; and social media campaign/product managers, social analysts, and developers. With content rising as the defining factor for social success, Altimeter also sees a Content Strategist position emerging. Across the enterprise, Altimeter has seen 5 organizational patterns. Watching the video will give you the pros and cons of each. Decentralized - Anyone can do anything at any time on any social channel. Centralized – One central groups controls all social communication for the company. Hub and Spoke – A centralized group, but business units can operate their own social under the hub’s guidance and execution. Most enterprises are using this model. Dandelion – Each business unit develops their own social strategy & staff, has its own ability to deploy, and its own ability to engage under the central policies of the CoE. Honeycomb – Every employee can do social, but as opposed to the decentralized model, it’s coordinated and monitored on one platform. The average enterprise has a whopping 178 social accounts, nearly ¼ of which are usually semi-idle and need to be scrapped. The last thing any C-suite needs is to cope with fragmented technologies, solutions and platforms. 
It’s neither scalable nor strategic. The prepared, effective social enterprise has a technology partner that can quickly and holistically integrate emerging platforms and technologies, such that whatever internal social command structure you’ve set up can continue efficiently executing strategy without skipping a beat. @mikestiles

    Read the article

  • Podcast Show Notes: Collaborate 10 Wrap-Up - Part 1

    - by Bob Rhubart
    OK, I know last week I promised you a program featuring Oracle ACE Directors Mike van Alst (IT-Eye) and Jordan Braunstein (TUSC) and The Definitive Guide to SOA: Oracle Service Bus author Jeff Davies. But things happen. In this case, what happened was Collaborate 10 in Las Vegas. Prior to the event I asked Oracle ACE Director and OAUG board member Floyd Teter to see if he could round up a couple of people at the event for an impromtu interview over Skype (I was here in Cleveland) to get their impressions of the event. Listen to Part 1 Floyd, armed with his brand new iPad, went above and beyond the call of duty. At the appointed hour, which turned out to be about hour after the close of Collaborate 10,  Floyd had gathered nine other people to join him in a meeting room somewhere in the Mandalay Bay Convention Center. Here’s the entire roster: Floyd Teter - Project Manager at Jet Propulsion Lab, OAUG Board Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile Mark Rittman - EMEA Technical Director and Co-Founder, Rittman Mead,  ODTUG Board Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile Chet Justice - OBI Consultant at BI Wizards Blog | Twitter | LinkedIn | Oracle Mix | Oracle ACE Profile Elke Phelps - Oracle Applications DBA at Humana, OAUG SIG Chair Blog | LinkedIn | Oracle Mix | Book | Oracle ACE Profile Paul Jackson - Oracle Applications DBA at Humana Blog | LinkedIn | Oracle Mix | Book Srini Chavali - Enterprise Database & Tools Leader at Cummins, Inc Blog | LinkedIn | Oracle Mix Dave Ferguson – President, Oracle Applications Users Group LinkedIn | OAUG Profile John King - Owner, King Training Resources Website | LinkedIn | Oracle Mix Gavyn Whyte - Project Portfolio Manager at iFactory Consulting Blog | Twitter | LinkedIn | Oracle Mix John Nicholson - Channels & Alliances at Greenlight Technologies Website | LinkedIn Big thanks to Floyd for assembling the panelists and handling the on-scene MC/hosting duties.  Listen to Part 1 On a technical note, this discussion was conducted over Skype, using Floyd’s iPad, placed in the middle of the table.  During the call the audio was fantastic – the iPad did a remarkable job. Sadly, the Technology Gods were not smiling on me that day. The audio set-up that I tested successfully before the call failed to deliver when we first connected – I could hear the folks in Vegas, but they couldn’t hear me. A frantic, last-minute adjustment appeared to have fixed that problem, and the audio in my headphones from both sides of the conversation was loud and clear.  It wasn’t until I listened to the playback that I realized that something was wrong. So the audio for Vegas side of the discussion has about the same fidelity as a cell phone. It’s listenable, but disappointing when compared to what it sounded like during the discussion. Still, this was a one shot deal, and the roster of panelists and the resulting conversation was too good and too much fun to scrap just because of an unfortunate technical glitch.   Part 2 of this Collaborate 10 Wrap-Up will run next week. After that, it’s back on track with the previously scheduled program. So stay tuned: RSS del.icio.us Tags: oracle,otn,collborate 10,c10,oracle ace program,archbeat,arch2arch,oaug,odtug,las vegas Technorati Tags: oracle,otn,collborate 10,c10,oracle ace program,archbeat,arch2arch,oaug,odtug,las vegas

    Read the article

  • Archiving SQLHelp tweets

    - by jamiet
    #SQLHelp is a Twitter hashtag that can be used by any Twitter user to get help from the SQL Server community. I think it's fair to say that in its first year it has proved to be a very useful resource; however, Kendra Little (@kendra_little) made a very salient point yesterday when she tweeted:
    "Is there a way to search the archives of #sqlhelp Trying to remember answer to a question I know I saw a couple months ago"
    http://twitter.com/#!/Kendra_Little/status/15538234184441856
    This highlights an inherent problem with Twitter's search capability: it simply does not reach far enough back in time. I have taken steps to remedy that situation by putting in place two initiatives to archive tweets that contain the #sqlhelp hashtag.
    The Archivist
    The Archivist (http://archivist.visitmix.com/) is a free service that, quite simply, archives a history of tweets containing a given search term by periodically polling Twitter's search service with that term, and then displays a dashboard providing an aggregate view of those tweets: tweet volume over time, top users, top words and so on (see the Archivist FAQ). I have set up an archive on The Archivist for "sqlhelp", which you can view at http://archivist.visitmix.com/jamiet/7. Here is a screenshot of the SQLHelp dashboard 36 minutes after I set it up. There is lots of good information in there, including the fact that Jonathan Kehayias (@SQLSarg) is the most active SQLHelp tweeter (I suspect as an answerer rather than a questioner) and that SSIS has proven to be a rather (ahem) popular subject!!
    Datasift
    The Archivist has its uses, though for our purposes it has a couple of downsides. For starters, you cannot search through an archive (which is what Kendra was after), nor can you export the contents of the archive for offline analysis. For those functions we need something a bit more heavyweight, and for that I present to you Datasift. Datasift is a tool (currently an alpha release) that allows you to search for tweets and deliver them through an object called a Datasift stream. That sounds very similar to normal Twitter search, but it has one distinct advantage that other Twitter search tools do not: Datasift has access to Twitter's Streaming API (aka the Twitter Firehose). In addition it offers a number of other rather nice features:
    - It provides the Datasift API, which allows you to consume the output of a Datasift stream in your tool of choice (bring on my favourite ultimate mashup tool :) )
    - It has a query language, called Filtered Stream Definition Language (FSDL for short)
    - A Datasift stream can consume (and filter) other Datasift streams
    - Datasift can (and does) consume services other than Twitter
    If I describe Datasift as "ETL for tweets" then you may get some sort of idea what it is all about. Just as I did with The Archivist, I have set up a publicly available Datasift stream for "sqlhelp" at http://datasift.net/stream/1581/sqlhelp. Here is the FSDL query that provides the data:
        twitter.text contains "sqlhelp"
    Pretty simple, eh? At the current time it provides little more than a rudimentary dashboard, but as Datasift is currently an alpha release I think this may be worth keeping an eye on.
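    As an aside, the constructs quoted in this post ("contains", "and", and parentheses) could presumably be combined into a single, more specific filter rather than defining one stream on top of another. The one-liner below is a hedged FSDL sketch, not something taken from the post and not tested against the alpha service:
        twitter.text contains "sqlhelp" and (twitter.text contains "ssis")
    The post takes the stream-chaining route instead, as described below, precisely because the point being demonstrated there is that one Datasift stream can consume another.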
    The real value, though, is the ability to consume the output of a stream via Datasift's RESTful API. Observe:
        http://api.datasift.net/stream.xml?stream_identifier=c7015255f07e982afdeebdf1ae6e3c0d&username=jamiet&api_key=XXXXXXX
    (Note that an api_key is required during the alpha period so, given that I'm not supplying my api_key, this URI will not work for you.)
    Just to prove that a Datasift stream can indeed consume data from another stream, I have set up a second stream that further filters the first one for tweets containing "SSIS". That one is at http://datasift.net/stream/1586/ssis-sqlhelp and here is the FSDL query:
        rule "414c9845685ff8d2548999cf3162e897" and (interaction.content contains "ssis")
    When Datasift moves beyond alpha I'll re-assess how useful this is going to be and post a follow-up blog. @Jamiet
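    To make the "ETL for tweets" idea concrete, here is a minimal sketch of pulling that stream URI from code and archiving the raw response for later searching. Treat it purely as an illustration: the endpoint, stream identifier and parameter names are the ones quoted above, while the file layout, error handling and the assumption that a single GET returns a self-contained XML snapshot are mine, and the alpha-era service itself no longer exists.
        # Hedged sketch: fetch one snapshot of a Datasift stream over its REST API
        # and write the raw XML to a timestamped file. The endpoint and parameter
        # names come from the URI quoted in the post; everything else is assumption.
        import datetime
        import urllib.parse
        import urllib.request

        API_ENDPOINT = "http://api.datasift.net/stream.xml"

        def fetch_stream(stream_identifier: str, username: str, api_key: str) -> bytes:
            # Build the query string exactly as in the post's example URI.
            params = urllib.parse.urlencode({
                "stream_identifier": stream_identifier,
                "username": username,
                "api_key": api_key,
            })
            with urllib.request.urlopen(f"{API_ENDPOINT}?{params}", timeout=30) as resp:
                return resp.read()

        def archive_snapshot(xml_payload: bytes, folder: str = ".") -> str:
            # Keep the response verbatim; parsing it would depend on the alpha-era
            # response schema, which isn't documented here, so no element names
            # are assumed.
            stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
            path = f"{folder}/sqlhelp-{stamp}.xml"
            with open(path, "wb") as f:
                f.write(xml_payload)
            return path

        if __name__ == "__main__":
            # The stream identifier and username are the ones shown above;
            # the api_key is a placeholder you would substitute with your own.
            xml = fetch_stream("c7015255f07e982afdeebdf1ae6e3c0d", "jamiet", "YOUR_API_KEY")
            print("archived to", archive_snapshot(xml))
    Pointing a scheduled task at something like this, with the SSIS-filtered stream swapped in where wanted, would give exactly the kind of offline, searchable history that Kendra was asking for.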

    Read the article

< Previous Page | 228 229 230 231 232 233 234 235 236 237 238 239  | Next Page >